KEY POINTS
  • Apple previewed a number of accessibility advancements, including new technology that would let users synthesize their voice for use in person or on the phone.
  • The technology is called Personal Voice and requires a user to record 15 minutes of audio to generate their voice on-device.
  • The new technology is part of a broader suite of accessibility features, including improvements targeted at visually or cognitively impaired users.

Ahead of its June WWDC event, Apple on Tuesday previewed a suite of accessibility features that will be coming "later this year" in its next big iPhone update.

The new "Personal Voice" feature, expected as part of iOS 17, will let iPhones and iPads generate digital reproductions of a user's voice for in-person conversations and on phone, FaceTime and audio calls.