- Apple previewed a number of accessibility advancements, including new technology that will let users synthesize their own voice for use in person or on calls.
- The technology, called Personal Voice, requires a user to record 15 minutes of audio to generate a synthetic version of their voice on device.
- The new technology is part of a broader suite of accessibility features, including improvements targeted at visually or cognitively impaired users.
The new "Personal Voice" feature, expected as part of iOS 17, will let iPhones and iPads generate digital reproductions of a user's voice for in-person conversations and on phone, FaceTime and audio calls.
Apple said Personal Voice will create a synthesized voice that sounds like a user and can be used to connect with family and friends. The feature is aimed at users who have conditions that can affect their speaking ability over time.
Users can create their Personal Voice by recording 15 minutes of audio on their device. Apple said the feature will use local machine-learning technology to maximize privacy.
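For developers, speaking with a voice created this way looks much like speaking with any other system voice. The snippet below is a minimal Swift sketch written against the AVSpeechSynthesizer additions in iOS 17's AVFoundation; it asks for access to the user's Personal Voice and speaks a line with it. The 15-minute recording and on-device training happen in Settings, not through this code.

```swift
import AVFoundation

/// Ask for access to the user's Personal Voice and, if granted, speak with it.
/// Illustrative only: voice creation itself has no public API and is done in Settings.
func speakWithPersonalVoice(_ text: String, using synthesizer: AVSpeechSynthesizer) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // Personal voices appear alongside the system voices, flagged by a trait.
        guard let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) else { return }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice
        synthesizer.speak(utterance)
    }
}
```

Note that the synthesizer is passed in rather than created inside the function, since it must stay alive for as long as the utterance is playing.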
It's part of a larger suite of accessibility improvements for iOS devices, including a new Assistive Access feature that helps users with cognitive disabilities, and their caregivers, use iOS devices more easily.
Apple also announced another machine learning-backed technology, augmenting its existing Magnifier feature with a new Point and Speak capability in Detection Mode. The new functionality will combine camera input, LiDAR input and on-device machine learning to read aloud the text a user points at, such as the labels on a microwave keypad.
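The core recognize-then-speak loop can already be approximated with public frameworks. The sketch below uses Vision's text recognition and AVFoundation's speech synthesis to read out any text found in a camera frame; it is not Apple's implementation, which additionally fuses LiDAR depth data and tracks the user's fingertip to decide which text to announce.

```swift
import Vision
import AVFoundation

/// Recognize any text visible in a camera frame and read it aloud.
/// Illustrates the general recognize-then-speak flow, not Apple's Detection Mode.
func speakText(in image: CGImage, synthesizer: AVSpeechSynthesizer) throws {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Join the best candidate string from each detected text region.
        let text = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: ", ")
        guard !text.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```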
Apple typically launches software at WWDC in beta, meaning the features are first available to developers and to members of the public who opt in. Those features generally remain in beta through the summer and launch to the public in the fall, when new iPhones hit the market.