Apple plans to roll out new accessibility features for people with cognitive, vision or hearing issues. One of those features will allow a user to replicate their voice.
Apple says its Personal Voice feature is designed for people who are nonspeaking or at risk of losing the ability to speak. Using machine learning, Apple says the feature will create a "synthesized voice" that sounds like the individual.
How Personal Voice will work
Users will set up Personal Voice by reading text prompts on an iPhone or iPad for 15 minutes.
Apple says on-device machine learning will capture the person's voice and, when prompted, use it to generate new speech.
Users will type what they want to say, and the person on the other end of a phone call will hear it spoken in their synthesized voice.
Privacy concerns
Apple says a user's information, including their voice, will remain private and secure. The company notes that the feature works through on-device machine learning, meaning a person's voice is not sent to an external network; it stays on the user's iPhone or iPad.
Other accessibility features
Another new feature, Point and Speak, is aimed at people with vision issues. In the Magnifier app, a user can point their device at an object, such as a microwave, and the device will read aloud the text on each button.
Apple will also roll out its Assistive Access tool. The company says it streamlines a device's home screen and apps, featuring "high contrast buttons and large text labels" to lighten the cognitive load for users.
"We’re excited to share incredible new features that build on our long history of making technology accessible, so that everyone has the opportunity to create, communicate, and do what they love,” said Apple CEO Tim Cook.
Apple has not announced a release date for the new accessibility features, saying only that they will be available later this year.