Apple is reportedly working on two artificial intelligence (AI)-powered features that could be added to more apps in iOS 18. The Cupertino-based tech giant is said to be preparing real-time audio transcription and summarization features that could power its Voice Memos and Notes apps. These features could also appear in iPadOS 18 and macOS 15. Both the features and the next generation of Apple's operating systems are expected to be unveiled at the company's Worldwide Developers Conference (WWDC), scheduled for June 10.
According to a report by AppleInsider, the iPhone maker is using AI to build a real-time audio transcription feature that will let users read along with what is being said. Citing people familiar with the matter, the report notes that users will also be able to read, edit, copy and share these transcripts after recording. The tech giant is additionally said to be introducing a summarization feature. These capabilities will reportedly be integrated into the Voice Memos app, the Notes app, and more.
Pixel smartphones already ship with a recording app that offers real-time transcriptions and summaries of conversations. One of the smartphone line's more popular features, it is used to record meetings and important lectures or to take notes on the go. With Apple's foray into artificial intelligence, the Voice Memos app could be similarly revamped.
According to the report, transcriptions will be displayed in the center of the app window, which currently shows a larger view of the recorded audio. A transcription button shaped like a speech bubble will also be added; tapping it will display the transcript for a specific recording.
The Notes app is also expected to get this feature, along with a summary option that will provide a brief description of the conversation, followed by key points and actions in an easy-to-read format. These features are likewise said to be coming to iPadOS 18 and macOS 15.
Apple is also rumored to be using AI to significantly improve Siri's capabilities. According to a recent report, the company's virtual assistant will gain more natural, conversational speech, contextual language understanding, and the ability to understand and execute complex, multi-step commands.