Google Photos is reportedly adding new functionality that will allow users to check whether an image has been generated or enhanced by artificial intelligence (AI). According to the report, the photo and video storage and sharing service is getting new resource ID tags that will reveal an image's AI information as well as its digital source type. The Mountain View-based tech giant is likely working on this feature to reduce instances of deepfakes. However, it is unclear how the information will be displayed to users.
Google Photos AI Attribution
Deepfakes have emerged in recent years as a new form of digital manipulation. These are images, videos, audio files or other similar media that are either digitally generated using artificial intelligence or enhanced by various means to spread misinformation or mislead people. For example, actor Amitabh Bachchan recently filed a lawsuit against a company owner for running deepfake video ads in which the actor was seen promoting the company’s products.
According to a report by Android Authority, a new functionality in the Google Photos app will allow users to see if an image in their gallery was created by digital means. The feature was spotted in Google Photos app version 7.3. However, it’s not an active feature, meaning users of the latest version of the app won’t be able to see it yet.
Inside the layout files, the publication found new strings of XML code that point to this development. These are ID resources, which are identifiers assigned to a specific element or resource in the application. One of them reportedly contained the phrase "ai_info", which is believed to refer to information added to an image's metadata. This field would presumably be populated when an image is generated by an AI tool that adheres to a transparency protocol.
Additionally, the tag “digital_source_type” is believed to refer to the name of the AI tool or model that was used to generate or enhance the image. This could include names like Gemini, Midjourney and others.
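The report does not describe the underlying metadata format, but the name "digital_source_type" matches the IPTC photo-metadata property DigitalSourceType, which tools can embed in an image's XMP packet and which has a defined value ("trainedAlgorithmicMedia") for fully AI-generated media. As a minimal sketch of how an app might surface such a marker, the snippet below extracts the property from a sample XMP string; the sample packet and the helper function are illustrative, not Google's actual implementation:

```python
import re

# Illustrative XMP packet of the kind an AI tool might embed in a JPEG.
# "trainedAlgorithmicMedia" is the IPTC DigitalSourceType code for media
# generated entirely by an AI model.
SAMPLE_XMP = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description
      xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
      Iptc4xmpExt:DigitalSourceType="http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>
  </rdf:RDF>
</x:xmpmeta>"""

def digital_source_type(xmp: str):
    """Return the last path segment of the DigitalSourceType URI, or None."""
    match = re.search(r'DigitalSourceType="([^"]+)"', xmp)
    if not match:
        return None
    return match.group(1).rsplit("/", 1)[-1]

print(digital_source_type(SAMPLE_XMP))  # trainedAlgorithmicMedia
```

A gallery app could map a value like this to a human-readable label ("Created with AI") before showing it to the user.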
However, it is currently uncertain how Google intends to display this information. Ideally, it could be added to the Exchangeable Image File Format (EXIF) data embedded in the image, which would make it harder to tamper with. The downside is that users wouldn't see the information at a glance; they would have to open the metadata page. Alternatively, the app could add a badge on top of the image to tag AI images, similar to what Meta did on Instagram.