Two Harvard engineering students used Ray-Ban Meta smart glasses to create an app that can reveal sensitive information about people without their knowledge. The students posted a video demonstration on X (formerly known as Twitter) showcasing the app’s capabilities. Notably, the app is not publicly available; they built it to highlight the dangers of AI-powered wearables whose discreet cameras can capture photos and videos of people.
The app, called I-Xray, uses artificial intelligence (AI) to recognize faces and then uses the processed visual data to doxx individuals. Doxxing, internet slang derived from “dropping dox” (“dox” being an informal shortening of “documents”), is the act of revealing personal information about someone without their consent.
The app was integrated with Ray-Ban Meta smart glasses, but the developers said it would work with any smart glasses fitted with a discreet camera. It uses reverse face search similar to PimEyes and FaceCheck: the technology matches an individual’s face against publicly available images on the internet and surfaces the URLs where those images appear.
A large language model (LLM) is then fed those URLs and automatically generates queries to find the person’s name, occupation, address, and other similar details. The system also draws on publicly available government data, such as voter registration databases, and on an online people-search tool called FastPeopleSearch.
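To make the data flow concrete, the following is a minimal sketch of the three-stage pipeline as the reporting describes it. Every name in it is a hypothetical stand-in rather than the students’ actual code, which was never released: reverse_face_search() represents a PimEyes-style service, extract_profile_with_llm() the LLM extraction step, and lookup_public_records() the public-records enrichment. The stubs deliberately raise NotImplementedError; the sketch only illustrates the structure of the pipeline.

```python
from dataclasses import dataclass, field


@dataclass
class Profile:
    """Structured output the pipeline assembles about one person."""
    name: str | None = None
    occupation: str | None = None
    address: str | None = None
    source_urls: list[str] = field(default_factory=list)


def reverse_face_search(face_image: bytes) -> list[str]:
    """Hypothetical wrapper for a PimEyes-style reverse face search:
    matches a face against public images and returns the page URLs."""
    raise NotImplementedError("stand-in for a commercial reverse face search")


def extract_profile_with_llm(urls: list[str]) -> Profile:
    """Hypothetical LLM step: fetch each URL and prompt a model to pull
    out the name, occupation, and address found alongside the match."""
    raise NotImplementedError("stand-in for an LLM extraction call")


def lookup_public_records(profile: Profile) -> Profile:
    """Hypothetical enrichment against public records such as voter
    registration databases or a FastPeopleSearch-style site."""
    raise NotImplementedError("stand-in for a people-search lookup")


def identify(face_image: bytes) -> Profile:
    # Step 1: face -> URLs of public pages showing the same person
    urls = reverse_face_search(face_image)
    # Step 2: URLs -> structured personal details via the LLM
    profile = extract_profile_with_llm(urls)
    profile.source_urls = urls
    # Step 3: enrich the profile with public-records data
    return lookup_public_records(profile)
```

The point of the structure is the “synergy” the developers describe below: no single stage is new, but chaining a face match into an LLM that reads the matched pages automates what previously required manual searching.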
In a short video demonstration, Harvard students AnhPhu Nguyen and Caine Ardayfio showed the app in operation. They could walk up to strangers with the camera already on, ask for their name, and from there the AI-powered app could retrieve personal information about the individual.
In a Google Doc describing the project, the developers wrote: “This synergy between LLM and reverse face search enables fully automatic and comprehensive data extraction previously not possible with traditional methods alone.”
The students said they had no intention of releasing the app publicly and had developed it only to highlight the risks of AI-powered wearables that can discreetly record people. However, nothing prevents bad actors from building a similar application using the same methodology.