Your face is quickly becoming a key to the digital world. Computers, phones, and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows that facial recognition software is far from secure.
In a paper presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates. Researchers had the same success tricking software touted by Chinese e-commerce giant Alibaba for use in its “smile-to-pay” feature.
Modern facial recognition software relies on deep neural networks, a flavor of artificial intelligence that learns patterns from millions of pieces of information. When shown millions of faces, the software learns the idea of a face, and how to tell different ones apart.
As the software learns what a face looks like, it leans heavily on certain details—like the shape of the nose and eyebrows. The Carnegie Mellon glasses don’t simply cover those facial features; instead, they are printed with a pattern that the computer perceives as the facial details of another person.
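The core trick can be sketched in a few lines of code. Below is an illustrative toy, not the researchers’ actual attack: a stand-in “classifier” (simple logistic regression over a flattened image) is fooled by nudging only the pixels inside a glasses-shaped mask in the direction that raises the score for a chosen target identity. The weights, mask region, and step size are all made-up assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face classifier: logistic regression over a
# flattened 8x8 image. Everything here (weights, mask, step size) is
# illustrative -- a sketch of the masked-gradient idea, not the CMU
# system or attack code.
W = rng.normal(size=64)

def p_target(x):
    """Probability the toy classifier assigns to the target identity."""
    return 1.0 / (1.0 + np.exp(-W @ x))

# "Glasses" mask: only one band of pixels may change, mimicking how
# the printed frames confine the perturbation to the eyeglass region.
mask = np.zeros(64)
mask[16:32] = 1.0

x = rng.uniform(-1, 1, size=64)   # the attacker's own face
x_adv = x.copy()
for _ in range(200):              # projected gradient ascent
    p = p_target(x_adv)
    grad = p * (1.0 - p) * W      # gradient of p_target w.r.t. pixels
    x_adv += 0.5 * mask * grad    # update only the masked pixels
    x_adv = np.clip(x_adv, -3.0, 3.0)  # keep pixel values in range

print(f"target score before: {p_target(x):.3f}, after: {p_target(x_adv):.3f}")
```

After the loop, the classifier assigns the masked-perturbation image a higher score for the target identity, while every pixel outside the “glasses” region is untouched—which is why the attack survives being worn as a physical accessory.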
In a test where researchers built a state-of-the-art facial recognition system, a white male test subject wearing the glasses was identified as actress Milla Jovovich with an 87.87% success rate. An Asian female wearing the glasses tricked the algorithm into seeing a Middle Eastern man at the same rate. Other notable figures whose faces were impersonated include Carson Daly, Colin Powell, and John Malkovich. Researchers used about 40 images of each person to generate the glasses that let a wearer pass as them.