Security researchers continue to find ways around biometric-based security features, including a new attack that can defeat face authentication systems.
You might be careful about posting photos of yourself online, either refraining from it or setting the images to private, but your “friends” might post pictures of you anyway. It wouldn’t matter if those pictures were low quality, or if as few as three of them were publicly available: researchers from the University of North Carolina have developed a virtual reality-based attack that can reproduce your face well enough to trick face authentication systems.
In “Virtual U: Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos” (pdf), the researchers called “the ability of an adversary to recover an individual’s facial characteristics through online photos” an “immediate and very serious threat.” The team devised an attack that can bypass “existing defenses of liveness detection and motion consistency.”
They argued that “VR-based spoofing attacks constitute a fundamentally new class of attacks that point to serious weaknesses in camera-based authentication systems: Unless they incorporate other sources of verifiable data, systems relying on color image data and camera motion are prone to attacks via virtual realism.”
They went about it the way an attacker or a stalker might, poring over social media and running image searches. One of the 20 test participants had uploaded only two pictures to social media in the past three years; on average, though, the researchers were able to find between three and 27 photos per person posted online. They then created 3D models of those faces and patched in any missing areas or textures. Additional tweaks included correcting gaze and adding facial animations such as frowning and smiling.
The researchers explained, “In the VR system, the synthetic 3D face of the user is displayed on the screen of the VR device, and as the device rotates and translates in the real world, the 3D face moves accordingly. To an observing face authentication system, the depth and motion cues of the display exactly match what would be expected for a human face.”
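Why those depth and motion cues are so convincing can be illustrated with a small geometric sketch (my own illustration under simple pinhole-camera assumptions, not code from the paper): when a 3D model is translated in front of a camera, nearer points shift farther across the screen than distant ones. That motion parallax is exactly what a flat printout or replayed video cannot produce, and what a VR-driven 3D face reproduces for free.

```python
import numpy as np

def project(points, focal=800.0):
    """Pinhole projection of 3D points (camera at origin, looking down +z)."""
    return focal * points[:, :2] / points[:, 2:3]

# Two landmarks on a hypothetical 3D face model, in cm from the camera:
face = np.array([[0.0, 0.0, 50.0],    # nose tip (nearer)
                 [8.0, 0.0, 58.0]])   # ear (farther)

# Slide the model 5 cm sideways, as when the VR device is translated.
moved = face + np.array([5.0, 0.0, 0.0])

shift = np.abs(project(moved) - project(face))[:, 0]
print(shift)  # the nose shifts by more pixels than the ear: depth-consistent motion
```

A flat photo held in front of the camera puts both “landmarks” at the same depth, so they shift identically; the mismatch is what motion-consistency checks look for, and what the 3D render defeats.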
They added, “Our approach not only defeats existing commercial systems having liveness detection—it fundamentally undermines the process of liveness detection based on color images, entirely.”
The team ran two different tests: one using the photos found online and one using an indoor headshot of each participant. They then pitted the 3D renders against five face authentication systems: KeyLemon, Mobius, True Key, BioID and 1U. Every system failed when presented with renders created from the indoor headshots. When spoofing faces built from social media photos, the attack had varying success rates; the researchers noted that the failure to spoof 1U and the lower success rate on BioID “was directly related to the poor usability of those systems.”
Facial authentication systems could defeat the VR attack if features such as “random projection of light patterns, detection of minor skin tone fluctuations related to pulse, and the use of illuminated infrared (IR) sensors” were added.
The research team concluded:
Takeaway: In our opinion, it is highly unlikely that robust facial authentication systems will be able to operate using solely web/mobile camera input. Given the widespread nature of high-resolution personal online photos, today’s adversaries have a goldmine of information at their disposal for synthetically creating fake face data. Moreover, even if a system is able to robustly detect a certain type of attack—be it using a paper printout, a 3D-printed mask or our proposed method—generalizing to all possible attacks will increase the possibility of false rejections and therefore limit the overall usability of the system. The strongest facial authentication systems will need to incorporate non-public imagery of the user that cannot be easily printed or reconstructed (e.g., a skin heat map from special IR sensors).
At a minimum, it is imperative that face authentication systems be able to reject synthetic faces with low-resolution textures, as we show in our evaluations. Of more concern, however, is the increasing threat of virtual reality, as well as computer vision, as an adversarial tool. It appears to us that the designers of face authentication systems have assumed a rather weak adversarial model wherein attackers may have limited technical skills and be limited to inexpensive materials. This practice is risky, at best. Unfortunately, VR itself is quickly becoming commonplace, cheap and easy-to-use.
Moreover, VR visualizations are increasingly convincing, making it easier and easier to create realistic 3D environments that can be used to fool visual security systems. As such, it is our belief that authentication mechanisms of the future must aggressively anticipate and adapt to the rapid developments in the virtual and online realms.