by Nichole Sullivan
Most people can identify a familiar face in a photo, even when the picture is taken at an unusual angle or in poor lighting. Think about how easily you recognize someone you know in a social media feed or a photo album. Our brains become trained to recognize the facial features of people we see often. However, humans are far better at correctly identifying the face of a familiar person than that of an unfamiliar one.
Similarly, facial identification software can be trained to identify a specific person. This is done by uploading “training” images of the person in different settings and lighting conditions; the more training images used, the more familiar the software becomes with the person’s appearance. Teaching computers to recognize and classify objects of interest in images is the domain of a field called computer vision.
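One simple way to picture this kind of “familiarity” is a system that averages a numerical representation (an embedding) of each training photo into a single template, then compares new photos against that template. The sketch below illustrates only that averaging-and-comparison logic; the `embed` function is a hypothetical stand-in (it just flattens and normalizes pixels) for whatever real face-embedding model a product would use.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a real face-embedding model:
    # here we simply flatten the pixels and normalize to unit length.
    v = image.astype(float).ravel()
    return v / np.linalg.norm(v)

def enroll(training_images) -> np.ndarray:
    # "Familiarity": average the embeddings of many training photos
    # into one template vector for the enrolled person.
    vectors = np.stack([embed(img) for img in training_images])
    mean = vectors.mean(axis=0)
    return mean / np.linalg.norm(mean)

def matches(template: np.ndarray, image: np.ndarray, threshold: float = 0.9) -> bool:
    # Cosine similarity between the stored template and a new photo.
    return float(template @ embed(image)) >= threshold

# Toy demo: three noisy "photos" of the same 8x8 face pattern.
rng = np.random.default_rng(0)
face = rng.random((8, 8))
training = [face + 0.05 * rng.random((8, 8)) for _ in range(3)]
template = enroll(training)
print(matches(template, face))  # the enrolled person
```

More (and more varied) training photos pull the template closer to the person’s true appearance, which is the intuition behind uploading images taken in different settings and lighting conditions.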
Since humans find it harder to recognize unfamiliar faces, that may explain why fake IDs are often effective at passing scrutiny. Could a computer spot a fake ID photo more reliably than a human?
Using open source, research-grade software called Psychomorph, photos of two individuals can be morphed together into a single image. The resulting image is an equal or unequal combination of the two faces. Because the software is open source, it is freely available to the public: anyone with internet access can morph two facial photos. This raises the question of how easily an average criminal could produce fake IDs with such readily accessible face-morphing software. Could a morphed photo of two people pass as a valid photo ID for both?
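The “equal or unequal combination” can be sketched as a weighted pixel average of two face images. This is a deliberate simplification: tools like Psychomorph also warp facial landmarks into alignment before blending, a step omitted here. The weight `alpha` below is the illustrative knob for equal (0.5) versus unequal morphs.

```python
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Weighted pixel average of two (already aligned) face images.
    # alpha = 0.5 gives an "equal" morph; other values give unequal morphs
    # biased toward one face. Real morphing tools also align facial
    # landmarks first; that warping step is not shown here.
    blended = alpha * face_a.astype(float) + (1.0 - alpha) * face_b.astype(float)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Toy 2x2 grayscale "images" standing in for two face photos.
a = np.array([[0, 100], [200, 255]], dtype=np.uint8)
b = np.array([[100, 100], [0, 55]], dtype=np.uint8)
print(morph(a, b, 0.5))  # equal 50/50 morph of the two
```

An unequal morph, say `morph(a, b, 0.7)`, weights the result toward the first face, which is the kind of asymmetry the experiments below probe.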
A set of psychology research experiments, led by Professor Mike Burton, who studies face perception, set out to determine how often humans and computers misidentify a morphed ID image as an unfamiliar person. Human viewers were shown two images: an original photo of an unfamiliar person, and a graphical morph of that person merged with a similar-looking person. When study participants were not expecting manipulated images, they misidentified an equally morphed image as the real individual 68% of the time. However, after participants were told to expect the possibility of morphed images, they misidentified the photos only 21% of the time.
In a common theme for both science fiction and computer vision experiments, human performance was compared to a computer’s. For the computer “participant,” a popular mobile phone with facial recognition capabilities was used. The phone’s face recognition software was trained with a single image: either a genuine photo of the phone user, a photo of a similar-looking person, or a morph of the two. The phone misidentified an equally morphed image as the user 27% of the time, compared to 21% for humans who were told to expect manipulated images. However, don’t celebrate the victory of mankind just yet: the computer correctly recognized unequally morphed images as fakes significantly more often than human viewers did.
Several conclusions can be drawn from the study. Humans notice fakes more effectively when told to look for them, and in that case they may identify fake photos about as well as a computer. Unequally morphed photos, however, were more difficult for the human participants, suggesting that computers may be better at detecting subtle irregularities in pictures. Lastly, although there are many creative ways to produce a fake ID, facial morph software is not a reliable one, at least not for your average criminal.