False Flag No More: Facing Down Militant Terrorism

The CIA is in the business of reading palms. For the spooks actually using biometrics such as hand recognition, the technology has already helped law enforcement identify suspected terrorists. Khalid Sheikh Mohammed, for example, was implicated in the murder of Daniel Pearl based in part on vein-recognition analysis. The intelligence community appears to be gaining the upper hand in threat analysis and counterterrorism.

In the same spirit of innovation, another useful weapon has recently been added to the arsenal for homeland defense. Crossing the experimental biometrics/security divide, research conducted by computer scientists and engineers at Facebook AI Research and Tel Aviv University has significantly advanced facial recognition technology. In a publication first presented at the 2014 IEEE Conference on Computer Vision and Pattern Recognition, they describe DeepFace, a deep neural network (DNN) that achieves a 97.35 percent accuracy rate in face verification. What’s more, DeepFace cut the error rate of the next best system by more than 27 percent.

Although terrorists might mask their faces in other settings, their weakness domestically is that the cities of industrialized nations are not battlefields; preparation and scouting cannot be conducted with impunity. As the tragic Charlie Hebdo attacks demonstrated, identification using imagery captured before an attack is a subtle yet critical step in responding to national emergencies. Despite the militants’ use of masks, authorities established the assailants’ identities by retracing the steps they took, relying on facial recognition and DNNs.

Deep neural nets act as an automated sentry when applied to face detection, face alignment, and face verification. The DeepFace system is trained to locate, identify, and categorize the structural patterns of human faces using a sample of images drawn from a population. Smart technologies embedded in phones and modern cameras use similar procedures. First, the DNN locates the face in its field of view with a target box. After focusing, it plots fiducial points between easily located features of the face, creating a unique facial map belonging to a single individual. Facebook users have seen this happen when photo tags are suggested for them.
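The detect-then-map pipeline above can be sketched in a few lines of code. This is an illustrative toy, not the DeepFace implementation: the helper names, the hard-coded box, and the four fiducial points are all invented for demonstration.

```python
# Toy sketch of the detect -> locate fiducials -> facial map pipeline.
# All helper names and coordinates are hypothetical illustrations.

def detect_face(image):
    """Stage 1: locate the face with a target (bounding) box.
    This stand-in detector always returns one centered box."""
    height, width = image["height"], image["width"]
    return {"x": width // 4, "y": height // 4,
            "w": width // 2, "h": height // 2}

def locate_fiducials(box):
    """Stage 2: place fiducial points on easily located features
    (eyes, nose tip, mouth), expressed relative to the box center."""
    cx, cy = box["x"] + box["w"] / 2, box["y"] + box["h"] / 2
    return {
        "left_eye":  (cx - box["w"] * 0.2, cy - box["h"] * 0.15),
        "right_eye": (cx + box["w"] * 0.2, cy - box["h"] * 0.15),
        "nose_tip":  (cx, cy),
        "mouth":     (cx, cy + box["h"] * 0.25),
    }

def facial_map(fiducials, box):
    """Stage 3: normalize the points into a box-relative map,
    a compact signature for a single individual."""
    return {name: ((x - box["x"]) / box["w"], (y - box["y"]) / box["h"])
            for name, (x, y) in fiducials.items()}

image = {"height": 400, "width": 400}
box = detect_face(image)
fmap = facial_map(locate_fiducials(box), box)
print(sorted(fmap))  # four named, box-relative fiducial points
```

The normalization in the last stage is what makes the map comparable across photos taken at different distances and resolutions.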

When the DNN sentry is subsequently presented with input images, it lays the learned fiducial map over that of the identification candidate standing in front of, say, a camera at the entrance to a restricted space. If the major points match up, the DNN produces a yes/no answer regarding the subject’s identity. The researchers showed that, provided the DNN draws from a secure server containing fiducial maps of authorized individuals, the candidate under evaluation will be correctly identified 97.35 percent of the time. Not bad, considering human-only trials show we identify people accurately 97.5 percent of the time. Paired with simple measures like personalized passwords and swipe badges, the likelihood of entrance by unauthorized individuals plummets to very low levels.
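The yes/no verification step can be thought of as a distance test between the stored map and a freshly captured one. A minimal sketch, with an invented threshold and invented maps (the real system compares learned feature representations, not raw coordinates):

```python
import math

# Toy verification: "yes" if the candidate's fiducial points fall
# within a small average distance of the stored map. The threshold
# value and the maps below are illustrative, not from the paper.

def verify(stored, candidate, threshold=0.05):
    dists = [math.dist(stored[k], candidate[k]) for k in stored]
    return sum(dists) / len(dists) < threshold

stored = {"left_eye": (0.3, 0.35), "right_eye": (0.7, 0.35),
          "nose_tip": (0.5, 0.5), "mouth": (0.5, 0.75)}
same  = {k: (x + 0.01, y) for k, (x, y) in stored.items()}   # slight jitter
other = {k: (x + 0.15, y - 0.1) for k, (x, y) in stored.items()}

print(verify(stored, same))   # True: major points match up
print(verify(stored, other))  # False: points diverge
```

Tightening the threshold trades false accepts for false rejects, which is exactly why the article recommends pairing the system with passwords and swipe badges.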

Variety in teaching the DNN is key. Variations in the training subset of images in terms of lighting, angle, skin tone, and expression force the DNN to prepare for imperfect evaluation scenarios. Later, for example, when tracking down militants returning to Europe from foreign conflicts based on battlefield imagery, the DNN would be able to match a face despite obscured or missing structural features. That is why passport photos are expressionless: it is the easiest way for systems like DeepFace to identify the prominent features of a suspect and subsequently match them against low-quality images, such as those from the Boston Marathon bombing.

Good teaching can’t overcome poor input imagery, however. The researchers addressed this by designing the DNN with the ability to “frontalize” an image. Frontalizing, that is, rotating the subject’s face into alignment with the camera, permits the DNN to subsequently break the image into regions of evaluation. This is critical to the final identification, since regions differ in spatial stationarity between photos. That is, some facial features move even if the face remains stationary; the angle and placement of the brow is easier to locate spatially in two different photos than is the bridge between the mouth and nose, for example. Without this compartmentalizing of regions, the DNN would never reach an accurate identification.
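The idea behind frontalization can be conveyed with a simplified two-dimensional alignment: re-mapping detected fiducial points onto a canonical, camera-facing template by removing translation and scale. The real DeepFace pipeline fits a 3D face model; this pure-Python sketch, with invented landmark coordinates, only illustrates the principle.

```python
# Simplified 2-D "frontalization": translate and rescale detected
# landmarks so they line up with a canonical template. Coordinates
# are invented; DeepFace itself uses a 3-D model for this step.

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def spread(points, center):
    return sum(abs(x - center[0]) + abs(y - center[1]) for x, y in points)

def frontalize(points, template):
    """Shift and rescale `points` so their centroid and spread
    match the canonical `template` layout."""
    pc, tc = centroid(points), centroid(template)
    s = spread(template, tc) / spread(points, pc)
    return [((x - pc[0]) * s + tc[0], (y - pc[1]) * s + tc[1])
            for x, y in points]

template = [(0.3, 0.35), (0.7, 0.35), (0.5, 0.75)]   # eyes + mouth, canonical
captured = [(0.15, 0.20), (0.35, 0.20), (0.25, 0.40)]  # off-center, smaller
aligned = frontalize(captured, template)
print([(round(x, 2), round(y, 2)) for x, y in aligned])
```

Once every face sits on the same canonical grid, carving the image into comparable regions becomes straightforward.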

The masterstroke of the DeepFace design is the penultimate representation step, when the separate regions are reconstituted. After jigsawing the full image back together from its regions, emphasizing the fiducial points of relevance, the final step feeds the resulting map into a K-way softmax classifier, where K is the number of classes, that is, the identities known to the system. In a matter of seconds the DNN produces a candidate match.
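That final softmax step is a standard piece of machinery: the network's K raw scores, one per known identity, are converted into probabilities, and the highest-probability class becomes the candidate match. A minimal sketch, with invented scores and identity labels:

```python
import math

# K-way softmax: turn K raw per-identity scores into probabilities
# and report the most likely identity. Scores and labels are invented.

def softmax(scores):
    """Numerically stable softmax over K class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

identities = ["suspect_a", "suspect_b", "suspect_c"]   # K = 3 classes
scores = [2.0, 0.5, 4.1]                               # raw network outputs
probs = softmax(scores)

best = max(range(len(probs)), key=probs.__getitem__)
print(identities[best])      # highest-probability identity
print(round(sum(probs), 6))  # probabilities sum to 1
```

Subtracting the maximum score before exponentiating keeps the arithmetic stable without changing the resulting probabilities.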

Biometrics are limited by two obstacles: the sophistication of the technology and where and when it can be applied. On the one hand, battlefield images are of varying quality but in high supply. On the other, cities are mostly open environments with clear lines of sight, but crowded. Biometric systems such as DeepFace, when blended with conflict-zone experience, urban law enforcement, and a detective’s intuition, make a powerful gestalt that tips the scales away from would-be attackers and in favor of anti-terrorism operations.

Article Source: Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. 2014 IEEE Conference on Computer Vision and Pattern Recognition.

Feature Photo: cc/(Laurent Tine)

Paul Berry
Paul Berry is a staff writer for the Chicago Policy Review and is an MPP student at the Harris School of Public Policy. He is interested in foreign policy.