Biometrics Will Make Security Robots Even Smarter

With robots, unmanned ground vehicles (UGVs) and drones commercially available and ready for prime time, what’s next for this category? Focus will turn to artificial intelligence (AI) and ways to make these robotic solutions more helpful to the security teams they serve. Their ability to differentiate between individual humans, for identification and authentication purposes, is fundamental to this next stage of their evolution. The integration of biometric solutions will make that happen.

The applications for biometrics within robots, UGVs and drones can be divided into two distinct categories: those that engage with cooperative subjects and those that do not.
When dealing with cooperative subjects — typically individuals who want to be granted access to a secured area — any number of biometric modalities can be used.

Iris recognition makes a lot of sense because it is contactless, easy to administer and extremely accurate. A robot can be equipped with an iris recognition reader, and each person needs only to stand in front of it for as little as two seconds to be processed. With this capability, a huge range of security-related tasks could be accomplished. For commercial applications, security robots could differentiate between employees and intruders on restricted job sites.
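As a rough illustration of why iris recognition is so accurate, many systems follow Daugman's approach of encoding the iris as a binary "iris code" and comparing codes by fractional Hamming distance, accepting a match below a threshold of roughly 0.32. The sketch below is a minimal, hypothetical version of that decision logic; the threshold value, the code length and the toy data are assumptions for illustration, not any particular vendor's implementation.

```python
import numpy as np

MATCH_THRESHOLD = 0.32  # typical Daugman-style fractional Hamming distance cutoff (assumed)

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of usable bits that disagree between two binary iris codes."""
    usable = mask.astype(bool)
    disagreements = np.logical_xor(code_a, code_b) & usable
    return disagreements.sum() / max(usable.sum(), 1)

def verify(probe_code, probe_mask, enrolled_code, enrolled_mask) -> bool:
    """1:1 verification: accept only if the probe matches the enrolled template closely enough."""
    mask = probe_mask & enrolled_mask              # compare only bits valid in both captures
    return hamming_distance(probe_code, enrolled_code, mask) < MATCH_THRESHOLD

# Toy usage with random 2048-bit codes (real codes come from iris segmentation and Gabor filtering)
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048).astype(bool)
probe = enrolled.copy()
probe[rng.choice(2048, 100, replace=False)] ^= True   # simulate ~5% capture noise
mask = np.ones(2048, dtype=bool)
print(verify(probe, mask, enrolled, mask))             # True: distance ≈ 0.05 < 0.32
```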

Universities could post them at doorways to examination rooms to authenticate the identity of students taking tests. Convention centers could monitor entrances to exhibit halls and restrict access to special events. The list of possibilities is endless. However, many applications that employ robots will involve non-cooperative subjects, including almost all applications that use drones. Non-cooperative subjects are people who are simply going about their business without directly interacting with the robot, so their biometric data must be collected without their active participation.

This means the data is best obtained by video cameras and, in terms of biometric analysis, facial recognition is the most practical modality. Provided a camera is positioned to obtain an unobstructed frontal facial image and is powerful enough to capture it with at least 30 to 50 pixels between the subject’s eyes, a facial analysis engine has enough data to do its job.
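To get a feel for what that pixel requirement implies, the back-of-the-envelope sketch below estimates the interocular pixel count from a camera's horizontal resolution, field of view and the subject's distance, using a simple pinhole-camera approximation. The 4K sensor, the 60° lens and the roughly 63 mm average interocular distance are illustrative assumptions, not specifications of any particular system.

```python
import math

INTEROCULAR_M = 0.063  # average adult distance between eye centers, ~63 mm (assumed)

def pixels_between_eyes(h_resolution_px: int, h_fov_deg: float, distance_m: float) -> float:
    """Estimate interocular pixels using a pinhole-camera approximation."""
    # Width of the scene covered by the sensor at the subject's distance
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(h_fov_deg) / 2.0)
    pixels_per_meter = h_resolution_px / scene_width_m
    return INTEROCULAR_M * pixels_per_meter

# Hypothetical 4K camera (3840 px wide) with a 60° horizontal field of view
for d in (5, 10, 20, 40):
    print(f"{d:>3} m: ~{pixels_between_eyes(3840, 60.0, d):.0f} px between the eyes")
# Output: ~42 px at 5 m, ~21 px at 10 m, ~10 px at 20 m, ~5 px at 40 m.
# With this wide 60° lens, the 30-50 px guideline only holds out to roughly 5 m;
# longer ranges call for a narrower (telephoto) field of view.
```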
A 2015 study conducted by the National Institute of Standards and Technology (NIST) concluded that when operational requirements are met, the accuracy of facial recognition with non-cooperative subjects approaches the accuracy achieved with cooperative subjects. That same report also concluded that meeting those requirements is extremely difficult.

Equipping robots, UGVs or drones with sufficiently powerful cameras is easy. Today’s high-resolution imaging is so advanced that some cameras can clearly capture faces from across a football field. Capturing unobstructed facial images is not so easy. There is no workaround for individuals who are wearing hats or sunglasses, or who are standing behind taller people.
Cellphones have many people looking down instead of up. And perspective can be tricky: a drone cannot take useful images looking straight down; its camera must view faces at no more than about a 30° angle below the horizontal.
This means a drone flying at a height of 50 feet would need to be at least 86 feet away from its subject, on the horizontal plane, to capture a usable image.
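That stand-off figure follows from basic trigonometry: the minimum horizontal distance is the altitude divided by the tangent of the maximum usable depression angle. The short sketch below reproduces the 50-foot example and a couple of other altitudes; the 30° limit comes from the text, while the additional altitudes are purely illustrative.

```python
import math

MAX_DEPRESSION_DEG = 30.0  # steepest usable camera angle below the horizontal (from the text)

def min_horizontal_standoff(altitude_ft: float, max_angle_deg: float = MAX_DEPRESSION_DEG) -> float:
    """Minimum horizontal distance so the camera looks down at no more than max_angle_deg."""
    return altitude_ft / math.tan(math.radians(max_angle_deg))

for altitude_ft in (50, 100, 200):
    print(f"altitude {altitude_ft:>3} ft -> stand-off of at least {min_horizontal_standoff(altitude_ft):.1f} ft")
# altitude  50 ft -> stand-off of at least 86.6 ft   (the article's ~86-foot figure)
# altitude 100 ft -> stand-off of at least 173.2 ft
# altitude 200 ft -> stand-off of at least 346.4 ft
```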
Even with these operational challenges, there is great interest, particularly from law enforcement and the government sector, in seeing facial recognition successfully integrated into drones. U.S. Customs and Border Protection is seriously investigating ways to incorporate commercially manufactured, lightweight drones equipped with facial recognition technology into its operations.

It is unclear how quickly a solution can be developed that matches faces accurately enough to accomplish this goal. Current facial recognition technology performs best when matching against smaller databases, say 500 subjects or fewer; beyond that, the number of false positives begins to climb. There are hundreds of millions of facial images in federal and state databases. Unless the biometric analysis is matched against an extremely targeted subset of those images, it is unlikely that meaningful results could be produced using today’s algorithms.
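The reason gallery size matters is that in a 1:N search every enrolled identity is another chance for a false match, so the probability of at least one false positive compounds with N. The sketch below makes that compounding explicit for an assumed per-comparison false match rate; the rate and the gallery sizes are illustrative assumptions, not measured figures for any particular algorithm.

```python
# Probability of at least one false positive in a 1:N search, assuming independent
# comparisons and a fixed per-comparison false match rate (FMR). Both the FMR and
# the gallery sizes below are illustrative assumptions, not benchmark results.
FMR = 1e-4  # hypothetical false match rate for a single 1:1 comparison

def false_positive_rate_1_to_n(gallery_size: int, fmr: float = FMR) -> float:
    """Chance that a search against `gallery_size` enrolled identities returns a wrong hit."""
    return 1.0 - (1.0 - fmr) ** gallery_size

for n in (500, 10_000, 1_000_000, 100_000_000):
    print(f"gallery of {n:>11,}: ~{false_positive_rate_1_to_n(n):.1%} chance of a false positive")
# gallery of         500: ~4.9% chance of a false positive
# gallery of      10,000: ~63.2% chance of a false positive
# gallery of   1,000,000: ~100.0% chance of a false positive
# gallery of 100,000,000: ~100.0% chance of a false positive
```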

Nevertheless, the technological march continues, both for increasingly autonomous robotic security solutions and for more precise biometric measurement and analysis tools. There is no doubt that their trajectories are converging. It is, perhaps, a bit ironic that as we rely more on nonhuman helpers for our security needs, it will be their ability to identify the very thing they lack — the inherent, biologically based individuality of their human counterparts — that will make them so much more valuable to us.