Tuesday, February 15, 2011

Human Face Recognition, a Presentation by Dr. Shalini Gupta

The Austin Forum, on the evening of January 4, 2011, hosted an interesting presentation by Dr. Shalini Gupta entitled “Digital Human Face Recognition,” which I attended because I find digital face recognition a fascinating technical challenge and an increasingly important social issue, and because I have an interest in a lesser, related problem: automatic face isolation (without regard to identity).

Regrettably, it does not appear to be the practice of the Forum to record video of these presentations, so you'll have to settle for her slides (PDF, 25.4 MB), and various points that seemed important at the time and therefore stuck in my head. Her results were significant, interesting, and some of them were even germane to my face isolation interests.

  • Ironically, for me at least, Dr. Gupta’s presentation did not cover one of the first problems any real-world face recognition system has to solve, and the one in which I was most immediately interested: face isolation.
  • Much of Gupta’s extremely successful “3D AnthroFace” work was performed against the “Texas 3D Face Recognition Database,” which pre-isolated the faces, consistently positioned them within every image, and used significantly higher resolution images than those I have experimented with. Also, since the photos in the recognition database are stereo and/or 3D, they provide significantly more data than 2D images. Their single deficiency, relative to photos I've worked with, is their apparent lack of color. The choice of monochromatic imagery was presumably rooted in a desire to ensure that their algorithm would work in the absence of color information, thus making it compatible with output from monochromatic cameras, like most security cameras.
  • The “Eigenfaces” algorithm, published by Turk and Pentland in 1991, made face recognition truly practical for the first time by allowing a face to be characterized by as few as five numbers quantifying key differences between the metrics of the observed face and a prototypical “Eigenface.” Gupta cites it as having achieved a 21% verification rate with a false acceptance rate (FAR) of 1 in 1,000, although it is not clear what size of database was involved in the test that produced that figure. Presumably, 20 years ago, the database would have been quite limited. Nonetheless, Eigenfaces has apparently been the basis for all subsequent face recognition work, and has been dramatically advanced over the years. It has also become the basis for many other types of automated visual recognition systems; as Gupta put it, there are now Eigenbolts, Eigenscrews, etc. For a great many classes of objects that require visual recognition, Eigen images can be produced which allow the Eigenfaces algorithm (and its improved descendants) to be applied essentially unchanged.
  • In the most recent standard industry test of face recognition (the “Multi Biometric Evaluation” of 2009/10), “3D AnthroFace” not only outperformed every other technology, but achieved a recognition rate equal to, or better than, that of humans. The test used a database of 3.6 million people and required the fully automated analysis of 8.7 million photos and videos shot in a variety of conditions, ranging from studio shots only marginally more complex than those in the “Texas 3D Face Recognition Database” (though not 3D) to real-world video of moving subjects in widely varying photographic conditions. However, the means by which the humans were tested was not specified, so it’s hard to know what to make of that claim. (It seems unlikely that any human was asked to review photos of 3.6 million people, and then search for them in 8.7 million photos and videos.)
  • With regard to human face recognition capabilities, Gupta pointed out that in a study of prison inmates exonerated by DNA tests, 84% had been incorrectly visually identified by human witnesses. So, at least under the conditions in which crimes are committed, investigated and prosecuted, human face recognition can be so poor as to be actively misleading. This isn’t news to many of us, but given its real-world importance, it probably can’t be repeated too often.
  • Face recognition systems depend, as you’d expect, on a database of the faces they’re meant to recognize. The error rates (composed of the false rejection rate, FRR, and the false acceptance rate, FAR) of all extant, and predicted, face recognition systems increase with the number of faces in the database. This problem is regarded as intrinsic to the task, but it is widely believed that the growth in error rates can be reduced by using separate databases for storing the characteristics of faces that can be differentiated by readily identifiable gross characteristics. Race and, I believe, sex were mentioned as candidates for such characteristics. In such a system, the first step in face recognition would be to make that gross identification, and then to select the appropriate database based on it. After that, the existing face recognition approaches would be used within the selected database with significantly reduced error rates. Of course, the error rates continue to scale with database size, so the use of multiple databases only delays the point at which error rates become unacceptable, since face databases will presumably only grow, whatever their purpose, for the foreseeable future.
  • Despite the huge strides made in digital human face recognition, it is still bedeviled by a number of quite ordinary issues including unconstrained observing environments, human aging, the poses of subjects, variations in illumination, varied facial expressions, and the poor quality of images available from video systems. The latter issue was of particular interest to me, because many of the photos I have dealt with are comparable to images that might be obtained from video in their poor resolution and quality, suggesting that even the best face recognition systems would have had difficulty with some of the same images that have been a challenge to me.
  • Dr. Gupta repeatedly refused to comment on the social implications of face recognition technology, stating that she was concerned only with the technology; what people did with it was not up to her. One wonders what the uniformed police officers, and anyone else in the audience who might have been considering operating a real-world face recognition system, took away from the presentation. While the results of Gupta’s work were truly impressive, as demonstrated in the Multi Biometric Evaluation of 2009/10, the real-world capabilities of all face recognition systems were called into question by her closing acknowledgement that a host of common issues posed major problems (see item above). Her discussion of the problem of error rates increasing as face databases grow only raised more questions. The industry’s anticipated method of mitigating the latter issue, as previously discussed, is to make an initial gross categorization of faces based on a characteristic like race, and then to search within category-specific databases. While this is a sensible technical strategy (if such gross categorization can be performed quickly and reliably), will its eventual developers and users realize that their technology is engaging in automatic racial profiling? Will they also realize that it is doing so because the more one relies on facial recognition technology, the less reliable it becomes? Either issue is significant independently, but, when considered together, they mean that being a member of one of the races that compose the largest of the category-specific databases brings a higher chance of being falsely identified (bad if the database is looking for criminals), or of being falsely rejected (bad if the database is supposed to grant someone access to their bank account, or confirm to border officials that they are who their passport says they are).
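For readers curious about what the Eigenfaces idea mentioned above actually looks like in practice, here is a minimal sketch of Turk and Pentland's approach using plain principal component analysis via SVD. It runs on random stand-in "images"; the 8×8 image size, the five-component descriptor, and the nearest-neighbor matching step are my illustrative assumptions, not details from Gupta's talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery: 10 flattened 8x8 "face images", values in [0, 1].
faces = rng.random((10, 64))

# Center the data on the mean face, as Turk & Pentland do.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The eigenfaces are the principal components of the centered faces;
# SVD yields them directly as the rows of Vt (unit-norm vectors).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5                       # "as few as five numbers" per face
eigenfaces = Vt[:k]         # shape (5, 64)

def describe(face):
    """A face's descriptor: its k projection coefficients."""
    return eigenfaces @ (face - mean_face)

# Recognition: nearest stored descriptor in the k-dimensional space.
gallery = centered @ eigenfaces.T            # (10, 5) descriptors
probe = faces[3] + rng.normal(0, 0.01, 64)   # noisy copy of face 3
dists = np.linalg.norm(gallery - describe(probe), axis=1)
print(dists.argmin())  # nearest gallery face: 3
```

The key payoff is dimensionality reduction: matching happens among 5-number descriptors rather than 64-pixel images, which is what made the approach practical in 1991.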
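The partitioning strategy discussed above (a gross categorization first, then a search confined to one category-specific database) can be sketched in outline as follows. The category labels, function names, distance threshold, and Euclidean matching here are all illustrative assumptions on my part, not a description of any real system.

```python
import math
from collections import defaultdict

# Hypothetical partitioned gallery: one sub-database per gross category.
galleries = defaultdict(dict)          # category -> {name: descriptor}

def enroll(name, category, descriptor):
    galleries[category][name] = descriptor

def identify(descriptor, category, threshold=1.0):
    """Search only the selected sub-database, so error rates scale with
    its size rather than with the total enrolled population."""
    best_name, best_dist = None, math.inf
    for name, ref in galleries[category].items():
        dist = math.dist(ref, descriptor)
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Reject if even the best match is too far (this controls the FAR).
    return best_name if best_dist <= threshold else None

# Usage sketch with made-up 2-component descriptors.
enroll("alice", "category_a", (0.1, 0.9))
enroll("bob", "category_b", (0.8, 0.2))
print(identify((0.12, 0.88), "category_a"))  # -> alice
```

Note that the categorization step itself is exactly where the profiling concern arises: the routing decision happens before any individual matching is attempted.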

That’s everything I can think of to report. I hope some of it was of interest, and that I’ve done justice to Dr. Gupta’s impressive work.
