
NIST Study on Facial Recognition - Too Many False Positives

Police across the country are increasingly using facial recognition software to identify suspects in crimes recorded on surveillance cameras, and there have been reports of innocent people being arrested because the police relied on the software's output. One example is the case of Robert Williams, who was held in custody by the Detroit Police and later released. See Sarah Rahal and Mark Hicks, Detroit police work to expunge record of man wrongfully accused with facial recognition, The Detroit News, June 26, 2020.


The National Institute of Standards and Technology conducted a study on the efficacy of facial recognition algorithms. See Patrick Grother, Mei Ngan, and Kayee Hanaoka, NISTIR 8280, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, December 2019, available at https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf. The study concluded that the algorithms produced by different developers vary widely in accuracy. It evaluated more than 18 million photographs drawn from mugshots, photos submitted with applications for immigration benefits, and border crossing photographs. The study found that demographic differences in false positive rates were far larger than those in false negative rates, and that the magnitude of these biases varied from algorithm to algorithm.
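For readers unfamiliar with the terminology: in one-to-one face verification, a false positive (false match) occurs when photos of two different people score above the match threshold, and a false negative (false non-match) occurs when two photos of the same person score below it. The sketch below is illustrative only and is not drawn from the NIST report; the records, threshold, and function names are hypothetical. It shows how false match and false non-match rates could be tabulated per demographic group, which is the kind of comparison underlying the study's "differentials."

    from collections import defaultdict

    # Hypothetical comparison records: each holds the system's similarity
    # score for a pair of photos, whether the photos actually show the
    # same person, and the subjects' demographic group (illustrative only).
    comparisons = [
        {"score": 0.91, "same_person": True,  "group": "A"},
        {"score": 0.62, "same_person": False, "group": "A"},
        {"score": 0.71, "same_person": False, "group": "B"},
        {"score": 0.40, "same_person": True,  "group": "B"},
    ]

    THRESHOLD = 0.70  # scores at or above this count as a "match"

    def rates_by_group(records, threshold):
        """Tabulate false match rate (FMR) and false non-match rate
        (FNMR) for each demographic group."""
        counts = defaultdict(lambda: {"fp": 0, "impostor": 0,
                                      "fn": 0, "genuine": 0})
        for r in records:
            c = counts[r["group"]]
            matched = r["score"] >= threshold
            if r["same_person"]:
                c["genuine"] += 1
                if not matched:      # same person, system says different
                    c["fn"] += 1
            else:
                c["impostor"] += 1
                if matched:          # different people, system says same
                    c["fp"] += 1
        return {
            g: {
                "FMR": c["fp"] / c["impostor"] if c["impostor"] else None,
                "FNMR": c["fn"] / c["genuine"] if c["genuine"] else None,
            }
            for g, c in counts.items()
        }

    print(rates_by_group(comparisons, THRESHOLD))

At a fixed threshold, a "false positive differential" in the report's sense is simply the gap between the false match rates of different demographic groups.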


"Our main result is that false positive differentials are much larger than those related to false negatives and exist broadly, across many, but not all, algorithms tested. . . . With domestic law enforcement images, the highest false positives are in American Indians, with elevated rates in African American and Asian populations; the relative ordering depends on sex and varies with algorithm. We found false positives to be higher in women than men, and this is consistent across algorithms and datasets. This effect is smaller than that due to race.", Id. at 2.



