Eight wrongful arrests highlight flaws in facial recognition tech
Facial recognition software has mistakenly identified at least eight Americans, leading to their arrests, The Washington Post reports.
In the United States, at least eight people have been wrongfully arrested after facial recognition software misidentified them. According to The Washington Post, police relied on the artificial intelligence technology to detain suspects, often without additional evidence.
Problems with identification
The newspaper analysed police reports, court records, and interviews with officers, prosecutors, and lawyers. The findings suggest the true scale of the problem could be significantly larger: prosecutors rarely disclose the use of AI, and only seven states legally require them to do so. The total number of wrongful arrests caused by AI errors remains unknown.
In the eight identified cases, police failed to take basic investigative steps, such as checking alibis, comparing distinguishing features, or analysing DNA and fingerprint evidence. In six cases, they ignored the suspects' alibis, and in two, they overlooked evidence that contradicted their assumptions.
In five instances, key evidence was never collected. The Washington Post cites the example of a person arrested for attempting to cash a forged cheque, where police did not even check the suspect's bank accounts. In three cases, officers disregarded physical characteristics of suspects that contradicted the AI match, as with a heavily pregnant woman accused of car theft.
In six cases, witness statements went unverified. In one instance, a security guard confirmed the identity of a suspect accused of stealing a watch even though the guard had not been present during the incident.
Concerns about the technology
Facial recognition software performs almost perfectly under laboratory conditions, but its effectiveness in real-world scenarios remains questionable. Katie Kinsey of NYU notes the lack of independent testing of the technology's accuracy on low-quality surveillance images. Research by neuroscientists at University College London suggests that users may blindly trust AI decisions, leading to erroneous judgments.
The "Washington Post" emphasises that over-reliance on AI systems can obstruct accurate assessment of situations, which is particularly perilous in the context of the justice system.