The use of face recognition technology by government agencies in the United States has raised concerns about potential harms and discrimination, despite claims of reliability and accuracy from the companies that build and sell these systems. Testing by organizations such as the National Institute of Standards and Technology has revealed racial disparities in accuracy, with women of darker complexions misclassified at the highest rates.

Relying on aggregate performance scores to assess the effectiveness and safety of facial recognition systems does not capture the real-world challenges and biases that arise when law enforcement uses them. False positive rates, for example, can vary significantly across demographic groups, and a system that looks accurate overall can still produce wrongful arrests and other harms concentrated on particular communities.

Policymakers should therefore be cautious about treating performance scores as a basis for approving or mandating face recognition technology: such scores do not account for the discriminatory impact or the potential for misuse. The focus should instead be on the deeper problems of discriminatory policing and expanded government surveillance that widespread deployment could entrench.

The ACLU and other advocacy groups have called for restrictions on government use of facial recognition to protect civil rights and privacy. Policymakers should take these concerns seriously and prioritize the protection of individual rights and liberties when regulating the technology.
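The point about aggregate scores hiding per-group disparities can be made concrete with a short sketch. All the numbers below are hypothetical, chosen only to illustrate how an overall false positive rate can look low while one group's rate is an order of magnitude higher than another's:

```python
# Sketch: a single aggregate false positive rate (FPR) can mask large
# disparities between demographic groups. All counts are hypothetical.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the fraction of true non-matches
    that the system wrongly flags as matches."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical match outcomes, broken out by demographic group.
results = {
    "group_a": {"fp": 5, "tn": 9995},    # FPR = 0.0005
    "group_b": {"fp": 50, "tn": 9950},   # FPR = 0.0050 (10x higher)
}

overall_fp = sum(g["fp"] for g in results.values())
overall_tn = sum(g["tn"] for g in results.values())

# The aggregate rate averages away the disparity.
print(f"overall FPR: {false_positive_rate(overall_fp, overall_tn):.4f}")
for name, g in results.items():
    print(f"{name} FPR:  {false_positive_rate(g['fp'], g['tn']):.4f}")
```

Here the overall FPR works out to 0.00275, yet one group faces ten times the error rate of the other, which is why disaggregated reporting matters when evaluating these systems.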