Face Recognition Technology: Benefits and Risks
Which factors influence the risks and benefits of using face recognition technologies?
Diverse stakeholders from industry, academia, civil society, and government agencies acknowledge that face recognition technologies’ benefits and risks vary depending on accuracy, broader performance, functional application, use case, and other factors.
Most, if not all, of the benefits and risks that face recognition technologies produce are sociotechnical, “meaning they are influenced by societal dynamics and human behavior.” The National Institute of Standards and Technology explains that sociotechnical risks and benefits “can emerge from the interplay of technical aspects” and “societal factors,” including the context in which users deploy the system. The impacts of this interplay can be challenging to understand and especially difficult to quantify.
Perspectives on face recognition technologies’ risks and benefits may differ depending on whether one considers risks and benefits in absolute terms or relative to alternatives. Such alternatives may include using technologies that leverage other biometric modalities or having people manually perform tasks that face recognition technologies can fully or partially automate.
This fourth and final piece in the series about face recognition technology governance challenges describes how accuracy, broader performance, functional application, and use case shape the sociotechnical benefits and risks that face recognition technologies can produce. It also highlights several of the technologies’ data security risks and benefits across accuracy levels, functional applications, and use cases.
Rather than recommending whether or when to use the technologies, this piece explains how different factors impact the technologies’ risks and benefits. In doing so, this piece aims to help policymakers and other stakeholders develop a more comprehensive and nuanced understanding of key issues and facilitate more productive dialogue about legislative and other governance frameworks.
Relationship Between Accuracy, Benefits, and Risks
As the previous piece in this series explained, face recognition technologies produce three main types of errors: failures to enroll, false positives, and false negatives. Failure to enroll errors occur when face recognition technologies are unable to generate a face template and therefore cannot perform their verification or identification functions. False positive errors occur when the face recognition technology incorrectly indicates that images of two different individuals show the same person. False negative errors occur when the face recognition technology incorrectly indicates that two images of the same person show different people.
There are technical trade-offs between false positive and false negative error rates. Calibrating “thresholds” to deal with these trade-offs is “a delicate balancing act” that varies “based on many factors including use case.” Each type of error can produce negative consequences, but the severity depends on the functional application and use case, which is why different thresholds may be appropriate for different functional applications and use cases (see the sketch after the list below).
- A false positive error that erroneously identifies an individual as someone on a watch list or a false negative error that erroneously fails to identify an individual as a member of a trusted group could impede an authorized individual’s access to a device or location. Depending on the device or location and the alternative identification or identity verification methods available, the severity of the access delay or denial may vary.
- A false positive error that erroneously identifies an individual as a suspect in a criminal investigation or someone on the “no-fly list” could subject that individual to an unnecessary secondary screening or encounter with law enforcement.
- A false positive error that erroneously identifies an individual as a member of an authorized group or a false negative error that erroneously fails to identify an individual as someone on a watch list could allow an unauthorized individual to access a secure device, account, or location. Potential consequences range from enabling fraud, to enabling a violent offender to enter a secure location, to allowing someone to enjoy an event without purchasing a ticket.
- A false negative error that erroneously fails to indicate that a suspect photo may depict an individual in a mugshot database could delay law enforcement officers’ identification and apprehension of a suspect. Depending on the situation, the impact of this delay could be minimal or could result in the suspect being able to commit additional crimes.
- A failure to enroll error may mean that an individual simply must submit a new probe photo or that the individual is unable to use the face recognition system. Depending on the use case (including the availability of alternative identity verification or identification options), the consequences of being unable to use face recognition technology may vary.
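To illustrate the threshold trade-off described before the list above, the following is a minimal sketch using made-up similarity scores; the score distributions and threshold values are hypothetical and do not reflect any real system or evaluation.

```python
# A minimal sketch of the false positive / false negative trade-off, assuming
# hypothetical similarity-score distributions; real error rates come from
# large-scale evaluations of operational data, not toy data like this.
import numpy as np

rng = np.random.default_rng(0)
# Genuine pairs (same person) tend to score higher than impostor pairs
# (different people), but the two distributions overlap.
genuine_scores = rng.normal(loc=0.75, scale=0.10, size=10_000)
impostor_scores = rng.normal(loc=0.45, scale=0.10, size=10_000)

for threshold in (0.50, 0.60, 0.70):
    false_negative_rate = np.mean(genuine_scores < threshold)    # true matches rejected
    false_positive_rate = np.mean(impostor_scores >= threshold)  # non-matches accepted
    print(f"threshold={threshold:.2f}  "
          f"false negatives={false_negative_rate:.3f}  "
          f"false positives={false_positive_rate:.3f}")
```

Raising the threshold suppresses false positives at the cost of more false negatives, which is one reason a threshold suitable for unlocking a phone may be inappropriate for a watch-list alert.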
Across use cases, it is important to consider several aspects of the relationship between face recognition technology accuracy, benefits, and risks.
- If error rates are higher for members of demographic groups that are already subject to bias and prejudice, then these elevated error rates could compound existing societal inequities. Well-documented examples of such inequities include facing disproportionate surveillance, higher-risk encounters with law enforcement, and greater difficulty accessing important services and resources.
- If using face recognition technologies to help identify or verify individuals’ identities produces more accurate results than using alternative methods, using face recognition technologies can help mitigate the risks associated with false positive and false negative errors.
- Even using highly accurate face recognition technologies can produce risks. Improving the accuracy and speed of problematic practices or systems may cause harm to occur faster and on a larger scale.
Benefits and Risks Across Functional Applications
Different face recognition technology functional applications produce different privacy and security risks and benefits.
Many still image functional applications that enable access control require users to place themselves in a particular position in front of a camera, and users typically are aware that they are interacting with face recognition technologies. Several of these applications also require user consent, such as those that let individuals actively choose to unlock their phones with their faces instead of alphanumeric passwords. Requiring user consent, especially opt-in consent, can decrease privacy and civil rights risks. Nonetheless, as noted above, errors that enable an unauthorized user to access a device or facility could pose privacy and security risks, and errors that prevent an authorized user from accessing the device or facility could also cause harm. Operational testing can help ascertain whether still image face recognition technologies produce lower error rates than alternative methods of identity verification.
When law enforcement officers use still image face recognition applications to produce ranked candidate lists that can help generate investigative leads, the officers do so without the consent (and often without the awareness) of the individuals in the probe image and the gallery images. These functional applications do not positively identify suspects in probe images, but individuals in ranked candidate lists may face privacy and civil rights risks and other potential harms from having law enforcement officers investigate and/or contact them. Yet, if face recognition technologies produce more accurate results than processes requiring officers to manually review mugshot photos, using face recognition technologies could reduce unnecessary interactions between law enforcement officers and the public.
Face recognition technologies can also query a database and produce a ranked candidate list much faster than humans can manually review database photos, which means that using these technologies may help close cases faster. Law enforcement groups often stress that closing cases faster promotes public safety and benefits the communities in which the crimes occurred. However, civil rights advocates emphasize that, if particular crimes and communities are disproportionately investigated, closing cases faster can magnify the harmful impacts of bias in law enforcement settings.
Real-time video applications that enable users to continuously track identified individuals’ movements, behaviors, or activities could allow users to surveil individuals over a sustained period without the tracked individuals’ knowledge or consent. Because these applications could enable unlawful surveillance, they pose high privacy and civil rights risks. Real-time video face recognition (particularly face identification) applications can enable such surveillance to occur at a speed and scale far exceeding what humans could achieve by manually reviewing real-time video footage. Thus, these applications may pose higher privacy and civil rights risks than manual surveillance methods do.
Nevertheless, with appropriate safeguards in place, real-time video face recognition technology applications can produce benefits. For example, these applications can help identify missing and exploited children or human trafficking victims in online video footage and provide early warnings when unauthorized individuals try to enter a secure facility.
Benefits and Risks Across Use Cases
Benefits and risks vary across face recognition use cases, and many different aspects of the use cases can impact benefits and risks. Two of the most notable aspects are the size and quality of the reference gallery and the deployment context (i.e., who is using the technology and why they are using it).
Reference Gallery Size and Quality
Individuals may not always control whether their personal information becomes part of a face recognition technology reference gallery. Including an individual’s face template and other personal information in a reference gallery could enable strangers, who otherwise might not be able to inconspicuously determine who the individual is, to identify that individual using nothing more than an image of the individual’s face.
One face recognition technology vendor has faced particularly intense scrutiny and millions of dollars in fines for developing and deploying a product that uses a probe template from a user-submitted image to query a reference gallery filled with face templates generated from billions of web-scraped images. The vendor’s failure to notify and obtain consent from individuals in web-scraped images prior to using the images to generate a reference gallery is a significant source of concern. The unprecedented size of the reference gallery and the questionable accuracy of the information in the reference gallery have also elicited concern.
Smaller, better-curated galleries create distinct privacy concerns. Mugshot databases are much smaller than the aforementioned technology vendor’s reference gallery and are subject to legal and administrative requirements that can help ensure that the information they contain is accurate and that the images are high-quality. The higher image and information quality in mugshot databases may improve the accuracy of the face recognition technology results. Due to this heightened accuracy and the databases’ smaller sizes, using face recognition technology to query mugshot databases likely impacts fewer people’s privacy than using face recognition technology to query a larger database of web-scraped images. However, civil rights advocates emphasize that risks for people in mugshot databases are especially acute, particularly when law enforcement officers use face recognition technologies to query these databases to help generate investigative leads. Given the overrepresentation of marginalized groups in mugshot databases, using reference galleries derived solely from mugshot photos may exacerbate existing inequities.
State driver’s license databases and federal government passport photo databases likely include data from more people than state mugshot databases but fewer people than the aforementioned web-scraped image gallery. Like mugshot databases, state driver’s license and federal government passport photo databases are subject to legal and administrative oversight that may help ensure they include accurate information and higher-quality images. Unlike mugshot databases, state driver’s license databases and passport photo databases may not have the same overrepresentation of marginalized groups. Therefore, the privacy risks associated with querying reference galleries derived from driver’s license or passport photo databases are likely to fall somewhere between the privacy risks of querying reference galleries derived from billions of web-scraped images and the privacy risks of querying reference galleries derived from state mugshot databases.
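To make the relationship between reference gallery size and error likelihood more concrete, the following is a minimal sketch under the simplifying assumption that each comparison in a one-to-many search is an independent event with a fixed, hypothetical false match rate; real systems tune thresholds per use case, and their comparisons are not truly independent.

```python
# A minimal sketch, assuming independent comparisons and a fixed, hypothetical
# per-comparison false match rate; it illustrates only that the chance of at
# least one false match grows with the size of the reference gallery.
FALSE_MATCH_RATE = 1e-5  # hypothetical, not measured from any real system

for gallery_size in (10_000, 1_000_000, 1_000_000_000):
    p_any_false_match = 1 - (1 - FALSE_MATCH_RATE) ** gallery_size
    print(f"gallery of {gallery_size:>13,} faces -> "
          f"P(at least one false match) = {p_any_false_match:.4f}")
```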
Deployment Context
The individual or entity deploying face recognition technology, and the purpose the deployment serves, can also affect a use case’s benefits and risks.
Private-sector face recognition technology deployments that enable ticketless entry to entertainment venues, facilitate payments, provide access to medical records, or control access to residential or office buildings produce different benefits and risks. For example, in an entertainment venue, face recognition technologies could more efficiently confirm that individuals entering the venue have paid for a seat. In addition to potentially reducing wait times, these technologies might enable the venue to allocate more human resources to tasks that improve customer experience or address safety incidents. If the face recognition technologies produce an error, an individual without a ticket may be able to enter the venue (potentially resulting in a financial loss to the venue). It is also possible that an individual with a ticket would be unable to enter the venue (resulting in a financial loss to the ticketholder) or, more likely, would have to seek assistance getting into the venue (resulting in a delay). Nonetheless, if the face recognition technologies are more accurate than ticket scanners or other alternative methods of checking tickets, they could decrease the ticketholders’ and the venue’s risks of financial loss.
Using face recognition technologies to complete financial transactions, locate medical records, or access a residential or commercial building could also improve efficiency and reduce the need for objects (like keys, credit cards, and insurance cards) that people can lose. However, if the face recognition technologies produce an error, the potential negative consequences in these use cases are likely to be higher than in the entertainment venue use case. Failing to complete a financial transaction or completing a transaction with incorrect account information could result in individuals or organizations losing large sums of money, missing opportunities, or having sensitive account information compromised. Failing to locate medical records or locating the wrong medical records could result in individuals receiving ineffective or damaging treatment. Struggling to access one’s home could pose safety risks. A delay getting into work could result in docked pay (for hourly workers) or disciplinary action. If the face recognition technologies are more accurate and reliable than alternative mechanisms for identifying individuals or verifying their identities, though, using these technologies could decrease the risks of harm that arise from alternative mechanism errors.
Different public-sector face recognition technology deployments also produce different benefits and risks. Government agencies often have access to personal information contained in official records, like birth certificates and social security cards, on which individuals rely to access government benefits and pay taxes. Preventing unauthorized access to personal information in government databases is vital to protecting individual privacy, and accurately identifying taxpayers and benefit recipients is critical to combatting fraud and identity theft. The use of face recognition technologies to help perform these functions, therefore, can produce significant privacy risks (if the face recognition technologies are less accurate and secure than alternative methods) and significant privacy benefits (if the face recognition technologies are more accurate and secure than alternative methods).
Because law enforcement agencies can use face recognition technologies to assist in identifying and locating suspects who may be searched, arrested, and incarcerated, these use cases are especially high-stakes. Especially if face recognition technologies are inaccurate or deployed in a manner that leads to over-policing or surveillance, using face recognition technologies in law enforcement contexts can pose high privacy and civil rights risks, particularly for marginalized communities. However, if properly trained law enforcement officers’ use of high-performing face recognition technologies improves identification accuracy, these technologies can help protect privacy, civil rights, and public safety.
Data Security Benefits and Risks
Depending on how users leverage face recognition systems and manage associated data, the risk of a face recognition system data breach may be lower than, comparable to, or greater than the risk of an alternative identification or identity verification system data breach. Face recognition technology vendors employing best security practices keep face templates and other data encrypted, and recent innovations in homomorphic encryption have enabled face recognition technology users to query the system without decrypting the face template. As a result, even if an unauthorized user gains access to a face recognition system, the likelihood that the user would be able to obtain useful, decrypted data may be low.
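As a simplified illustration of keeping templates encrypted at rest (not of homomorphic matching, which allows comparison without decryption), the following sketch uses Python’s cryptography library on a made-up template; the template format and key handling are assumptions, and production systems depend on vendor-specific formats and key-management infrastructure.

```python
# A minimal sketch of encrypting a face template at rest, assuming a made-up
# 128-dimensional float template; this does not illustrate homomorphic matching.
import numpy as np
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a key-management service, not alongside the data
cipher = Fernet(key)

template = np.random.default_rng(0).random(128).astype(np.float32)  # stand-in for a real template
ciphertext = cipher.encrypt(template.tobytes())  # encrypted blob stored in the gallery

# A conventional (non-homomorphic) system must decrypt before comparing templates.
recovered = np.frombuffer(cipher.decrypt(ciphertext), dtype=np.float32)
assert np.array_equal(template, recovered)
```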
Even if an unauthorized user did manage to decrypt the face templates, the templates may not be particularly useful to the data breach perpetrator. Several face recognition technology vendors and industry groups emphasize that reverse-engineering face images from templates is challenging, if not impossible. Additionally, because each vendor’s product generates a different face template for the same individual, compromising one face recognition system would not jeopardize an individual’s ability to securely use another. This is one reason why many technologists and cybersecurity experts consider biometric data, like face templates, to be more secure identity credentials than alphanumeric passwords, which users frequently reuse across different systems.
Nevertheless, if switching to a new product that uses different face templates or changing the data breach victims’ face templates is infeasible, then the practical risks associated with a face recognition technology data breach may be higher. Individual users cannot change their own face templates the way that they can change their own alphanumeric passwords. Consequently, users likely will need more assistance from technology vendors and system operators in the event of a face recognition system data breach than in the event of a data breach involving a traditional alphanumeric password.
Conclusion
The risks and benefits that face recognition technologies can produce vary depending on the vendors’ and users’ data protection practices, the technologies’ accuracy and performance, the functional application, and the use case. Because face recognition technologies’ benefits and risks are sociotechnical in nature, assessing these benefits and risks often entails considering concerns about the fairness, appropriateness, and effectiveness of the systems and processes that face recognition technologies automate or expedite. Thus, even after developing a common understanding of face recognition technologies, their functional applications, how users deploy them, how well they perform, and their risks and benefits, stakeholders may disagree about whether their risks outweigh their benefits.
These disagreements are likely to persist and will continue to pose obstacles for face recognition technology legislation. State legislatures have passed bipartisan face recognition technology legislation, but pressure to pass federal legislation is unlikely to dissipate entirely. To have constructive discussions about face recognition technology policy, members of Congress will need to understand the technologies’ many terms and definitions, functional applications, use cases, accuracy and performance assessments, and sociotechnical risks and benefits. Through such discussions, reaching bipartisan compromises and passing federal face recognition technology legislation may be possible.