Pandemic changes facial recognition debate

A surveillance camera is seen along Boylston Street in Boston. The Boston City Council voted unanimously on June 24, 2020, to ban the use of facial recognition technology by city government. AP PHOTO

For the Gazette
Published: 3/23/2021 9:32:36 AM

BOSTON — With masks a mandatory accessory of everyday life, law enforcement's controversial use of facial recognition to identify potential crime suspects has drawn heightened concern and fresh scrutiny, driven by advances in artificial intelligence.

People might use facial recognition technology to unlock their iPhones or tag their friends in photos on Facebook, but law enforcement can use this technology to predict where crimes might occur, who might commit crimes and who victims could be.

Artificial intelligence emerged as a field of study in the mid-20th century. Facial recognition technology can be credited to Woody Bledsoe, a scientist from Oklahoma. He earned his bachelor’s degree in mathematics in 1948 and went on to further his studies at the University of California, Berkeley.

He and Iben Browning, a co-worker at Sandia Corp., an atomic energy research company, created a program that recognized patterns. Initially, their creation, named the Bledsoe-Browning N-Tuple Method, was capable of recognizing only letters. Once the system identified a pattern, it could match it with similar patterns and continue to learn.
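The n-tuple idea can be sketched in a few lines of Python. Everything below — the tiny 3x3 "letters," the class names and the parameters — is invented for illustration; it captures only the general flavor of memorizing the pixel patterns each tuple sees per class, not Bledsoe and Browning's actual implementation.

```python
import random

def make_tuples(width, height, n_tuples, tuple_size, seed=0):
    """Sample fixed random groups of pixel positions ("tuples") once."""
    rng = random.Random(seed)
    pixels = [(x, y) for x in range(width) for y in range(height)]
    return [tuple(rng.sample(pixels, tuple_size)) for _ in range(n_tuples)]

class NTupleClassifier:
    def __init__(self, tuples):
        self.tuples = tuples
        # class label -> one set of seen pixel patterns per tuple
        self.memory = {}

    def train(self, image, label):
        seen = self.memory.setdefault(label, [set() for _ in self.tuples])
        for i, positions in enumerate(self.tuples):
            seen[i].add(tuple(image[y][x] for x, y in positions))

    def classify(self, image):
        # Assign the class whose memorized patterns match the most tuples.
        def score(label):
            seen = self.memory[label]
            return sum(
                tuple(image[y][x] for x, y in positions) in seen[i]
                for i, positions in enumerate(self.tuples)
            )
        return max(self.memory, key=score)

# Two toy 3x3 binary "letters": a vertical bar and a horizontal bar.
VERT = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
HORIZ = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]

clf = NTupleClassifier(make_tuples(3, 3, n_tuples=8, tuple_size=2))
clf.train(VERT, "I")
clf.train(HORIZ, "-")
print(clf.classify(VERT))  # exact match with its training pattern -> "I"
```

The key property the article describes — that the system keeps learning — falls out naturally: each additional training image simply adds more patterns to a class's memory sets.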

Since then, the technology has grown exponentially more complex. Recent updates from Apple and the Japanese company NEC have produced systems capable of identifying people wearing face masks.

Law enforcement agencies and private companies have a history of using facial recognition to store data, identify people of interest and unlock smart devices. As debate over privacy and what the technology is truly capable of has intensified, so too has the public’s concern over police use of this tool.

Ideally, artificial intelligence and facial recognition would ease the burden on police, keep criminals off the street and keep the public safe, but inconsistencies in the programming mean the results are not always reliable.

Joy Buolamwini, a computer scientist at the Massachusetts Institute of Technology Media Lab, created the Gender Shades Project as her thesis. Her research documented what she called algorithmic bias: facial recognition programs identified lighter-skinned people more accurately than those with darker skin. She ran more than a thousand images of government officials’ faces through facial recognition software from the tech companies IBM, Microsoft and Face++.

Her results established a clear bias.

The programs could distinguish between male and female faces and were fairly accurate overall, but had a harder time identifying people with darker skin tones.

“As we tested women with darker and darker skin, the chances of being correctly gendered came close to a coin toss,” Buolamwini said in her overview of the project. “These data-centric technologies are vulnerable to bias and abuse.”
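An audit like Gender Shades boils down to a simple comparison: measure a classifier's accuracy separately for each demographic group and look at the gap. The sketch below shows that computation on invented data — the group names, labels and predictions are hypothetical, not Buolamwini's actual dataset or results.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, predicted in records:
        total[group] += 1
        correct[group] += (truth == predicted)
    # Per-group accuracy: fraction of predictions that matched the truth.
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data: (skin-tone group, true gender, predicted gender).
records = [
    ("lighter", "F", "F"), ("lighter", "M", "M"), ("lighter", "F", "F"),
    ("lighter", "M", "M"), ("darker", "F", "M"), ("darker", "F", "F"),
    ("darker", "M", "M"), ("darker", "F", "M"),
]
print(accuracy_by_group(records))  # {'lighter': 1.0, 'darker': 0.5}
```

A gap like the one in this toy output — perfect accuracy for one group, a coin toss for another — is exactly the pattern the project reported, which is why auditors break results down by group rather than citing a single overall accuracy number.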

People have already been falsely accused because of errors in artificial intelligence.

The American Civil Liberties Union found inaccuracies in Amazon’s facial recognition technology. And Robert Williams, a Black man, was wrongfully accused of a crime based on a facial recognition match and held in police custody for over a day.

The American Association for the Advancement of Science, a nonprofit whose mission is to “advance science, engineering and innovation,” held a public talk in 2019 on the responsible use of artificial intelligence. The discussion, moderated by Jessica Wyndham, director of the association’s scientific responsibility, human rights and law program, included legal advisers and scientists in the artificial intelligence field.

Developers and distributors of artificial intelligence become knowledgeable through independent testing, said speaker Jonathon Phillips.

As an electronics engineer at the National Institute of Standards and Technology’s Information Technology Laboratory, he said companies develop different methodologies to test accuracy when trying to identify different demographics. He did not comment on the level of accuracy of specific companies.

Speaker Neema Singh Guliani said the technology is fundamentally flawed, since it can’t be 100% accurate, and those flaws may fall disproportionately on certain individuals.

Guliani, who served as senior legislative counsel with the American Civil Liberties Union from 2013 to 2020, said gaps in what the technology can and cannot recognize are often amplified because it is being deployed in contexts that have histories of bias and discrimination. There are biases in how often people are stopped and arrested, Guliani said.

“We’re seeing some problems that we all worry about,” Guliani said. “Discrimination, disparities in the criminal justice system, problems that are enormous already that could just get worse.”

Guliani said the risks and threats from artificial intelligence are unique in terms of the threat they pose to civil liberties. She thinks there’s a need for a large societal debate about whether the technology should be used at all.

Guliani said use of the technology by the government or private companies on a national level doesn’t seem to have the public’s best interest in mind, and that these entities need to be more transparent with people about how artificial intelligence is used.

These issues may take on greater weight in July, when a new Massachusetts state law will allow law enforcement to use facial recognition only through a written request, accompanied by a warrant, submitted to the Registrar of Motor Vehicles. Even then, the images can’t include movement or video data.

Law enforcement agencies must also keep records of any facial recognition searches they conduct and submit the documentation for quarterly review to the Executive Office of Public Safety and Security. The documentation must also be published annually, before September of each year, on the EOPSS website.

These changes came out of a police reform bill first drafted after the murder of George Floyd in 2020, amid nationwide Black Lives Matter protests.

Gov. Charlie Baker originally refused to sign the bill if it banned police use of facial recognition technology, eventually signing an amended bill into law on Dec. 31. Use of facial recognition technology is still banned in Boston, but police agencies in other cities can use the technology.

Eileen Qiu writes for the Gazette from the Boston University Statehouse Program.
