Jim Feldkamp has had a lengthy career serving in the military and working for the federal government in a handful of capacities, becoming an expert on topics such as national security and cybersecurity along the way. Here, he discusses recent concerns over police use of facial recognition and what lawmakers are doing to protect civilian freedoms.
Facial recognition is no longer the stuff of science fiction, Jim Feldkamp tells us. It's used across the globe today to unlock our phones, verify travelers' identities against their tickets, screen crowds at large events and concerts, and more. Police have even used facial recognition technology in recent years to crack down on criminals, though it has stirred considerable unease among the public.
“There’s no doubt that this technology can help us tackle cold criminal cases and do things like help families find missing children faster, but at a price,” says Jim Feldkamp. “Facial recognition technology isn’t perfect and comes with its fair share of biases, computational mistakes, and concerns over individual freedoms.”
Today, US law enforcement officers use the tech to identify suspects––such as in the case of the Capital Gazette newspaper office shooter––and confirm leads in routine criminal investigations. Screenshots from security footage can be used to search for individuals accused of crimes in computer databases and potentially locate their whereabouts. Some counties report using facial recognition technology as often as a thousand times or more each year.
Jim Feldkamp tells us that officers also use photos pulled from security footage or other sources to look up potential suspects in jail booking databases. The process takes only seconds and can provide a number of leads to police. In the past, manually searching through databases without the technology could take anywhere from days to weeks, or even months.
There’s a major flaw in the technology, though.
“Facial recognition systems today have a difficult time accurately identifying racial minorities, women, and youth,” says Jim Feldkamp. “Some reports put their error rates as high as one in three, a stark contrast to the roughly 1 percent error rate for white males.”
Feldkamp believes that the people who build these systems, whether they realize it or not, may create algorithms that encode their own biases. And beyond this glaring problem, he says agencies have the potential to use the tech to passively spy on people without any reasonable suspicion or consent. All of this has stirred up worry among civilians and lawmakers, many of whom want to ban the technology entirely.
“A new bill introduced in the Senate earlier this week may bar law enforcement agencies from using facial recognition tech,” says Jim Feldkamp. “The bill proposes a moratorium on the technology until a commission recommends guidelines and limitations for government use. Because it’s not always accurate, and because there’s so much room for abuse, this new bill will help us avoid encroaching on First Amendment rights and impacting civil liberties. Whether we can actually settle on unified national guidelines, however, will be the real question.”