A 61-year-old grandfather is suing Sunglass Hut’s parent company after the store’s facial recognition technology mistakenly identified him as a robber. Harvey Eugene Murphy Jr. was subsequently held in jail, where he says he was sexually assaulted, according to the lawsuit.
The January 2022 robbery took place at a Sunglass Hut store in Houston, Texas, when two gun-wielding robbers stole thousands of dollars in cash and merchandise.
Houston police identified Murphy as a suspect – even though he was living in California at the time.
When Murphy returned to Texas to renew his driver’s license, he was arrested. He was held in jail, where he says he was sexually assaulted by three men in a bathroom. He says he suffered lifelong injuries.
The Harris County District Attorney’s office in Texas determined Murphy was not involved in the robbery – but the damage was already done while he was in jail, his lawyers said in a news release.
Facial recognition is often used to match faces in surveillance footage – such as video of a store robbery – with images in a database. The system often uses booking photos, but the software can also search driver’s license photos, meaning if you have a license, your picture might have been searched even if you’ve never committed a crime.
Murphy has a criminal record from the 1980s and 1990s, meaning he likely has a booking photo. His lawyers said those offenses were not violent and he has built a new life in the last 30 years, according to the press release.
He is now suing Sunglass Hut’s parent company EssilorLuxottica and Macy’s, a partner of the company. The head of EssilorLuxottica’s loss prevention team said they worked alongside Macy’s and had identified Murphy as the suspect using facial recognition software.
Murphy’s attorneys are arguing that facial recognition is error-prone and low-quality cameras were used, increasing the probability of a mistake in identifying a suspect.
A Sunglass Hut employee identified Murphy as the suspect in a police photo lineup – but Murphy’s lawyers allege the loss prevention team met with her before that, possibly tainting the investigation.
“Mr. Murphy’s story is troubling for every citizen in this country,” said Daniel Dutko, one of the lawyers representing Murphy. “Any person could be improperly charged with a crime based on error-prone facial recognition software just as he was.”
In facial recognition used by law enforcement offices like the FBI, complex mathematical algorithms are used to compare a picture of a suspect’s face to potentially millions of others in a database. But it has its flaws.
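Under the hood, such systems typically reduce each face image to a numeric embedding vector and rank database entries by similarity, accepting the top match only if it clears a score threshold. As a rough illustration only (not any vendor's or agency's actual pipeline — the function names and threshold here are hypothetical), a minimal nearest-match search might look like:

```python
import numpy as np

def cosine_similarity(a, b):
    # Compare two face-embedding vectors; 1.0 means identical direction.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, gallery, threshold=0.6):
    # Return the index of the closest gallery embedding, or None if no
    # score clears the threshold. This is the step where false matches
    # arise: a low threshold or a poor-quality probe image can make an
    # innocent person the "best" match.
    scores = [cosine_similarity(probe, g) for g in gallery]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```

The threshold trade-off is the crux of the wrongful-arrest cases described above: set it low and grainy surveillance footage will return confident-looking matches to the wrong person.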
In 2023, the Federal Trade Commission banned Rite Aid from using the technology after the company’s faulty system had employees wrongfully accusing shoppers of stealing. In one incident, an 11-year-old girl was stopped and searched by a Rite Aid employee based on a false match.
The FTC said the faulty technology often misidentifies Black, Asian and Latino consumers, as well as women.
In 2023, a woman sued the Detroit Police Department after authorities misidentified her as a carjacking suspect using facial recognition technology. Porcha Woodruff, who was eight months pregnant at the time of her wrongful arrest, went to jail after being incorrectly identified in a police lineup. Detroit Police Chief James White said Woodruff’s photo should not have been used in the lineup to begin with.
CBS News reached out to EssilorLuxottica for comment and is awaiting a response. Macy’s declined to comment. Murphy’s lawyers had no additional comment.
Caitlin O’Kane is a New York City journalist who works on the CBS News social media team as a senior manager of content and production. She writes about a variety of topics and produces “The Uplift,” CBS News’ streaming show that focuses on good news.
In the race to rein in artificial intelligence, Western governments have hit a major bump in the road: they all want to win.
Officials from the European Union, the United States and other major economies are competing to write the definitive rules for artificial intelligence, including for the likes of OpenAI’s ChatGPT and Google’s Bard.
Rival summits will be held in the fall with the aim of reaching a coordinated plan between Western governments on how to regulate the emerging technology. But these upcoming events risk entrenching divisions between countries in ways that threaten to undermine efforts to draw up a unified international rulebook on AI. To make matters worse, some of the talks are now getting personal.
“Everyone is committed to making this work,” said a European Commission official involved in negotiations over AI rules. “But right now, there are a lot of egos in the room.”
Western politicians are keen to show voters they are on top of a technology that burst into the public’s consciousness almost overnight.
AI advocates say the economic opportunities offered by rolling out the technology range from quicker diagnoses of diseases to the development of autonomous vehicles. Skeptics warn AI could lead to a surge in unemployment and — in the very worst scenarios — global armageddon, if automated systems gain uncontrollable power.
Experts argue a common Western rulebook is vital to allow companies that use the technology to operate with ease internationally because AI is inherently a cross-border tool. Common rules would also protect people from Berlin to Boston from the technology’s potential harms, including minority groups potentially suffering discrimination from automated AI tools.
“We really don’t have a systematic global response to what we should do about the many risks,” said Gary Marcus, a psychologist and cognitive scientist at New York University who wants to see greater checks on AI. “Every country is trying to do something on its own.”
While governments in the West argue among themselves, China is pressing ahead with its own rulebook. The Chinese Communist Party says it’s seeking to protect its citizens from AI’s risks. But Beijing’s critics say its regulation will be designed to serve its authoritarian ends.
Governments in the West worry that China’s totalitarian take on AI, including the technology’s wholesale use for national security purposes, may gain ground across the developing world if they don’t promote their own blueprint as an alternative.
For this article, POLITICO spoke to six Western officials working on the AI summits, who were granted anonymity to discuss the challenges they face.
In September, officials from the G7 group of Western industrialized economies are expected to meet to finalize a blueprint for how to regulate AI, according to two officials with direct knowledge of the talks.
That gathering will then be followed by a more formal summit of G7 leaders, likely in October or November, the officials said. European and U.S. officials hope the G7 work will bolster their joint attempt to limit the risks of generative AI and develop safe ways to use the technology to jumpstart economic growth.
The U.K. has also pitched itself as a world leader on AI safety and is expected to host its own summit, in London in November. British Prime Minister Rishi Sunak views the event as a chance to enhance the country’s role as a global player seven years after the country’s Brexit referendum.
Officials involved in these overlapping AI projects describe a complex diplomatic tussle. International rivalries, diplomatic realpolitik and — above all — fears about how China will promote its own AI rules have complicated preparations for the meetings. Not all Western capitals, particularly within the EU, view Beijing’s stance on AI as contradictory to their own.
Divisions on how best to police the technology have also slowed down the process of reaching agreement. The EU wants to take a more aggressive stance on policing AI, while the U.S., U.K. and Japan would prefer more industry-led commitments. It’s unclear whether these differences can be overcome before the proposed summits later this year.
Egos, not policy
Three Western officials, who spoke on the condition of anonymity to discuss internal deliberations, complained that people’s egos — and not efforts to regulate AI — had taken over discussions linked to the G7 and U.K. summit events.
Since the EU first proposed adding AI oversight to the G7’s agenda in late April and followed that up with a two-page memo to the U.S. in late May, representatives from cooperating governments have been sparring privately to take credit for the West’s plans, the officials added.
That behavior has included adding to the draft G7 document in ways that favored their own stance on AI governance; taking credit, publicly, for the conclusions of the upcoming G7 summit; and dismissing others’ views in often backhanded comments while drafting proposals.
Brussels wants its own AI legislation, which is expected to be completed by December, to form the basis of measures adopted by other leading democracies, according to two European Commission officials involved in that process. That plan involves pushing for mandatory curbs on how AI is deployed in so-called “high-risk” cases like the use of facial recognition technology in law enforcement.
Washington is eager to press its more industry-friendly approach, and the White House published a set of voluntary commitments that Amazon and Microsoft have agreed to support. These non-binding pledges, which include promises to allow outsiders to test the firms’ AI systems for biases and other societal safeguards, are, in part, an effort to get ahead of similar proposals at the heart of the G7’s upcoming summit, according to one U.S. official.
“Any kind of international level agreement will have to be at the level of very vague principles,” said Suresh Venkatasubramanian, a computer scientist at Brown University, who co-wrote the White House’s guidelines for how U.S. agencies should oversee AI. “Everyone wants to do their own thing.”
This system, which has faced pushback from digital rights organizations and United Nations experts, will get its spotlight moment at the 2024 Paris Summer Olympics. In July next year, France will deploy large-scale, real-time, algorithm-supported video surveillance cameras — a first in Europe. (Not included in the plan: facial recognition.)
Last month, the French parliament approved a controversial government plan to allow investigators to track suspected criminals in real-time via access to their devices’ geolocation, camera and microphone. Paris also lobbied in Brussels to be allowed to spy on reporters in the name of national security.
Helping France down the path of mass surveillance: a historically strong and centralized state; a powerful law enforcement community; political discourse increasingly focused on law and order; and the terrorist attacks of the 2010s. In the wake of President Emmanuel Macron’s agenda for so-called strategic autonomy, French defense and security giants, as well as innovative tech startups, have also gotten a boost to help them compete globally with American, Israeli and Chinese companies.
“Whenever there’s a security issue, the first reflex is surveillance and repression. There’s no attempt in either words or deeds to address it with a more social angle,” said Alouette, an activist at French digital rights NGO La Quadrature du Net who uses a pseudonym to protect her identity.
As surveillance and security laws have piled up in recent decades, advocates have lined up on opposite sides. Supporters argue law enforcement and intelligence agencies need such powers to fight terrorism and crime. Algorithmic video surveillance would have prevented the 2016 Nice terror attack, claimed Sacha Houlié, a prominent lawmaker from Macron’s Renaissance party.
Opponents point to the laws’ effect on civil liberties and fear France is morphing into a dystopian society. In June, the watchdog in charge of monitoring intelligence services said in a harsh report that French legislation is not compliant with the European Court of Human Rights’ case law, especially when it comes to intelligence-sharing between French and foreign agencies.
“We’re in a polarized debate with good guys and bad guys, where if you oppose mass surveillance, you’re on the bad guys’ side,” said Estelle Massé, Europe legislative manager and global data protection lead at digital rights NGO Access Now.
A history of surveillance
Both the 9/11 attacks and the Paris 2015 terror attacks accelerated mass surveillance in France, but the country’s tradition of snooping, monitoring and data collection dates way back — to Napoléon Bonaparte in the early 1800s.
“Historically, France has been at the forefront of these issues, in terms of police files and records. During the First Empire, France’s highly centralized government was determined to keep the entire territory under close watch,” said Olivier Aïm, a lecturer at Sorbonne Université Celsa who authored a book on surveillance theories. Before electronic devices, paper was the main tool of control because identification documents were used to monitor travel, he explained.
The French emperor revived the Paris Police Prefecture — which exists to this day — and tasked law enforcement with new powers to keep political opponents in check.
In the 1880s, Alphonse Bertillon, who worked for the Paris Police Prefecture, introduced a new way of identifying suspects and criminals using biometric features — the forerunner of facial recognition. The Bertillon method would then be emulated across the world.
Between 1870 and 1940, under the Third Republic, the police kept a massive file — dubbed the National Security’s Central File — with information about 600,000 people, including anarchists and communists, certain foreigners, criminals, and people who requested identification documents.
After World War II ended, a bruised France moved away from hard-line security discourse until the 1970s. And in the early days of the 21st century, the 9/11 attacks in the United States marked a turning point, ushering in a steady stream of controversial surveillance laws — under both left- and right-wing governments. In the name of national security, lawmakers started giving intelligence services and law enforcement unprecedented powers to snoop on citizens, with limited judiciary oversight.
“Surveillance covers a history of security, a history of the police, a history of intelligence,” Aïm said. “Security issues have intensified with the fight against terrorism, the organization of major events and globalization.”
The rise of technology
In the 1970s, before the era of omnipresent smartphones, French public opinion initially pushed back against using technology to monitor citizens.
In 1974, as ministries started using computers, Le Monde revealed a plan to merge all citizens’ files into a single computerized database, a project known as SAFARI.
The project, abandoned amid the resulting scandal, led lawmakers to adopt robust data protection legislation — creating the country’s privacy regulator CNIL. France then became one of the few European countries with rules to protect civil liberties in the computer age.
However, the mass spread of technology — and more specifically video surveillance cameras in the 1990s — allowed politicians and local officials to come up with new, alluring promises: security in exchange for surveillance tech.
In 2020, there were about 90,000 video surveillance cameras operated by the police and the gendarmerie in France. The state helps local officials finance them via a dedicated public fund. After France’s violent riots in early July — which also saw Macron float social media bans during periods of unrest — Interior Minister Gérald Darmanin announced he would swiftly allocate €20 million to repair broken video surveillance devices.
In parallel, the rise of tech giants such as Google, Facebook and Apple in everyday life has led to so-called surveillance capitalism. And for French policymakers, U.S. tech giants’ data collection has over the years become an argument to explain why the state, too, should be allowed to gather people’s personal information.
“We give Californian startups our fingerprints, face identification, or access to our privacy from our living room via connected speakers, and we would refuse to let the state protect us in the public space?” Senator Stéphane Le Rudulier from the conservative Les Républicains said in June to justify the use of facial recognition on the street.
Strong state, strong statesmen
Resistance to mass surveillance does exist in France at the local level — especially against the development of so-called safe cities. Digital rights NGOs can boast a few wins: In the south of France, La Quadrature du Net scored a victory in an administrative court, blocking plans to test facial recognition in high schools.
Some grassroots movements have opposed surveillance schemes at the local level, but the nationwide legislative push has continued | Ludovic Marin/AFP via Getty Images
At the national level, however, security laws are too powerful a force, despite a few ongoing cases before the European Court of Human Rights. For example, France has de facto ignored multiple rulings from the EU’s top court that deemed mass data retention illegal.
Often at the center of France’s push for more state surveillance: the interior minister. This influential office, whose constituency includes the law enforcement and intelligence community, is described as a “stepping stone” toward the premiership — or even the presidency.
“Interior ministers are often powerful, well-known and hyper-present in the media. Each new minister pushes for new reforms, new powers, leading to the construction of a never-ending security tower,” said Access Now’s Massé.
Under Socialist François Hollande, Manuel Valls and Bernard Cazeneuve both went from interior minister to prime minister, in 2014 and 2016 respectively. Nicolas Sarkozy, Jacques Chirac’s interior minister from 2005 to 2007, was then elected president. All shepherded new surveillance laws during their tenures.
In the past year, Darmanin has been instrumental in pushing for the use of police drones, even going against the CNIL.
For politicians, even at the local level, there is little to gain electorally by arguing against expanded snooping and the monitoring of public space. “Many on the left, especially in complicated cities, feel obliged to go along, fearing accusations of being soft [on crime],” said Noémie Levain, a legal and political analyst at La Quadrature du Net. “The political cost of reversing a security law is too high,” she added.
It’s also the case that there’s often little pushback from the public. In March, on the same day a handful of French MPs voted to allow AI-powered video surveillance cameras at the 2024 Paris Olympics, about 1 million people took to the streets to protest against … Macron’s pension reform.
Sovereign cameras
For politicians, France’s industrial competitiveness is also at stake. The country is home to defense giants that dabble in both the military and civilian sectors, such as Thalès and Safran. Meanwhile, Idemia specializes in biometrics and identification.
“What’s accelerating legislation is also a global industrial and geopolitical context: Surveillance technologies are a Trojan horse for artificial intelligence,” said Caroline Lequesne Rot, an associate professor at the Côte d’Azur University, adding that French policymakers are worried about foreign rivals. “Europe is caught between the stranglehold of China and the U.S. The idea is to give our companies access to markets and allow them to train.”
In 2019, then-Digital Minister Cédric O told Le Monde that experimenting with facial recognition was needed to allow French companies to improve their technology.
France’s surveillance apparatus will be on full display at the 2024 Olympic Games | Patrick Kovarik/AFP via Getty Images
For the video surveillance industry — which made €1.6 billion in France in 2020 — the 2024 Paris Olympics will be a golden opportunity to test their products and services and showcase what they can do in terms of AI-powered surveillance.
XXII — an AI startup with funding from the armed forces ministry and at least some political backing — has already hinted it would be ready to secure the mega sports event.
“If we don’t encourage the development of French and European solutions, we run the risk of later becoming dependent on software developed by foreign powers,” wrote lawmakers Philippe Latombe, from Macron’s allied party Modem, and Philippe Gosselin, from Les Républicains, in a parliamentary report on video surveillance released in April.
“When it comes to artificial intelligence, losing control means undermining our sovereignty,” they added.
At 25 airports in the U.S. and Puerto Rico, the TSA is expanding a controversial digital identification program that uses facial recognition. This comes as the TSA and other divisions of Homeland Security are under pressure from lawmakers to update technology and cyber security. Kris Van Cleave reports.
The owner of New York’s famed Madison Square Garden and Radio City Music Hall has been using face-recognition technology to keep some lawyers out of its venues. Now, New York’s top prosecutor is warning MSG Entertainment that its policy may violate the law, according to a letter from the state attorney general’s office.
The Jan. 25 letter notes that the ban on individuals entering the buildings, as well as MSG’s use of facial recognition technology to enforce it, may violate civil rights laws. Banning lawyers who represent clients in litigation against the company could discourage attorneys from taking on cases, including sexual harassment or job discrimination claims against MSG.
“MSG Entertainment cannot fight their legal battles in their own arenas,” Attorney General Letitia James, a Democrat, said in a statement. “Madison Square Garden and Radio City Music Hall are world-renowned venues and should treat all patrons who purchased tickets with fairness and respect.”
MSG operates Madison Square Garden, Radio City Music Hall and the Beacon Theatre on Manhattan’s Upper West Side. The venue operator has a policy of barring any lawyer who works for a firm involved in litigation with MSG from attending sports games, comedy shows and other events. That policy covers some 90 law firms, potentially affecting “thousands of lawyers,” according to James.
The lawyer ban first surfaced in October, when Larry Hutcher, who had held New York Knicks season tickets for 50 years, found himself blocked from entering the NBA team’s home arena at Madison Square Garden and was told that his seats were revoked. Hutcher later sued MSG over the policy, claiming he was “summarily discarded by MSG without warning solely because he fulfilled his ethical duties to his clients.”
Since Hutcher’s suit, other lawyers have come forward with stories about being blocked from concerts, sporting events and shows including the Rockettes’ Christmas Spectacular.
Dolan weighs in
MSG Entertainment denies that its policy was discriminatory.
The policy “does not unlawfully prohibit anyone from entering our venues and it is not our intent to dissuade attorneys from representing plaintiffs in litigation against us. We are merely excluding a small percentage of lawyers only during active litigation,” the company said in a statement.
The statement also said that the partial lawyer ban “has never applied to attorneys representing plaintiffs who allege sexual harassment or employment discrimination.”
MSG Entertainment CEO James Dolan defended the ban on Thursday in an interview with Good Day New York.
“If somebody sues you, that’s confrontational,” he said. “If you’re being sued, you don’t have to welcome the person into your home.”
However, civil liberties groups warn that these bans, based on facial-recognition technology with known flaws, would have a chilling effect on speech and Americans’ access to public spaces.
“Who will actually go to court against the country’s largest companies if they can retaliate this way?” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, said in a statement. “If New Yorkers can be banned from a Rangers game, they can be banned from the grocery store or the pharmacy. These technologies are ripe for abuse, and it’s long past time that the city and state ban them.”
The New York legislature this week introduced a bill that would prevent sports venues from barring entry to people holding a valid ticket.
PARIS — The French data protection authority’s president Marie-Laure Denis warned Tuesday against using facial recognition as part of the 2024 Paris Summer Olympics security toolkit.
“The members of the CNIL’s college call on parliamentarians not to introduce facial recognition, that is to say the identification of people on the fly in the public space,” she told Franceinfo.
The French government is seeking to ramp up France’s arsenal of surveillance powers to ensure the safety of the millions of tourists expected for the 2024 Paris Summer Olympics. The plans include AI-powered cameras for the first time — but not facial recognition.
The Senate begins voting today, in plenary session, on the law introducing the new powers. Senators are divided between those who want to add privacy safeguards and those who want to push the surveillance and security arsenal further, mainly by introducing facial recognition.
“The amendment [to include facial recognition] was rejected in the Senate’s law committee, but it can come back [in the plenary session],” the CNIL’s chief cautioned.
Civil liberties NGOs such as La Quadrature du Net and the Human Rights League are currently campaigning against the experimental AI-powered surveillance cameras. Denis, however, tried to assuage concerns.
The CNIL will monitor algorithmic training to ensure there is no bias and that footage of people is deleted in due time, she said. The experiment will “not necessarily” become permanent, she added.
PARIS — France is seeking to massively expand its arsenal of surveillance powers and tools to secure the millions of tourists expected for the 2024 Paris Summer Olympics.
Among the plans are large-scale, real-time camera systems supported by an algorithm to spot suspicious behavior, including unattended luggage and alarming crowd movements like stampedes. Senators on Wednesday will vote on a law introducing the new powers, which are supposed to be temporary, with some lawmakers pushing to allow controversial facial-recognition technology.
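The kind of "algorithm-supported" flagging described here is generally built from rules layered on top of object tracking, rather than face identification. As a purely hypothetical toy sketch (the function name, frame rate and thresholds are illustrative assumptions, not details of the French system), a rule for unattended luggage might flag a tracked bag that stays in one spot too long:

```python
def flag_unattended(positions, max_still_frames=150, tol=2.0):
    # positions: per-frame (x, y) pixel coordinates of a tracked bag.
    # Flag when the bag stays within `tol` pixels of one spot for more
    # than `max_still_frames` consecutive frames (e.g. ~5 s at 30 fps).
    still = 0
    anchor = positions[0]
    for x, y in positions[1:]:
        if abs(x - anchor[0]) <= tol and abs(y - anchor[1]) <= tol:
            still += 1
            if still > max_still_frames:
                return True  # raise an alert for a human operator
        else:
            anchor, still = (x, y), 0  # bag moved; reset the timer
    return False
```

Real deployments replace the hand-tuned rule with learned models, but the design question is the same one critics raise: where the thresholds are set determines how often ordinary behavior gets flagged for police attention.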
The stakes are high: The government badly wants to avoid “failures” like the ones that dented its reputation during the Champions League final last summer, and the trauma of the 2015 Paris terror attacks still looms large over the country.
But the plans are already causing an uproar among privacy campaigners. “The Olympic Games are used as a pretext to pass measures the [security technology] industry has long been waiting for,” said Bastien Le Querrec from digital rights NGO La Quadrature du Net, who’s leading a campaign against algorithmic video surveillance.
The French government already backtracked on deploying facial recognition after lawmakers within President Emmanuel Macron’s majority party raised concerns. It was also forced by the country’s data protection authority and top administrative court to build in more privacy safeguards.
For now, the law would allow for “experimentation” with the surveillance systems, and the trial is supposed to end in June 2025 — 10 months after the sports competition wraps up.
Critics, however, fear the law will lead to unwanted surveillance in the long term.
One key question is what will happen to the AI-powered devices once the Olympic Games are over, especially since the legislation mentions not only sports events but also “festive” and “cultural” gatherings. In the past, Le Querrec warned, security measures initially designed to be temporary — for example, under the state of emergency that followed the 2015 attacks — ended up becoming permanent.
Whether the tech survives the Olympics will depend on how the final law is written, according to Francisco Klauser, a professor at the University of Neuchâtel, who has written about surveillance and sporting events.
“In the history of mega-events, there is always a legacy,” he said. Countries staging major events are under “extraordinary circumstances and time pressure” that often mean systems get deployed that otherwise “would have been debated much more heavily,” he added.
For the 2024 Olympics, France already has the cameras but will need to buy the software to analyze footage, an official from the interior ministry told POLITICO.
Philippe Latombe, an MP from the centrist Macron-allied party Modem, said that French companies such as Atos, Idemia, XXII and Datakalab, among others, would be able to provide such tech. The lawmaker is co-chairing a fact-finding mission on video surveillance in public spaces.
After the Senate votes on the law to allow “experimentations” with the surveillance systems, the legislation will go to the National Assembly, and lawmakers in both chambers are expected to fight over the balance between privacy and security.
Time is already running out, Latombe warned, as algorithms will need to be trained on datasets for months before the Olympics kick off.