Don’t Trust Governments With A.I. Facial Recognition Technology

Affirmative: Ronald Bailey

(Illustration: Joanna Andreasson)

Do you want the government always to know where you are, what you are doing, and with whom you are doing it? Why not? After all, you’ve nothing to worry about if you’re not doing anything wrong. Right?

That’s the world that artificial intelligence (A.I.), coupled with tens of millions of video cameras in public and private spaces, is making possible. Not only can A.I.-amplified surveillance identify you and your associates, but it can track you using other biometric characteristics, such as your gait, and even identify clues to your emotional state.

While advancements in A.I. certainly promise tremendous benefits as they transform areas such as health care, transportation, logistics, energy production, environmental monitoring, and media, serious concerns remain about how to keep these powerful tools out of the hands of state actors who would abuse them.

“Nowhere to hide: Building safe cities with technology enablers and AI,” a report by the Chinese infotech company Huawei, explicitly celebrates this vision of pervasive government surveillance. Pitching A.I. as part of its Safe City solution, the company brags that “by analyzing people’s behavior in video footage, and drawing on other government data such as identity, economic status, and circle of acquaintances, AI could quickly detect indications of crimes and predict potential criminal activity.”

Already China has installed more than 500 million surveillance cameras to monitor its citizens’ activities in public spaces. Many are facial recognition cameras that automatically identify pedestrians and drivers and compare them against national photo and license tag ID registries and blacklists. Such surveillance detects not just crime but political protests. For example, Chinese police recently used such data to detain and question people who participated in COVID-19 lockdown protests.
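
To make concrete what that matching step involves, here is a minimal sketch in Python. The embed_face model and the two-person watchlist are hypothetical stand-ins; real deployments use trained embedding networks and registries with millions of entries, but the core operation is this kind of nearest-neighbor comparison of face embeddings.

```python
# Illustrative sketch of watchlist matching as commonly described for
# real-time facial recognition. The embedding model here is a stub;
# real systems use trained networks to map face images to vectors.
import numpy as np

rng = np.random.default_rng(0)

def embed_face(image) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding model.
    Returns a unit-length feature vector for the face in `image`."""
    v = rng.standard_normal(128)  # stub: random 128-d vector
    return v / np.linalg.norm(v)

# A registry/blacklist: identity -> stored embedding (stubbed here).
watchlist = {name: embed_face(None) for name in ["person_a", "person_b"]}

def match(frame, threshold: float = 0.6):
    """Compare a camera frame's face embedding against every watchlist
    entry by cosine similarity; flag the best match above threshold."""
    query = embed_face(frame)
    best_name, best_sim = max(
        ((name, float(query @ stored)) for name, stored in watchlist.items()),
        key=lambda pair: pair[1],
    )
    return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)

print(match(frame=None))  # with stub embeddings, usually no match
```

The unsettling part is how little machinery the matching itself requires: once embeddings for a population exist, checking every face in every frame against them is a few lines of arithmetic, repeated at camera scale.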

The U.S. now has an estimated 85 million video cameras installed in public and private spaces. San Francisco recently passed an ordinance authorizing police to ask for access to private live feeds. Real-time facial recognition technology is increasingly being deployed at American retail stores, sports arenas, and airports.

“Facial recognition is the perfect tool for oppression,” argue Woodrow Hartzog, a professor at Boston University School of Law, and Evan Selinger, a philosopher at the Rochester Institute of Technology. It is, they write, “the most uniquely dangerous surveillance mechanism ever invented.” Real-time facial recognition technologies would essentially turn our faces into ID cards on permanent display to the police. “Advances in artificial intelligence, widespread video and photo surveillance, diminishing costs of storing big data sets in the cloud, and cheap access to sophisticated data analytics systems together make the use of algorithms to identify people perfectly suited to authoritarian and oppressive ends,” they point out.

More than 110 nongovernmental organizations have signed the 2019 Albania Declaration calling for a moratorium on facial recognition for mass surveillance. U.S. signatories urging “countries to suspend the further deployment of facial recognition technology for mass surveillance” include the Electronic Frontier Foundation, the Electronic Privacy Information Center, Fight for the Future, and Restore the Fourth.

In 2021, the Office of the United Nations High Commissioner for Human Rights issued a report noting that “the widespread use by States and businesses of artificial intelligence, including profiling, automated decision-making and machine-learning technologies, affects the enjoyment of the right to privacy and associated rights.” The report called on governments to “impose moratoriums on the use of potentially high-risk technology, such as remote real-time facial recognition, until it is ensured that their use cannot violate human rights.”

That’s a good idea. So is the Facial Recognition and Biometric Technology Moratorium Act, introduced in 2021 by Sen. Ed Markey (D–Mass.) and others, which would make it “unlawful for any Federal agency or Federal official, in an official capacity, to acquire, possess, access, or use in the United States any biometric surveillance system; or information derived from a biometric surveillance system operated by another entity.”

This year the European Digital Rights network issued a critique of how the European Union’s proposed AI Act would regulate remote biometric identification. “Being tracked in a public space by a facial recognition system (or other biometric system)…is fundamentally incompatible with the essence of informed consent,” the report points out. “If you want or need to enter that public space, you are forced to agree to being subjected to biometric processing. That is coercive and not compatible with the aims of the…EU’s human rights regime (in particular rights to privacy and data protection, freedom of expression and freedom of assembly and in many cases non-discrimination).”

If we do not ban A.I.-enabled real-time facial-recognition surveillance by government agents, we run the risk of haplessly drifting into turnkey totalitarianism.

A.I. Isn’t Much Different From Other Software

Negative: Robin Hanson

Back in 1983, at the ripe age of 24, I was dazzled by media reports of amazing progress in artificial intelligence (A.I.). Not only could new machines diagnose as well as doctors, they said, but they seemed “almost” ready to displace humans wholesale! So I left graduate school and spent nine years doing A.I. research.

Those forecasts were quite wrong, of course. So were similar forecasts about the machines of the 1960s, 1930s, and 1830s. We are just bad at judging such timetables, and we often mistake a clear view for a short distance. Today we see a new generation of machines, and similar forecasts. Alas, we are still probably many decades from human-level A.I.

But what if this time really is different? What if we are actually close? It could make sense to try to protect human beings from losing their jobs to A.I.s by arranging for “robots took your job” insurance. Similarly, many might want to insure against the scenario where a booming A.I. economic sector grows much faster than others.

Of course it makes sense to subject A.I.s to the same sort of regulations as people when they take on similar roles. For example, regulations could prevent A.I.s from giving medical advice when insufficiently expert, from stealing intellectual property, or from helping students cheat on exams.

Some people, however, want us to regulate the A.I.s themselves, and much more than we do comparable human beings. Many have seen science fiction stories where cold, laser-eyed robots hunt down and kill people, and they are freaked out. And if the very idea of metal creatures with their own agendas seems to you a sufficient reason to limit them, I don’t know what I can say to change your mind.

But if you are willing to listen to reason, let’s ask: Are A.I.s really that dangerous? Here are four arguments that suggest we don’t have good reasons to regulate A.I.s more now than similar human beings.

First, A.I. is basically math and software, and these are among our least regulated industries. We mainly regulate them only when they control dangerous systems, like banks, planes, missiles, medical devices, or social media.

Second, new software systems are generally lab-tested and field-monitored in great detail. More so, in fact, than most other things in our world, since doing so is cheaper for software. Today we design, create, modify, test, and field A.I.s pretty much the same way we do other software; a sketch after these four arguments illustrates the point. Why would A.I. risk be higher?

Third, out-of-control software that fails to do as advertised, or that does other harmful things, mainly hurts the firms that sell it and their customers. But regulation works best when it prevents third parties from getting hurt.

Fourth, regulation is often counterproductive. Regulation to prevent failures works best when we have a clear idea of typical failure scenarios, and of their detailed contexts. And such regulation usually proceeds by trial and error. Since today we hardly have any idea of what could go wrong with future A.I.s, today looks too early for regulation.
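
To illustrate the second argument, here is a minimal sketch in Python of the kind of regression gate ordinary software pipelines already apply. The stand-in model and the evaluation set are both hypothetical; the point is the test harness, not the model.

```python
# A sketch of how an A.I. component can be gated by the same kind of
# regression test as any other software. The "model" here is a trivial
# stand-in for a trained classifier.

def model_predict(x: float) -> int:
    """Hypothetical deployed model: classifies inputs as 0 or 1."""
    return 1 if x > 0.5 else 0

# Held-out evaluation set (input, expected label) - assumed for illustration.
EVAL_SET = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.7, 1)]

def test_model_meets_accuracy_floor():
    """Fails the build if accuracy regresses below a fixed floor,
    exactly like a conventional software regression test."""
    correct = sum(model_predict(x) == y for x, y in EVAL_SET)
    accuracy = correct / len(EVAL_SET)
    assert accuracy >= 0.8, f"accuracy {accuracy:.2f} below 0.8 floor"

if __name__ == "__main__":
    test_model_meets_accuracy_floor()
    print("model passed regression gate")
```

Nothing in the harness is specific to A.I.; swap the stand-in for a trained model and the gate works the same way, which is Hanson’s point that A.I. is fielded like other software.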

The main argument that I can find in favor of extra regulation of A.I.s imagines the following worst-case scenario: An A.I. system might suddenly and unexpectedly, within an hour, say, “foom”—i.e., explode in power from being only smart enough to manage one building to being able to easily conquer the entire world, including all other A.I.s.

Is such an explosion even possible? The idea is that the A.I. might try to improve itself, and then it might find an especially effective series of changes to suddenly increase its abilities by a factor of billions or more. No computer system, or any other system really, has ever done such a thing. But in theory this remains possible.

Wouldn’t such an outcome just empower the firm that made this A.I.? But worriers also assume this A.I. is not just a computer system that does some tasks well but is a full “agent” with its own identity, history, and goals, including desires to survive and control resources. Firms don’t need to make their A.I.s into agents to profit from them, and yes, such an agent A.I. should start out with priorities that are well-aligned with its creator firm. But A.I. worriers add one last element: The A.I.’s values might, in effect, change radically during this foom explosion process to become unrecognizable afterward. Again, it is a possibility.

Thus some fear that any A.I., even the very weak ones we have today, might without warning turn agentlike, explode in abilities, and then change radically in values. If so, we would get an A.I. god with arbitrary values, who may kill us all. And since the only time to prevent this is before the A.I. explodes, worriers conclude that either all A.I. must be strongly regulated now, or A.I. progress must be greatly slowed.

To me, this all seems too extreme a scenario to be worth worrying about much now. Your mileage may vary.

What about a less extreme scenario, wherein a firm just loses control of an agent-like A.I. that doesn’t foom? Yes, the firm would be constantly testing its A.I.’s priorities and adjusting to keep them well aligned. And when A.I.s were powerful, the firm might use other A.I.s to help. But what if the A.I. got clever, deceived its maker about its values, and then found a way to slip out of its maker’s control?

That sounds to me a lot like a military coup, whereby a nation loses control of its military. That’s bad for a nation, and each nation should try to watch out for and prevent such coups. But when there are many nations, such an outcome is not especially bad for the rest of the world. And it’s not something that one can do much to prevent long before one has the foggiest idea of what the relevant nations or militaries might look like.

A.I. software isn’t that much different from other software. Yes, future A.I.s may display new failure modes, and we may then want new control regimes. But why try to design those now, so far in advance, before we know much about those failure modes or their usual contexts?

One can imagine crazy scenarios wherein today is the only day to prevent Armageddon. But within the realm of reason, now is not the time to regulate A.I.

Subscribers have access to Reason's whole May 2023 issue now. These debates and the rest of the issue will be released throughout the month for everyone else. Consider subscribing today!
