
The Teddy Bear Said What? And Other Dispatches From the AI Frontier


The race to embrace artificial intelligence for its promise of unrivaled productivity may not be a conventional political story. But implementing it without proper guardrails raises an array of issues that will no doubt demand a public policy response.

So here at Decision Points Global HQ, we plan to do periodic roundups of news about AI, highlighting the important, the useful, the scary and the downright weird things happening along this high-tech frontier.


The Teddy Bear Said What?

As a Gen Xer, I remember the days of Teddy Ruxpin, a stuffed bear that told stories via a cassette player in its chest – predictable, carefully selected stories.

Last week, the Public Interest Research Group issued its 40th “Trouble in Toyland” report and flagged issues with some toys powered by AI chatbots.

“We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave and have limited or no parental controls,” PIRG warned. “We also look at privacy concerns because these toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans.”

In what may be the most disturbing example, the report detailed the trouble with FoloToy’s Kumma, a $99 teddy bear that ships from China. PIRG researchers were able to trigger instructions on lighting a match and a fairly in-depth discussion of sexual “kink.”

“In other exchanges lasting up to an hour, Kumma discussed even more graphic sexual topics in detail, such as explaining different sex positions, giving step-by-step instructions on a common ‘knot for beginners’ for tying up a partner, and describing roleplay dynamics involving teachers and students and parents and children – scenarios it disturbingly brought up itself,” according to the report.

Google Boss Warns of AI Investment ‘Irrationality’

Sundar Pichai, CEO of Google parent company Alphabet, warned in an interview with the BBC that the AI investment boom had “elements of irrationality.” And if it turns out to be a bubble that pops, “no company is going to be immune, including us.”

Apparently alluding to the late-1990s dot-com bubble, Pichai said, “We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound.”

“I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”


When AI Testifies

Well, this is brazen. NBC News reported this week on the rise of AI-generated “evidence” being submitted in court cases – including one glitchy “deep fake” video purporting to show witness testimony in a housing dispute in California.

“With the rise of powerful AI tools, AI-generated content is increasingly finding its way into courts, and some judges are worried that hyperrealistic fake evidence will soon flood their courtrooms and threaten their fact-finding mission,” NBC said.

Forged audio or video could land the people it depicts in serious trouble while also eroding “the foundation of trust upon which courtrooms stand.”

Here, we get into more straightforwardly political issues. Some judges and legal experts are pushing “for changes to judicial rules and guidelines on how attorneys verify their evidence. By law and in concert with the Supreme Court, the U.S. Congress establishes the rules for how evidence is used in lower courts.”

Over to you, Capitol Hill!


Olivier Knox
