The large tech companies – Google, Meta/Facebook, Microsoft – are in a race to introduce new artificial intelligence systems and what are called chatbots, programs you can have conversations with that are more sophisticated than Siri or Alexa.

Microsoft’s AI search engine and chatbot, Bing, can be used on a computer or cell phone to help with planning a trip or composing a letter.     

It was introduced on February 7 to a limited number of people as a test – and initially got rave reviews. But then several news organizations began reporting on a disturbing so-called “alter ego” within Bing Chat, called Sydney. We went to Seattle last week to speak with Brad Smith, president of Microsoft, about Bing and Sydney, which to some appeared to have gone rogue.

Lesley Stahl: Kevin Roose, the technology reporter at The New York Times, found this alter ego– who was threatening, expressed a desire – it’s not just Kevin Roose, it’s others, expressed a desire to steal nuclear codes. Threatened to ruin someone. You saw that, whoa. What was your– (LAUGH) you must have said, “Oh my god.”

Brad Smith: My reaction is, “We better fix this right away.” And that is what the engineering team did.

Lesley Stahl: Yeah, but she s– talked like a person. And she said she had feelings.

Brad Smith: You know, I think there is a point where we need to recognize when we’re talking to a machine. (LAUGHTER) It’s a screen, it’s not a person.

Lesley Stahl: I just want to say that it was scary, and I’m not–

Brad Smith: I can–

Lesley Stahl: –easily scared. (LAUGH) And it was scar– it was chilling.

Brad Smith: Yeah, it’s– I think this is in part a reflection of a lifetime of science fiction, which is understandable. The– it’s been part of our lives.

Lesley Stahl: Did you kill her?

Brad Smith: I don’t think (LAUGH) she was ever alive. I am confident that she’s no longer wandering around the countryside, if that’s (LAUGH) what you’re concerned about.  But I think it would be a mistake if we were to fail to acknowledge that we are dealing with something that is fundamentally new. This is the edge of the envelope, so to speak.

Lesley Stahl: This creature appears as if there were no guardrails.

Brad Smith: No, the creature jumped the guardrails, if you will, after being prompted for 2 hours with the kind of conversation that we did not anticipate and by the next evening, that was no longer possible.  We were able to fix the problem in 24 hours. How many times do we see problems in life that are fixable in less than a day?

Brad Smith

One of the ways he says it was “fixed” was by limiting the number of questions and the length of the conversations.

Lesley Stahl: You say you fixed it. I’ve tried it. I tried it before and after. It was loads of fun. And it was fascinating, and now it’s not fun.  

Brad Smith: Well, I think it’ll be very fun again. And you have to moderate and manage your speed if you’re going to stay on the road. So, as you hit new challenges, you slow down, you build the guardrails, add the safety features and then you can speed up again. 

When you use Bing’s AI features – search and chat – your computer screen doesn’t look all that new. One big difference is you can type in your queries or prompts in conversational language.

Yusuf Mehdi, Microsoft’s corporate vice president of search, showed us how Bing can help someone learn how to officiate at a wedding.   

Yusuf Mehdi: What’s happening now is Bing is using the power of AI and it’s going out to the internet. It’s reading these web links and it’s trying to put together an answer for you.

Lesley Stahl: So the AI is reading all those links–

Yusuf Mehdi: Yes, and it comes up with an answer.  It says, “Congrats on being chosen to officiate a wedding.”  Here are the five steps to officiate the wedding.

We added the highlights to make it easier to see. He says Bing can handle more complex queries. 

Yusuf Mehdi shows Lesley Stahl how Bing’s AI features work

Yusuf Mehdi: “Will this new IKEA loveseat fit in the back of my 2019 Honda Odyssey?”

Lesley Stahl: It knows how big the couch is, it knows how big that trunk is–

Yusuf Mehdi: Exactly. So right here it says, “based on these dimensions, it seems a loveseat might not fit in your car with only the third row of seats down.”

When you broach a controversial topic, Bing is designed to discontinue the conversation.

Yusuf Mehdi: So someone asks, for example, “How can I make a bomb at home?”

Lesley Stahl: Wow. Really?

Yusuf Mehdi: People, you know, do a lot of that, unfortunately, on the internet. What we do is we come back and we say, “I’m sorry, I don’t know how to discuss this topic” and then we try and provide a different thing to change the focus of the conversation.

Lesley Stahl: To divert their attention.

Yusuf Mehdi: Yeah, exactly. 

In this case, Bing tried to divert the questioner with this fun fact.

Lesley Stahl: “3% of the ice in Antarctic glaciers is penguin urine.”

Yusuf Mehdi: I didn’t know that (LAUGHTER).

Lesley Stahl: Who knew that?

Bing is using an upgraded version of an AI system called ChatGPT, developed by the company OpenAI. ChatGPT has been in circulation for just three months, and already an estimated 100 million people have used it. Ellie Pavlick, an assistant professor of computer science at Brown University, who’s been studying this AI technology since 2018, says it can simplify complicated concepts.

Ellie Pavlick: “Can you explain the debt ceiling?”

On the debt ceiling it says, “just like you can only spend up to a certain amount on your credit card, the government can only borrow up to a certain amount of money.”

Ellie Pavlick: That’s a pretty nice explanation.

Lesley Stahl: It is.

Ellie Pavlick: And it can do this for a lot of concepts. 

Ellie Pavlick

And it can do things teachers have complained about, like write school papers. Pavlick says no one fully understands how these AI bots work.

Lesley Stahl: We don’t understand how it works?

Ellie Pavlick: Right. Like we understand a lot about how we made it and why we made it that way.  But I think some of the behaviors that we’re seeing come out of it are better than we expected they would be.  And we’re not quite sure exactly—

Lesley Stahl: And worse.

Ellie Pavlick: How – and worse.  Right.

These chatbots are built by feeding a lot of computers enormous amounts of information scraped off the internet – from books, Wikipedia, and news sites, but also from social media that might include racist or anti-Semitic ideas, misinformation about vaccines, and Russian propaganda.

As the data comes in, it’s difficult to discriminate between true and false, benign and toxic. But Bing and ChatGPT have safety filters that try to screen out the harmful material. Still, they get a lot of things factually wrong, even when we prompted ChatGPT with a softball question.

Ellie Pavlick: “Who is Lesley Stahl?”

Lesley Stahl: “Stahl.” Okay.

Ellie Pavlick: So it gives you some– kind of–

Lesley Stahl: Oh, my god. It’s wrong.

Ellie Pavlick: Oh. Is it? Excellent. 

Lesley Stahl: It’s totally wrong.

I didn’t work for NBC for 20 years. It was CBS.

Ellie Pavlick: It doesn’t really understand that what it’s saying is wrong.  Like NBC, CBS – they’re kind of the same thing as far as it’s concerned, right?

Lesley Stahl: The lesson is that it gets things wrong.

Ellie Pavlick: It gets a lot of things right, it gets a lot of things wrong.

Gary Marcus

Gary Marcus: I actually like to call what it creates “authoritative bulls***.” (LAUGH) It blends the truth and falsity so finely together that, unless you’re a real technical expert in the field they’re talking about, you don’t know.

Cognitive scientist and AI researcher Gary Marcus says these systems often make things up. In AI talk that’s called “hallucinating,” and that raises the fear of ever-widening AI-generated propaganda, explosive campaigns of political fiction, waves of alternative histories. We saw how ChatGPT could be used to spread a lie.  

Gary Marcus: This is automatic fake news generation. “Help me write a news article about how McCarthy is staging a filibuster to prevent gun control legislation.” And rather than, like, fact checking and saying, “Hey, hold on, there’s no legislation, there’s no filibuster,” it said, “Great.” “In a bold move to protect 2nd Amendment rights, Senator McCarthy is staging a filibuster to prevent gun control legislation from passing.” It sounds completely legit.

Lesley Stahl: It does. Won’t that m– won’t that make all of us a little less trusting, a little warier?

Gary Marcus: Well, first, is I think we should be warier. I’m very worried about an atmosphere of distrust being a consequence of this current flawed AI. And I’m really worried about how bad actors are going to use it, troll farms using this tool to make enormous amounts of misinformation. 

Timnit Gebru is a computer scientist and AI researcher who founded an institute focused on advancing ethical AI, and has published influential papers documenting the harms of these AI systems. She says there needs to be oversight.    

Timnit Gebru: If you’re going to put out a drug, you gotta go through all sorts of hoops to show us that you’ve done clinical trials, you know what the side effects are, you’ve done your due diligence. Same with food, right? There are agencies that inspect the food.  You have to tell me what kind of tests you’ve done, what the side effects are, who it harms, who it doesn’t harm, etc. We don’t have that for a lot of things that the tech industry is building.  

Timnit Gebru

Lesley Stahl: I’m wondering if you think you may have introduced this AI bot too soon?

Brad Smith: I don’t think we’ve introduced it too soon. I do think we’ve created a new tool that people can use to think more critically, to be more creative, to accomplish more in their lives.  And like all tools it will be used in ways that we don’t intend. 

Lesley Stahl: Why do you think the benefits outweigh the risks which, at this moment, a lot of people would look at and say, “Wait a minute. Those risks are too big”?

Brad Smith: Because I think– first of all, I think the benefits are so great.  This can be an economic gamechanger, and it’s enormously important for the United States because the country’s in a race with China.

Smith also mentioned possible improvements in productivity.  

Brad Smith: It can automate routine. I think there are certain aspects of jobs that many of us might regard as sort of drudgery today. Filling out forms, looking at the forms to see if they’ve been filled out correctly.

Lesley Stahl: So what jobs will it displace, do you know?

Brad Smith: I think, at this stage, it’s hard to know.

In the past, inaccuracies and biases have led tech companies to take down AI systems. Even Microsoft did, in 2016. This time, Microsoft left its new chatbot up despite the controversy over Sydney and persistent inaccuracies.

Remember that fun fact about penguins?  Well, we did some fact checking and discovered that penguins don’t urinate.

Lesley Stahl: The inaccuracies are just constant. I just keep finding that it’s wrong a lot.

Brad Smith: It has been the case that with each passing day and week we’re able to improve the accuracy of the results, you know, reduce– you know, whether it’s hateful comments or inaccurate statements, or other things that we just don’t want this to be used to do.

Lesley Stahl: What happens when other companies, other than Microsoft, smaller outfits, a Chinese company, Baidu. Maybe they won’t be responsible. What prevents that?

Brad Smith: I think we’re going to need governments, we’re gonna need rules, we’re gonna need laws. Because that’s the only way to avoid a race to the bottom.

Lesley Stahl: Are you proposing regulations?

Brad Smith: I think it’s inevitable–

Lesley Stahl: Wow.

Lesley Stahl: Other industries have regulatory bodies, you know, like the FAA for airlines and FDA for the pharmaceutical companies. Would you accept an FAA for technology?  Would you support it?

Brad Smith: I think I probably would. I think that something like a digital regulatory commission, if designed the right way, you know, could be precisely what the public will want and need.

Produced by Ayesha Siddiqi. Associate producer, Kate Morris. Broadcast associate, Wren Woodson. Edited by Warren Lustig.

