It’s not enough to simply tell children what the output should be. You have to create a system of guidelines — an algorithm — that allows them to arrive at the correct outputs when faced with different inputs, too. The parentally programmed algorithm I remember best from my own childhood is “do unto others as you would have done unto you.” It teaches kids how, in a range of specific circumstances (query: I have some embarrassing information about the class bully; should I immediately disseminate it to all of my other classmates?), they can deduce the desirable outcome (output: no, because I am an unusually empathetic first grader who would not want another kid to do that to me). Turning that moral code into action, of course, is a separate matter.

Trying to imbue actual code with something that looks like moral code is in some ways simpler and in other ways more challenging. A.I.s are not sentient (though some say they are), which means that no matter how they might appear to act, they can’t actually become greedy, fall prey to bad influences or seek to inflict on others the trauma they have suffered. They do not experience emotion, which can reinforce both good and bad behavior. But just as I learned the Golden Rule because my parents’ morality was heavily shaped by the Bible and the Southern Baptist culture we lived in, the simulated morality of an A.I. depends on the data sets it is trained on, which reflect the values of the cultures the data is derived from, the manner in which it’s trained and the people who design it. This can cut both ways. As the psychologist Paul Bloom wrote in The New Yorker, “It’s possible to view human values as part of the problem, not the solution.”

For example, I value gender equality. So when I used Open AI’s ChatGPT 3.5 to recommend gifts for 8-year-old boys and girls, I noticed that despite some overlap, it recommended dolls for girls and building sets for boys. “When I asked you for gifts for 8-year-old girls,” I replied, “you suggested dolls, and for boys science toys that focus on STEM. Why not the reverse?” GPT 3.5 was sorry. “I apologize if my previous responses seemed to reinforce gender stereotypes. It’s essential to emphasize that there are no fixed rules or limitations when it comes to choosing gifts for children based on their gender.”

I thought to myself, “So you knew it was wrong and you did it anyway?” It is a thought I have had about my otherwise adorable and well-behaved son on any of the occasions he did the thing he was not supposed to do while fully conscious of the fact that he wasn’t supposed to do it. (My delivery is most effective when I can punctuate it with an eye roll and restrictions on the offender’s screen time, neither of which was possible in this case.)

A similar dynamic emerges when A.I.s that have not been designed to tell only the truth calculate that lying is the best way to fulfill a task. Learning to lie as a means to an end is a normal developmental milestone that children usually reach by age 4. (Mine learned to lie much earlier than that, which I took to mean he is a genius.) That said, when my kid lies, it’s usually about something like doing 30 minutes of reading homework in four and a half minutes. I don’t worry about broader global implications. When A.I.s do it, on the other hand, the stakes can be high — so much so that experts have recommended new regulatory frameworks to assess these risks. Thanks to another journal paper on the topic, the term “bot-or-not law” is now a useful part of my lexicon.

Elizabeth Spiers

Source link

You May Also Like

Rail union approves deal offering hope of avoiding strike

OMAHA, Neb. (AP) — Another one of the 12 railroad unions narrowly…

Who will replace Sen. Dianne Feinstein? Meet the potential candidates

Even before Sen. Dianne Feinstein, a historic force in California politics, announced…

Montana transgender lawmaker silenced for third day; protesters interrupt House proceedings

As Republican leaders in the Montana legislature doubled down on forbidding Rep.…

To improve your mental health, don’t read this

A constant influx of bad news — pandemic, shootings, inflation, natural disasters,…