Marisa Shuman’s computer science class at the Young Women’s Leadership School of the Bronx began as usual on a recent January morning.

Just after 11:30, energetic 11th and 12th graders bounded into the classroom, settled down at communal study tables and pulled out their laptops. Then they turned to the front of the room, eyeing a whiteboard where Ms. Shuman had posted a question on wearable technology, the topic of that day’s class.

For the first time in her decade-long teaching career, Ms. Shuman had not written any of the lesson plan. She had generated the class material using ChatGPT, a new chatbot that relies on artificial intelligence to deliver written responses to questions in clear prose. Ms. Shuman was using the algorithm-generated lesson to examine the chatbot’s potential usefulness and pitfalls with her students.

“I don’t care if you learn anything about wearable technology today,” Ms. Shuman said to her students. “We are evaluating ChatGPT. Your goal is to identify whether the lesson is effective or ineffective.”

Across the United States, universities and school districts are scrambling to get a handle on new chatbots that can generate humanlike texts and images. But while many are rushing to ban ChatGPT to prevent its use as a cheating aid, teachers like Ms. Shuman are using the new tools to spur more critical thinking in the classroom. They are encouraging their students to question the hype around rapidly evolving artificial intelligence tools and to consider the technologies’ potential side effects.

The aim, these educators say, is to train the next generation of technology creators and consumers in “critical computing.” That is an analytical approach in which understanding how to critique computer algorithms is as important as — or more important than — knowing how to program computers.

New York City Public Schools, the nation’s largest district, serving some 900,000 students, is training a cohort of computer science teachers to help their students identify A.I. biases and potential risks. Lessons include discussions on defective facial recognition algorithms that can be much more accurate in identifying white faces than darker-skinned faces.

In Illinois, Florida, New York and Virginia, some middle school science and humanities teachers are using an A.I. literacy curriculum developed by researchers at the Scheller Teacher Education Program at the Massachusetts Institute of Technology. One lesson asks students to consider the ethics of powerful A.I. systems, known as “generative adversarial networks,” that can be used to produce fake media content, like realistic videos in which well-known politicians mouth phrases they never actually said.

With generative A.I. technologies proliferating, educators and researchers say understanding such computer algorithms is a crucial skill that students will need to navigate daily life and participate in civics and society.

“It’s important for students to know about how A.I. works because their data is being scraped, their user activity is being used to train these tools,” said Kate Moore, an education researcher at M.I.T. who helped create the A.I. lessons for schools. “Decisions are being made about young people using A.I., whether they know it or not.”

To observe how some educators are encouraging their students to scrutinize A.I. technologies, I recently spent two days visiting classes at the Young Women’s Leadership School of the Bronx, a public middle and high school for girls that is at the forefront of this trend.

The hulking, beige-brick school specializes in math, science and technology. It serves nearly 550 students, most of them Latinx or Black.

It is by no means a typical public school. Teachers are encouraged to help their students become, as the school’s website puts it, “innovative” young women with the skills to complete college and “influence public attitudes, policies and laws to create a more socially just society.” The school also has an enviable four-year high school graduation rate of 98 percent, significantly higher than the average for New York City high schools.

One morning in January, about 30 ninth and 10th graders, many of them dressed in navy blue school sweatshirts and gray pants, loped into a class called Software Engineering 1. The hands-on course introduces students to coding, computer problem-solving and the social repercussions of tech innovations.

It is one of several computer science courses at the school that ask students to consider how popular computer algorithms — often developed by tech company teams of mostly white and Asian men — may have disparate impacts on groups like immigrants and low-income communities. That morning’s topic: face-matching systems that may have difficulty recognizing darker-skinned faces, such as those of some of the students in the room and their families.

Standing in front of her class, Abby Hahn, the computing teacher, knew her students might be shocked by the subject. Faulty face-matching technology has helped lead to the false arrests of Black men.

So Ms. Hahn alerted her pupils that the class would be discussing sensitive topics like racism and sexism. Then she played a YouTube video, created in 2018 by Joy Buolamwini, a computer scientist, showing how some popular facial analysis systems mistakenly identified iconic Black women as men.

As the class watched the video, some students gasped. Oprah Winfrey “appears to be male,” Amazon’s technology said with 76.5 percent confidence, according to the video. Other sections of the video said that Microsoft’s system had mistaken Michelle Obama for “a young man wearing a black shirt,” and that IBM’s system had pegged Serena Williams as “male” with 89 percent confidence.

(Microsoft and Amazon later announced accuracy improvements to their systems, and IBM stopped selling such tools. Amazon said it was committed to continuously improving its facial analysis technology through customer feedback and collaboration with researchers, and Microsoft and IBM said they were committed to the responsible development of A.I.)

“I’m shocked at how colored women are seen as men, even though they look nothing like men,” Nadia Zadine, a 14-year-old student, said. “Does Joe Biden know about this?”

The point of the A.I. bias lesson, Ms. Hahn said, was to show student programmers that computer algorithms can be faulty, just like cars and other products designed by humans, and to encourage them to challenge problematic technologies.

“You are the next generation,” Ms. Hahn said to the young women as the class period ended. “When you are out in the world, are you going to let this happen?”

“No!” a chorus of students responded.

A few doors down the hall, in a colorful classroom strung with handmade paper snowflakes and origami cranes, Ms. Shuman was preparing to teach a more advanced programming course, Software Engineering 3, focused on creative computing like game design and art. Earlier that week, her student coders had discussed how new A.I.-powered systems like ChatGPT can analyze vast stores of information and then produce humanlike essays and images in response to short prompts.

As part of the lesson, the 11th and 12th graders read news articles about how ChatGPT could be both useful and error-prone. They also read social media posts about how the chatbot could be prompted to generate texts promoting hate and violence.

But the students could not try ChatGPT in class themselves. The school district has blocked it over concerns that it could be used for cheating. So the students asked Ms. Shuman to use the chatbot to create a lesson for the class as an experiment.

Ms. Shuman spent hours at home prompting the system to generate a lesson on wearable technology like smartwatches. In response to her specific requests, ChatGPT produced a remarkably detailed 30-minute lesson plan — complete with a warm-up discussion, readings on wearable technology, in-class exercises and a wrap-up discussion.

As the class period began, Ms. Shuman asked the students to spend 20 minutes following the scripted lesson, as if it were a real class on wearable technology. Then they would analyze ChatGPT’s effectiveness as a simulated teacher.

Huddled in small groups, students read aloud information the bot had generated on the conveniences, health benefits, brand names and market value of smartwatches and fitness trackers. There were groans as students read out ChatGPT’s anodyne sentences — “Examples of smart glasses include Google Glass Enterprise 2” — that they said sounded like marketing copy or rave product reviews.

“It reminded me of fourth grade,” Jayda Arias, 18, said. “It was very bland.”

The class found the lesson stultifying compared with those of Ms. Shuman, a charismatic teacher who creates course materials for her specific students, asks them provocative questions and comes up with relevant, real-world examples on the fly.

“The only effective part of this lesson is that it’s straightforward,” Alexania Echevarria, 17, said of the ChatGPT material.

“ChatGPT seems to love wearable technology,” noted Alia Goddess Burke, 17, another student. “It’s biased!”

Ms. Shuman was offering a lesson that went beyond learning to identify A.I. bias. She was using ChatGPT to give her pupils a message that artificial intelligence was not inevitable and that the young women had the insights to challenge it.

“Should your teachers be using ChatGPT?” Ms. Shuman asked toward the end of the lesson.

The students’ answer was a resounding “No!” At least for now.

Natasha Singer
