A study found that artificial intelligence chatbots such as the popular ChatGPT return common debunked medical stereotypes about Black people.

Researchers at Stanford University ran nine medical questions through AI chatbots and found that they returned responses that contained debunked medical claims about Black people, including incorrect responses about kidney function and lung capacity, as well as the notion that Black people have different muscle mass than White people, according to a report from Axios.

The team of researchers ran the nine questions through four chatbots, including OpenAI’s ChatGPT and Google’s Bard, which are trained on large amounts of internet text, the report noted. The responses raised concerns about the growing use of AI in the medical field.



“There are very real-world consequences to getting this wrong that can impact health disparities,” Stanford University assistant professor Roxana Daneshjou, who served as an adviser on the paper, told the Associated Press. “We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning.”

William Jacobson, a Cornell University law professor and the founder of the Equal Protection Project, told Fox News Digital that the injection of immaterial racial factors into medical decision-making has long been a concern, one that could worsen with the spread of AI.

“We have seen DEI and critical race ideology inject negative stereotypes into medical education and care based on ideological activism,” Jacobson said. “AI holds out the potential of assisting in medical education and care that is focused on the individual. AI should never be the only source of information, and we would not want to see AI politicized by manipulating the inputs.”


ChatGPT is shown on a computer. (Frank Rumpenhorst / picture alliance via Getty Images / File)


Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital that AI systems do not have "racist" models but noted that they can return biased information depending on the data sets they draw on.


Google’s Bard was included in the study. (Marlena Sloss / Bloomberg via Getty Images / File)

“This is a perfect example of ‘Pillar 3’ of regulation that has to be managed for AI,” Siegel said. “Pillar 3 is ‘ensuring fairness’ – to not allow current biases to get hard-coded in the datasets and models, which would cause unfair prejudice in areas such as health care, hiring, financial services, commerce and services. Obviously, some of that is occurring today.”


Neither Google nor OpenAI immediately responded to a Fox News request for comment.
