Generative AI is generating a lot of interest from both the public and investors. But both groups are overlooking a fundamental risk.

When ChatGPT launched in November 2022, allowing users to submit questions to a chatbot and get AI-produced responses, the internet went into a frenzy. Thought leaders proclaimed that the new technology could transform sectors from media to healthcare (it recently passed all three parts of the U.S. Medical Licensing Examination).

Microsoft has already invested billions of dollars into its partnership with ChatGPT's creator, OpenAI, aiming to deploy the technology on a global scale, such as by integrating it into the search engine Bing. Undoubtedly, executives hope this will help the tech giant, which has lagged in search, catch up to market leader Google.

ChatGPT is just one type of generative AI. Generative AI is a class of artificial intelligence that, given a training dataset, can generate new data based on it, whether images, sounds, or, in the case of the chatbot, text. Because generative AI models can produce results far more rapidly than humans, they can create tremendous value. Imagine, for instance, a movie production pipeline in which AI generates elaborate new landscapes and characters without relying on human artists.
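To make the core mechanic concrete, here is a minimal, purely illustrative sketch in Python: a word-level Markov chain that "learns" transition patterns from a tiny training text and then samples new text from them. The corpus and function names are hypothetical, and real generative models are vastly more sophisticated, but the basic loop of training on data and then generating new data is the same.

```python
# Toy illustration only: a word-level Markov chain "trained" on a tiny corpus.
# Real generative AI models work very differently; this just shows the pattern
# of learning from a training dataset and then producing new data from it.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record which word tends to follow which in the training text."""
    words = corpus.split()
    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions: dict, start: str, length: int = 10) -> str:
    """Sample new text one word at a time from the learned transitions."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the model learns from data and the model generates new data"
print(generate(train(corpus), start="the"))
```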

Some limitations of generative AI

However, generative AI is not the answer for every situation or industry. When it comes to games, video, images, and even poems, it can produce interesting and useful output. But in mission-critical applications, where errors are very costly or where bias is unacceptable, it can be very dangerous.

Take, for example, a healthcare facility in a remote area with limited resources, where AI is used to improve diagnosis and treatment planning. Or a school where a single teacher can provide personalized training to different students based on their unique skill levels through AI-directed lesson planning.

These are situations where, on the surface, generative AI might seem to create value but, in fact, would lead to a host of complications. How do we know the diagnoses are correct? What about the bias that may be ingrained in educational materials?

Generative AI models are considered “black box” models: there is no straightforward way to understand how they arrive at their outputs, because no underlying reasoning is exposed. Even professional researchers often struggle to comprehend the inner workings of such models. It is notoriously difficult, for example, to determine what makes an AI correctly identify an image of a matchstick.

As a casual user of ChatGPT or another generative model, you may well have even less of an idea of what the initial training data consisted of. Ask ChatGPT where its data comes from, and it will tell you simply that it was trained on a “diverse set of data from the Internet.”

The perils of AI-generated output

This can lead to some dangerous situations. Because you can't see the relationships and internal representations the model has learned from the data, or which features of the data matter most to it, you can't understand why it makes a particular prediction. That makes it difficult to detect or correct errors and biases in the model.

Internet users have already recorded cases where ChatGPT produced wrong or questionable answers, ranging from failing at chess to generating Python code determining who should be tortured.

And these are just the cases where it was obvious that the answer was wrong. By some estimates, 20% of ChatGPT answers are made-up. As AI technology improves, it’s conceivable that we could enter a world where confident AI chatbots produce answers that seem right, and we can’t tell the difference.

Many have argued that we should be excited but proceed with caution. Generative AI can provide tremendous business value; therefore, this line of argument goes, we should, while being aware of the risks, focus on ways to use these models in practical situations — perhaps by supplying them with additional training in hopes of reducing the high false-answer or “hallucination” rate.

However, training may not be enough. By simply training models to produce our desired outcomes, we could conceivably create a situation where AIs are rewarded for producing outcomes their human judges deem successful — incentivizing them to purposely deceive us. Hypothetically, this could escalate into a situation where AIs learn to avoid getting caught and build sophisticated models to this end, even, as some have predicted, defeating humanity.

White-boxing the problem

What is the alternative? Rather than focusing on how we train generative AI models, we can turn to white-box or explainable ML models. In contrast to black-box models such as generative AI, a white-box model makes it easy to understand how it arrives at its predictions and which factors it takes into account.

White-box models, while they may be complex in an algorithmic sense, are easier to interpret because they include explanations and context. A white-box version of ChatGPT might tell you what it thinks the right answer is, but also quantify how confident it is that it is, in fact, the right answer (is it 50% confident or 100%?). It would also show how it came by that answer (i.e., which data inputs it was based on) and let you see other versions of the same answer, enabling the user to decide whether the results can be trusted.
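As a sketch of what this could look like in practice, the hypothetical snippet below trains a simple, inherently interpretable classifier (a logistic regression on made-up sensor features) and reports both a confidence score and the per-feature contributions behind the prediction. The feature names, data, and setup are invented for illustration; they do not describe ChatGPT or any particular product.

```python
# Hypothetical sketch of "white-box" output: a prediction plus its confidence
# and the per-feature contributions behind it. Feature names and data are
# invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["temperature", "vibration", "pressure"]  # hypothetical sensors
X_train = np.array([[70.0, 0.2, 30.0],
                    [95.0, 0.9, 45.0],
                    [65.0, 0.1, 28.0],
                    [92.0, 0.8, 44.0]])
y_train = np.array([0, 1, 0, 1])  # 0 = normal, 1 = fault

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

x_new = np.array([[90.0, 0.7, 43.0]])
confidence = model.predict_proba(x_new)[0, 1]   # how sure is the model?
contributions = model.coef_[0] * x_new[0]       # which inputs drove the call?

print(f"Predicted fault probability: {confidence:.0%}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: weight x value = {c:+.2f}")
```

The specific algorithm matters less than the property it illustrates: when a prediction can be decomposed into a confidence level and a handful of inspectable factors, the person acting on it has a basis for deciding whether to trust it.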

This might not be necessary for a simple chatbot. However, in a situation where a wrong answer can have major repercussions (education, manufacturing, healthcare), having such context can be life-changing. If a doctor is using AI to make diagnoses but can see how confident the software is in the result, the situation is far less dangerous than if the doctor is simply basing all their decisions on the output of a mysterious algorithm.

The reality is that AI will play a major role in business and society going forward. However, it’s up to us to choose the right kind of AI for the right situation.

Berk Birand is founder & CEO of Fero Labs.

