Sam Altman’s recent employment saga and speculation about OpenAI’s groundbreaking Q* model have renewed public interest in the possibilities and risks of artificial general intelligence (AGI).

AGI would be able to learn and execute intellectual tasks comparably to humans. Swift advances in AI, particularly in deep learning, have stirred both optimism and apprehension about the emergence of AGI. Several companies, including OpenAI and Elon Musk’s xAI, aim to develop AGI. This raises the question: Are current AI developments actually leading toward AGI?

Perhaps not.

Limitations of deep learning

Deep learning, a machine learning (ML) method based on artificial neural networks, is used in ChatGPT and much of contemporary AI. It has gained popularity due to its ability to handle different data types and its reduced need for pre-processing, among other benefits. Many believe deep learning will continue to advance and play a crucial role in achieving AGI.

However, deep learning has limitations. It requires large datasets and expensive computational resources to create models that reflect the training data. These models derive statistical rules that mirror real-world phenomena, and those rules are then applied to current real-world data to generate responses.

Deep learning methods, therefore, follow a logic focused on prediction: they re-derive updated rules when new phenomena are observed. The sensitivity of these rules to the uncertainty of the natural world makes them less suitable for realizing AGI. The June 2022 crash of a Cruise robotaxi, for instance, can be attributed to the vehicle encountering a new situation for which it lacked training, leaving it unable to make decisions with certainty.

The ‘what if’ conundrum

Humans, the models for AGI, do not create exhaustive rules for real-world occurrences. Humans typically engage with the world by perceiving it in real-time, relying on existing representations to understand the situation, the context and any other incidental factors that may influence decisions. Rather than construct rules for each new phenomenon, we repurpose existing rules and modify them as necessary for effective decision-making. 

For example, suppose you are hiking along a forest trail and come across a cylindrical object on the ground. Deciding your next step with deep learning means gathering information about the object's features, classifying it as either a potential threat (a snake) or non-threatening (a rope), and acting on that classification.
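
As a rough illustration, that predictive pipeline might look like the following sketch, where `trained_model` stands in for any image classifier and all names are hypothetical rather than a real API:

```python
# Hypothetical deep learning pipeline: classify first, then act.
# `trained_model` stands in for any image classifier; the names
# here are illustrative, not a real API.

def classify_object(trained_model, image):
    """Return the most probable label from the model's fixed label set."""
    probabilities = trained_model.predict(image)  # e.g. {"snake": 0.7, "rope": 0.3}
    return max(probabilities, key=probabilities.get)

def act_on(label):
    """Commit to the single predicted label, with no fallback for doubt."""
    return "back away" if label == "snake" else "step over"
```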

Conversely, a human would likely begin to assess the object from a distance, update information continuously, and opt for a robust decision drawn from a “distribution” of actions that proved effective in previous analogous situations. This approach focuses on characterizing alternative actions with respect to desired outcomes rather than on predicting the future, a subtle but important distinction.
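
A minimal sketch of that robust alternative, with invented outcome scores, might evaluate each candidate action against every plausible interpretation of the object and keep the action whose worst case is still acceptable:

```python
# Hypothetical robust alternative: instead of committing to one
# predicted label, score each action under every plausible state
# of the world and keep the action with the best worst case.

def robust_choice(actions, plausible_states, outcome):
    """Pick the action whose worst-case outcome across states is best."""
    return max(actions, key=lambda a: min(outcome(a, s) for s in plausible_states))

# Invented outcomes: keeping distance is acceptable whether the object
# is a snake or a rope; stepping over is disastrous if it is a snake.
scores = {
    ("keep distance", "snake"): 0.8, ("keep distance", "rope"): 0.7,
    ("step over", "snake"): 0.0, ("step over", "rope"): 1.0,
}
action = robust_choice(["keep distance", "step over"], ["snake", "rope"],
                       lambda a, s: scores[(a, s)])
print(action)  # "keep distance": robust, though not optimal if it is a rope
```

The robust action is not the best possible one in every state; it is the one least likely to end badly, which is the point.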

Achieving AGI might therefore require moving away from predictive deduction and toward an inductive “what if?” capacity for situations where prediction is not feasible.

Decision-making under deep uncertainty: a way forward?

Decision-making under deep uncertainty (DMDU) methods, such as Robust Decision-Making, may provide a conceptual framework for AGI-level reasoning over choices. DMDU methods analyze the vulnerability of potential alternative decisions across various future scenarios without requiring constant retraining on new data. They evaluate decisions by pinpointing the critical factors common to the actions that fail to meet predetermined outcome criteria.

The goal is to identify decisions that demonstrate robustness: the ability to perform well across diverse futures. While many deep learning approaches prioritize optimized solutions that can fail when faced with unforeseen challenges (as optimized just-in-time supply chains did in the face of COVID-19), DMDU methods prize robust alternatives that may trade optimality for the ability to achieve acceptable outcomes across many environments. This makes them a valuable conceptual framework for developing AI that can navigate real-world uncertainty.
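
To make the robustness idea concrete, here is a toy calculation of minimax regret, one common DMDU criterion; the decisions, futures and payoffs are invented purely for illustration:

```python
# Toy minimax-regret evaluation, a common DMDU robustness criterion.
# Payoffs are invented: keys are candidate decisions, inner keys are
# plausible futures to which no probabilities are assigned.

payoffs = {
    "optimized supply chain": {"normal": 10, "disruption": 1},
    "redundant supply chain": {"normal": 7, "disruption": 6},
}
futures = ["normal", "disruption"]

# Regret of a decision in a future = best achievable payoff in that
# future minus this decision's payoff there.
best_in_future = {f: max(p[f] for p in payoffs.values()) for f in futures}
worst_regret = {d: max(best_in_future[f] - p[f] for f in futures)
                for d, p in payoffs.items()}

# The robust choice minimizes worst-case regret across all futures.
robust = min(worst_regret, key=worst_regret.get)
print(worst_regret)  # {'optimized supply chain': 5, 'redundant supply chain': 3}
print(robust)        # redundancy trades peak performance for robustness
```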

Developing a fully autonomous vehicle (AV) illustrates how the proposed methodology could be applied. The challenge lies in navigating diverse and unpredictable real-world conditions, thereby emulating human decision-making while driving. Despite substantial investment by automotive companies in deep learning for full autonomy, these models often struggle in uncertain situations. Because modeling every possible scenario and accounting for every failure is impractical, addressing unforeseen challenges in AV development remains an open problem.

Robust decisioning

One potential solution is to adopt a robust decision approach. The AV’s sensors would gather real-time data to assess the appropriateness of various decisions, such as accelerating, changing lanes or braking, within a specific traffic scenario.

If critical factors raise doubts about the rote algorithmic response, the system would then assess the vulnerability of alternative decisions in the given context. This would reduce the immediate need for retraining on massive datasets and foster adaptation to real-world uncertainties. Such a paradigm shift could enhance AV performance by shifting the focus from achieving perfect predictions to evaluating the limited set of decisions an AV must make in order to operate.
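
A hedged sketch of such a decision loop might look like the following; the policy, scoring function, scenario sampler and threshold are all hypothetical stand-ins rather than a real AV stack:

```python
# Hypothetical AV decision loop: trust the trained policy's rote
# response when it is confident; otherwise assess how vulnerable
# each alternative is across sampled near-term scenarios.

ACTIONS = ["accelerate", "hold speed", "change lane", "brake"]

def decide(sensor_state, policy, score, sample_scenarios, doubt_threshold=0.9):
    """Return the rote action when confident, else the most robust one."""
    action, confidence = policy(sensor_state)
    if confidence >= doubt_threshold:
        return action  # familiar situation: the rote response is trusted
    scenarios = sample_scenarios(sensor_state)  # plausible near futures
    # Robust fallback: best worst-case score, no retraining required.
    return max(ACTIONS, key=lambda a: min(score(a, s) for s in scenarios))
```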

Decision context will advance AGI

As AI evolves, we may need to depart from the deep learning paradigm and emphasize the importance of decision context to advance towards AGI. Deep learning has been successful in many applications but has drawbacks for realizing AGI.

DMDU methods may provide the initial framework to pivot the contemporary AI paradigm towards robust, decision-driven AI methods that can handle uncertainties in the real world.

Swaptik Chowdhury is a Ph.D. student at the Pardee RAND Graduate School and an assistant policy researcher at the nonprofit, nonpartisan RAND Corporation.

Steven Popper is an adjunct senior economist at the RAND Corporation and professor of decision sciences at Tecnológico de Monterrey.
