On May 1, The New York Times reported that Geoffrey Hinton, the so-called “Godfather of AI,” had resigned from Google. The reason he gave for the move was that it would allow him to speak freely about the risks of artificial intelligence (AI).

His decision is both surprising and unsurprising: surprising because he has devoted a lifetime to the advancement of AI technology; unsurprising given the growing concerns he has expressed in recent interviews.

There is symbolism in this announcement date. May 1 is May Day, known for celebrating workers and the flowering of spring. Ironically, AI, and particularly generative AI based on deep learning neural networks, may displace a large swath of the workforce. We are already starting to see this impact, for example, at IBM.

AI replacing jobs and approaching superintelligence?

No doubt others will follow, as the World Economic Forum sees the potential for 25% of jobs to be disrupted over the next five years, with AI playing a role. As for the flowering of spring, generative AI could spark a new beginning of symbiotic intelligence — of man and machine working together in ways that will lead to a renaissance of possibility and abundance.


Alternatively, this could be when AI advancement begins to approach superintelligence, possibly posing an exponential threat. 

It is these types of worries and concerns that Hinton wants to speak about, and he could not do that while working for Google or any other corporation pursuing commercial AI development. As Hinton stated in a Twitter post: “I left so that I could talk about the dangers of AI without considering how this impacts Google.”

Geoffrey Hinton’s tweet on May 1, 2023

Mayday

Perhaps it is only a play on words, but the announcement date conjures another association: Mayday, the distress signal used when there is immediate and grave danger. A mayday call is reserved for genuine emergencies, as it is a priority call demanding an immediate response. Is the timing of this news merely coincidental, or is it meant to add symbolic significance?

According to the Times article, Hinton’s immediate concern is the ability of AI to produce human-quality content in text, video and images and how that capability can be used by bad actors to spread misinformation and disinformation such that the average person will “not be able to know what is true anymore.” 

He also now believes we are much closer to the time when machines will be more intelligent than the smartest people. This point has been much discussed, and most AI experts, until recently including Hinton, have viewed that moment as being far into the future, perhaps 40 years or more.

By contrast, Ray Kurzweil, a former director of engineering for Google, has claimed for some time that this moment will arrive in 2029, when AI easily passes the Turing Test. Kurzweil’s views on this timeline had been an outlier — but no longer.

According to Hinton’s May Day interview: “The idea that this stuff [AI] could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Those 30 to 50 years could have been used to prepare companies, governments, and societies through governance practices and regulations, but now the wolf is nearing the door. 

Artificial general intelligence

A related topic is artificial general intelligence (AGI), the stated mission of OpenAI, DeepMind and others. AI systems in use today mostly excel at specific, narrow tasks, such as reading radiology images or playing games; a single algorithm cannot excel at both. AGI, in contrast, possesses human-like cognitive abilities, such as reasoning, problem-solving and creativity, and would, as a single algorithm or network of algorithms, perform a wide range of tasks at human level or better across different domains.

Much like the debate about when AI will be smarter than humans — at least for specific tasks — predictions about when AGI will be achieved vary widely, ranging from just a few years to several decades, centuries or possibly never. These timeline predictions, too, are being pulled forward by new generative AI applications such as ChatGPT, which are built on transformer neural networks.

Beyond the intended purposes of these generative AI systems, such as creating convincing images from text prompts or providing human-like answers to queries, these models display emergent behaviors: novel, intricate and unexpected capabilities that were never explicitly designed in.

For example, the ability of GPT-3 and GPT-4 — the models underpinning ChatGPT — to generate code is considered an emergent behavior, since this capability was not part of the design specification; it emerged as a byproduct of the models’ training. The developers of these models cannot fully explain just how or why such behaviors develop. What can be deduced is that they arise from large-scale data, the transformer architecture and the powerful pattern recognition these models acquire.
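To make that concrete, here is a minimal sketch of what eliciting code from such a model looks like in practice, using the 2023-era (pre-1.0) OpenAI Python client; the model name, prompt and OPENAI_API_KEY environment variable are illustrative assumptions, not part of the original reporting.

    # Minimal sketch: asking GPT-4 to write code via the pre-1.0
    # "openai" Python package (assumes GPT-4 API access and an
    # OPENAI_API_KEY environment variable).
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Write a Python function that checks whether "
                       "a string is a palindrome.",
        }],
    )

    # The model typically returns working code, even though code
    # generation was not an explicit design goal of its training.
    print(response["choices"][0]["message"]["content"])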

Timelines speed up, creating a sense of urgency

It is these advances that are recalibrating timelines for advanced AI. In a recent CBS News interview, Hinton said he now believes that AGI could be achieved in 20 years or less. He added that we “might be” close to computers being able to come up with ideas to improve themselves. “That’s an issue, right? We have to think hard about how you control that.”

Early evidence of this capability can be seen in the nascent AutoGPT, an open-source recursive AI agent. Because it is recursive, it can autonomously use the results it generates to create new prompts, chaining these operations together to complete complex tasks; and because it is open source, anyone can use it.
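To illustrate the chaining idea, here is a heavily simplified, hypothetical sketch of a recursive agent loop; it is not AutoGPT’s actual code, and the goal, prompt format and stopping rule are assumptions made for illustration.

    # Conceptual sketch of a recursive agent loop (not AutoGPT's
    # real implementation): each model answer is fed back in as
    # context for the next prompt until the task is judged done.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed API access

    def run_llm(prompt: str) -> str:
        """Illustrative helper: one round-trip to the model."""
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    goal = "Outline three ways to speed up Python code, with examples."
    context = ""
    for step in range(5):  # cap the recursion so the loop terminates
        prompt = (
            f"Goal: {goal}\n"
            f"Progress so far: {context}\n"
            "Propose and carry out the single next step. "
            "Reply with just DONE if the goal is complete."
        )
        result = run_llm(prompt)
        if result.strip() == "DONE":
            break
        context += "\n" + result  # chain: output becomes the next input

    print(context)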

In this way, AutoGPT could potentially be used to identify areas where the underlying AI models could be improved and then generate new ideas for how to improve them. Not only that, but as The New York Times columnist Thomas Friedman notes, open source code can be exploited by anyone. He asks: “What would ISIS do with the code?”

It is not a given that generative AI specifically, or the overall effort to develop AI generally, will lead to bad outcomes. However, the acceleration of timelines for more advanced AI brought about by generative AI has created a strong sense of urgency for Hinton and others, clearly prompting his mayday signal.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

