

Elon Musk has brought on Dan Hendrycks, a machine learning researcher who serves as the director of the nonprofit Center for AI Safety, as an advisor to his new startup, xAI.

Hendrycks’ organization, which sponsored a Statement on AI Risk in May that was signed by the CEOs of OpenAI, DeepMind, Anthropic and hundreds of other AI experts, receives over 90% of its funding from Open Philanthropy, a nonprofit run by Dustin Moskovitz and Cari Tuna, a prominent couple in the controversial Effective Altruism (EA) movement. EA is defined by the Center for Effective Altruism as “an intellectual project, using evidence and reason to figure out how to benefit others as much as possible.” According to numerous EA adherents, the paramount concern facing humanity is averting a catastrophic scenario in which a human-created AGI eradicates our species.

Musk’s appointment of Hendrycks is significant because it is the clearest sign yet that four of the world’s most famous and well-funded AI research labs — OpenAI, DeepMind, Anthropic and now xAI — are bringing ideas about the existential risk, or “x-risk,” of AI systems to the mainstream public.

Many AI experts have complained about x-risk focus

That is the case even though many top AI researchers and computer scientists do not agree that this “doomer” narrative deserves so much attention.


For example, Sara Hooker, head of Cohere for AI, told VentureBeat in May that x-risk “was a fringe topic.” And Mark Riedl, professor at the Georgia Institute of Technology, said that existential threats are “often reported as fact,” which he added “goes a long way to normalizing, through repetition, the belief that only scenarios that endanger civilization as a whole matter and that other harms are not happening or are not of consequence.”

NYU AI researcher and professor Kyunghyun Cho agreed, telling VentureBeat in June that he believes these “doomer narratives” are distracting from the real issues, both positive and negative, posed by today’s AI.

“I’m disappointed by a lot of this discussion about existential risk; now they even call it literal ‘extinction,’” he said. “It’s sucking the air out of the room.”

Other AI experts have also pointed out, both publicly and privately, that they are concerned by the companies’ publicly acknowledged ties to the EA community — which has been supported by tarnished tech figures like FTX’s Sam Bankman-Fried — as well as to various TESCREAL movements such as longtermism and transhumanism.

“I am very aware of the fact that the EA movement is the one that is actually driving the whole thing around AGI and existential risk,” Cho told VentureBeat. “I think there are too many people in Silicon Valley with this kind of savior complex. They all want to save us from the inevitable doom that only they see and they think only they can solve.”

Timnit Gebru, in a Wired article last year, pointed out that SBF was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform. Other billionaires who have contributed big money to EA and x-risk causes include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel and Dustin Moskovitz.

As a result, she wrote, “all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on ‘beneficial artificial general intelligence’ that will bring techno utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.”

Here is a rundown of where this tech quartet stands when it comes to AGI, x-risk and Effective Altruism:

xAI: ‘Understand the true nature of the universe’

Mission: Engineer an AGI to “understand the universe”

Focus on AGI and x-risk: Elon Musk, who helped found OpenAI in 2015, reportedly left the startup because he felt it wasn’t doing enough to develop AGI safely. He also played a key role in convincing AI leaders to sign Hendrycks’ Statement on AI Risk, which says that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Musk founded xAI, he has said, because he believes a smarter AGI will be less likely to destroy humanity. “The safest way to build an A.I. is actually to make one that is maximally curious and truth-seeking,” he said in a recent Twitter Spaces talk.

Ties to Effective Altruism: Musk himself has said that EA, as described in the writings of one of its originators, philosopher William MacAskill, “is a close match for my philosophy.” As for Hendrycks, according to a recent Boston Globe interview, he “claims he was never an EA adherent, even if he brushed up against the movement,” and says “AI safety is a discipline that can, and does, stand apart from effective altruism.” Still, Hendrycks receives funding from Open Philanthropy and has said he became interested in AI safety because of his participation in 80,000 Hours, a career exploration program associated with the EA movement.

OpenAI: ‘Creating safe AGI that benefits all of humanity’

Mission: In 2015, OpenAI was founded with a mission to “ensure that artificial general intelligence benefits all of humanity.” OpenAI’s website notes: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

Focus on AGI and x-risk: Since its founding, OpenAI has never wavered from its AGI-focused mission. Over the past year it has published blog posts with titles like “Governing Superintelligence,” “Our Approach to AI Safety” and “Planning for AGI and Beyond.” Earlier this month, OpenAI announced a new “superalignment team” with a goal to “solve the core technical challenges of superintelligence alignment in four years.” The company said its co-founder and chief scientist Ilya Sutskever will make this research his core focus, and that it will dedicate 20% of its compute resources to the superalignment team. One team member recently called it the “notkilleveryoneism” team.

Ties to Effective Altruism: In March 2017, OpenAI received a grant of $30 million from Open Philanthropy. In 2020, MIT Technology Review’s Karen Hao reported that “the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of effective altruism.” These days, the company’s head of alignment, Jan Leike, who leads the superalignment team, reportedly identifies with the EA movement. And while OpenAI CEO Sam Altman has criticized EA in the past, particularly in the wake of the Sam Bankman-Fried scandal, he did complete the 80,000 Hours course, which was created by EA originator William MacAskill.

Google DeepMind: ‘Solving intelligence to advance science and benefit humanity’

Mission: “To unlock answers to the world’s biggest questions by understanding and recreating intelligence itself.”

Focus on AGI and x-risk: DeepMind was founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman, and in 2014 the company was acquired by Google. In 2023, DeepMind merged with Google Brain to form Google DeepMind. Its AI research efforts, which have often focused on reinforcement learning through game challenges such as its AlphaGo program, have always had a strong focus on an AGI future: “By building and collaborating with AGI we should be able to gain a deeper understanding of our world, resulting in significant advances for humanity,” the company’s website says. A recent interview with CEO Hassabis in The Verge noted that “Demis is not shy that his goal is building an AGI, and we talked through what risks and regulations should be in place and on what timeline.”

Ties to Effective Altruism: DeepMind researchers like Rohin Shah and Sebastian Farquhar identify as effective altruists, while Hassabis has spoken at EA conferences, and groups from DeepMind have attended the Effective Altruism Global conference. Pushmeet Kohli, principal scientist and research team leader at DeepMind, has also been interviewed about AI safety on the 80,000 Hours podcast.

Anthropic: ‘AI research and products that put safety at the frontier’

Mission: According to Anthropic’s website, its mission is to “ensure transformative AI helps people and society flourish. Progress this decade may be rapid, and we expect increasingly capable systems to pose novel challenges. We pursue our mission by building frontier systems, studying their behaviors, working to responsibly deploy them, and regularly sharing our safety insights. We collaborate with other projects and stakeholders seeking a similar outcome.”

Focus on AGI and x-risk: Anthropic was founded in 2021 by several former OpenAI employees who objected to the company’s direction (such as its relationship with Microsoft) — including Dario Amodei, who served as OpenAI’s vice president of research and is now Anthropic’s CEO. According to a recent in-depth New York Times article, “Inside the White-Hot Center of AI Doomerism,” Anthropic employees are deeply concerned about x-risk: “Many of them believe that AI models are rapidly approaching a level where they might be considered artificial general intelligence, or AGI, the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us.”

Ties to Effective Altruism: Anthropic has some of the clearest ties to the EA community of any of the big AI labs. “No major AI lab embodies the EA ethos as fully as Anthropic,” said the New York Times piece. “Many of the company’s early hires were effective altruists, and much of its start-up funding came from wealthy EA-affiliated tech executives, including Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, a co-founder of Skype.”


Sharon Goldman
