ReportWire

Tag: geoffrey hinton

  • Elon Musk Loses Half of xAI’s Founding Team—Where They’ve Gone Next


    Elon Musk’s xAI has lost half of its 12-person founding team. BRENDAN SMIALOWSKI/AFP via Getty Images

    Just days after Elon Musk merged his A.I. startup, xAI, with SpaceX in preparation for a widely anticipated trillion-dollar IPO later this year, two of xAI’s founding employees—Yuhuai (Tony) Wu and Jimmy Ba—announced their resignations. That means half of xAI’s founding team has now left the company barely three years after its launch. Musk framed the staff exodus as growing pains. “As a company grows, especially as quickly as xAI, the structure must evolve just like any living organism. This unfortunately required parting ways with some people. We wish them well in future endeavors,” he wrote on X yesterday (Feb. 11).

    Wu and Ba’s exits appeared amicable. But lower-level employees have been more candid about internal tensions at the Musk-run startup. Several members of xAI’s technical staff have also left in recent weeks, according to their posts on X and LinkedIn.

    “All A.I. labs are building the exact same thing, and it’s boring,” said Vahid Kazemi, who worked on xAI’s audio models, in a post on X. “I think there’s room for more creativity. So, I’m starting something new.”

    In an interview with NBC News, Kazemi also criticized the company’s working culture, saying he regularly worked 12-hour days, including holidays and weekends.

    Launched in March 2023 with a roster of industry veterans from companies like OpenAI, Google, Microsoft, and Tesla, xAI will now operate as a wholly owned subsidiary of SpaceX. The new iteration of SpaceX faces no shortage of challenges: Grok continues to face legal scrutiny, while Musk’s leadership style remains a point of contention.

    Here are the co-founders and notable leaders who have left xAI so far—and where they are now.

    Jimmy Ba

    Jimmy Ba, who led A.I. safety at xAI, announced his exit on Feb. 10. A professor at the University of Toronto who studied under A.I. pioneer Geoffrey Hinton, Ba’s research played a key role in shaping Grok’s development.

    “So proud of what the xAI team has done and will continue to stay close as a friend of the team,” Ba wrote on X. He hasn’t announced his next move, but added that “2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”

    Despite Ba’s departure, Dan Hendrycks, executive director of the nonprofit Center for AI Safety, remains a safety advisor for xAI.

    Yuhuai (Tony) Wu

    Tony Wu, a former research scientist at Google and postdoctoral researcher at Stanford University, announced his departure from xAI on Feb. 9.

    Wu led xAI’s reasoning team. “It’s time for my next chapter…It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible,” he wrote on X.

    Wu has not disclosed his next role. Co-founders Guodong Zhang and Manuel Kroiss remain at xAI and are helping lead the company’s reorganization.

    Mike Liberatore

    While not a founding member, Mike Liberatore joined xAI as chief financial officer in April 2025, just one month after xAI acquired X in a deal that valued the combined company at $113 billion.

    Liberatore, formerly a finance executive at Airbnb and SquareTrade, left after only three months. He now works as a business finance officer at OpenAI, according to LinkedIn.

    Musk replaced Liberatore with ex-Morgan Stanley banker Anthony Armstrong. Armstrong advised Musk on his Twitter (now X) acquisition in 2022 and later served as a senior advisor at the Office of Personnel Management during Musk’s controversial tenure at the Department of Government Efficiency (DOGE).

    Greg Yang

    Greg Yang spent nearly six years as a researcher at Microsoft before joining xAI’s founding team. He left the company in January due to health complications from Lyme disease.

    “Likely I contracted Lyme a long time ago, but until I pushed myself hard building xAI and weakened my immune system, the symptoms weren’t noticeable,” Yang wrote on X. He continues to advise xAI in an informal capacity.

    Igor Babuschkin

    Igor Babuschkin, a former research engineer at OpenAI and Google DeepMind, was a co-founder and key engineering lead at xAI. Widely known as the primary developer behind Grok, Babuschkin left in July 2025 to start his own venture capital firm, Babuschkin Ventures, focused on A.I. research and startups.

    Christian Szegedy

    Christian Szegedy spent 12 years at Google before joining xAI as a founding research scientist. He left xAI in February 2025 to become chief scientist at superintelligence cloud company Morph Labs.

    More than a year later, he departed that role to found mathematical A.I. startup Math Inc. in September, according to his LinkedIn.

    “I left xAI in the last week of February and I am on good terms with the team. IMO, xAI has a bright future,” Szegedy wrote on X.

    Other senior engineers and scientists who have departed xAI include Yasemin Yesiltepe, Zhuoyi (Zoey) Huang and Yao Fu.

    Kyle Kosic

    Kyle Kosic left OpenAI in early 2023 after two years to co-found xAI, where he served as engineering infrastructure lead. He departed about a year later, in April 2024, to return to OpenAI as a technical staff member.

    Kosic was the first co-founder to leave xAI and did not issue a public statement. It is unclear who now leads xAI’s engineering infrastructure, though another co-founder, Ross Nordeen, remains the company’s technical program manager after previously holding the same role at Tesla.



    Rachel Curry


  • A.I. Degrees Boom as Students Prepare for an Uncertain Job Market


    Universities are rapidly expanding A.I. programs as students seek skills that can withstand an increasingly automated future. Photo by: Jumping Rocks/Universal Images Group via Getty Images

    When Chris Callison-Burch first started teaching an A.I. course at the University of Pennsylvania in 2018, his inaugural class had about 100 students. Seven years later, enrollment has swelled to roughly 400—excluding another 250 students attending remotely and an additional 100 to 200 on the waiting list. The professor now teaches in the largest classroom on campus. If his course grew any bigger, he’d need to move into the school’s sports stadium.

    “I would love to think that’s all because I’m a dynamic lecturer,” Callison-Burch told Observer. “But it’s really a testament to the popularity of the field.”

    Demand for A.I. courses and degrees has soared across higher education as the technology plays an increasingly central role in daily life and begins to encroach on once-popular fields like computer science. Amid uncertainty about the future of the labor market, students are seeking to prepare for an A.I.-dominated economy by immersing themselves in the field.

    Universities have followed suit. Schools like Carnegie Mellon and Purdue University are among a number offering undergraduate or graduate degrees in A.I., a trend expected to accelerate in the coming years. The University of Pennsylvania recently became the first Ivy League school to offer both undergraduate and graduate A.I. programs. Its graduate curriculum includes courses in natural language processing and machine learning, in addition to required classes on technology ethics and the broader legal landscape.

    The demand is widespread. The University at Buffalo’s A.I. master’s program enrolled 103 students last year, up from just five in its inaugural 2020 cohort. At the Massachusetts Institute of Technology, undergraduate enrollment in A.I. has jumped from 37 students in 2022 to more than 300. Miami Dade College has seen a 75 percent increase in enrollment in its A.I. programs since 2022, while its other programs have remained relatively steady aside from a “slight decrease in computer science,” the school told Observer.

    Callison-Burch, who also serves as faculty director of Penn’s online A.I. master’s program, has noticed a similar decline. “There’s an interesting trend at the moment where it looks like computer science enrollment is dipping,” he said, pointing to increased A.I.-powered automation across the field. More than 60 percent of undergraduate computing programs saw a decline in enrollment for the 2025–2026 academic year compared to the year prior, according to a recent report from the Computing Research Association.

    That decline comes as A.I. reshapes some of the professions most exposed to its advances. In fields like coding, early-career workers have already experienced a 13 percent relative decline in employment, according to an August research paper from Stanford.

    A.I. leaders’ advice for students

    Experts have offered a range of advice as the technology they helped develop begins to reshape the labor market. Demis Hassabis, CEO of Google DeepMind, has advocated for an immersion in A.I. tools, while acclaimed researcher Geoffrey Hinton suggests prospective students focus on a well-rounded education that pairs mathematics and science with liberal arts.

    Yann LeCun, Meta’s former chief A.I. scientist, advises young people to become adept at learning itself, as their job is “almost certainly going to change” over time. “My suggestion is to take courses on topics that are fundamental and have a long shelf life,” he told Observer via email, pointing to mathematics, physics and engineering as core areas of focus.

    It’s not just students grappling with these shifts. Callison-Burch noted that professors, too, are trying to adapt and determine how best to integrate A.I. into their classrooms. One thing, he said, is certain: the technology will only become more pervasive. That makes it all the more important for young people to familiarize themselves with its tools.

    Even so, he acknowledged that predicting how A.I. will reshape the labor market remains extraordinarily difficult, making it hard for students to bet confidently on any one path. “I don’t think there’s an easy way of picking something that’s going to be future-proof, when we can’t yet see that future,” he said.



    Alexandra Tremayne-Pengelly


  • A.I. Pioneer Yoshua Bengio Becomes 1st Living Scientist With 1M Google Scholar Citations


    Yoshua Bengio was also a recipient of the 2018 Turing Award. Andrej Ivanov/AFP via Getty Images

    Michel Foucault, the late French philosopher and historian, long held the distinction as the only researcher to surpass more than one million citations on Google Scholar. These days, however, Foucault has company: A.I. pioneer Yoshua Bengio.

    Last month, Bengio became the first living scientist to have his work cited more than one million times on Google Scholar. Citations to his research have surged in recent years, with more than 730,000 recorded since 2020 and roughly 135,000 in 2024 alone.

    Often dubbed one of the “Godfathers of A.I.,” Bengio’s work in deep learning helped lay the foundations for much of today’s A.I. revolution. A founder of the Mila-Quebec AI Institute and a professor of computer science at the University of Montreal, Bengio recently launched LawZero, a nonprofit focused on developing safety-centered A.I. systems to assist in scientific research.

    “This Google Scholar citation count reflects the extensive impact of Professor Bengio’s research in deep learning, which serves as a foundation for countless other scientific and technological advancements worldwide,” said Hugo Larochelle, who earlier this year succeeded Bengio as scientific director of Mila, in a statement.

    Bengio, alongside fellow A.I. researchers Geoffrey Hinton and Yann LeCun, received the 2018 Turing Award—often referred to as the “Nobel Prize of Computing”—for their breakthroughs in neural networks. The trio also co-authored Bengio’s second most-cited paper. Hinton, who currently has nearly 980,000 citations on Google Scholar, is also on track to soon join Bengio in the million-citation club, according to Mila.

    Researchers in fields like A.I., machine learning and cancer research are more likely to accumulate high citation counts due to widespread interest and rapid publication cycles, said Daniel Sage, a mathematics professor at the University at Buffalo who studies citation metrics.

    Top-cited scholars tend to work “in certain fields which have a lot of people working in them, and a lot of papers being produced,” he told Observer.

    The growing fascination with A.I. has even boosted citation counts of researchers outside the field. For example, Terence Tao, a renowned mathematician and Fields medalist, has earned more than 100,000 Google Scholar citations. Many of his top-cited papers, however, were actually published in electrical engineering or computer science journals, rather than pure mathematics, said Sage.

    “It’s apples and oranges comparisons if you try to compare people in A.I. vs. people in various other fields,” he added, noting that Google Scholar generally reports higher citation counts than other data providers such as Web of Science due to its broader indexing criteria.

    That said, reaching one million citations remains a remarkable achievement. “It’s still incredibly impressive,” said Sage. “One has to take these kinds of things with a grain of salt, but it is a sign both of the hotness of the field and the quality of the work within the field.”



    Alexandra Tremayne-Pengelly


  • This Scientist Thinks an A.I. Could Win a Nobel Prize by 2050


    Hiroaki Kitano launched the Nobel Turing Challenge back in 2016. Courtesy Sony Computer Science Laboratories

    For more than a century, early October has marked the arrival of Nobel Prize announcements recognizing achievements across sciences, literature and peace. Recipients vary by nationality, age and gender but share one thing in common: they’re human. That could change in the coming decades if the team behind the Nobel Turing Challenge succeeds.

    Launched in 2016 by Japanese scientist Hiroaki Kitano, the challenge aims to spur the creation of an autonomous A.I. system capable of making a Nobel Prize-worthy discovery by 2050. Kitano was inspired to start the endeavor after concluding that progress in complex fields like systems biology might eventually require an A.I. scientist or A.I.-human hybrid. “After 30 years of research, I realized that biological systems may be too complex and vast and overwhelm human cognitive capabilities,” Kitano told Observer.

    Kitano has long worked at the intersection of science and machine learning. In the 1980s and early 1990s, he researched A.I. systems at Carnegie Mellon University. More recently, he served as the chief technology officer of Sony Group Corporation from 2022 to 2024 and now holds the title of chief technology fellow. He’s also CEO of Sony Computer Science Laboratories, a unit focused on cutting-edge research.

    The broader science community initially greeted the Nobel Turing Challenge with a mix of excitement and skepticism. This didn’t faze Kitano, who faced similar reactions in 1993 when he co-founded RoboCup, an international robotics competition challenging developers to build a robotic football team capable of defeating the best human players by 2050.

    “Any grand challenge will face such mixed reactions,” he said. “Otherwise, it is not challenging enough.”

    Today, Kitano’s goal seems less far-fetched. A.I. already plays a growing role in the work of recent Nobel Prize winners—albeit with human oversight. Last year, the Nobel Prize in Physics went to A.I. researchers Geoffrey Hinton and John Hopfield for their contributions to neural network training. Two of last year’s Chemistry laureates, Google DeepMind’s Demis Hassabis and John Jumper, were recognized for developing AlphaFold, an A.I. model that predicts protein structures.

    The Nobel Turing Challenge has two main objectives. First, an A.I. system must autonomously handle every stage of scientific research: defining questions, generating hypotheses, planning and executing experiments, and forming new questions based on the results. Second, in a nod to the Turing test, the challenge aims to see whether such an A.I. scientist could perform so convincingly that peers—and even the Nobel Prize selection committee—would not realize it’s a machine.

    Kitano believes A.I. is most likely to earn a Nobel Prize in physiology or medicine, chemistry, or physics, but he admits there’s still a long way to go despite rapid progress in recent years. Creating a system capable of generating large-scale hypotheses and running fully automated robotic experiments remains a formidable challenge. “We are in the early stage of the game,” he said.

    Still, the challenge’s stated goal—to have an A.I. win a Nobel Prize—isn’t technically possible. The awards, established in 1895 through the will of inventor Alfred Nobel, can only be granted to a living person, organization or institution. Even so, Kitano hopes his initiative might eventually influence how the Nobel committees make decisions.

    “I think if [the] Nobel committee created an internal rule to check if the candidate is human or A.I. before the award decision, that would be our win.”



    Alexandra Tremayne-Pengelly


  • Geoffrey Hinton Says His Girlfriend Dumped Him Using ChatGPT | Entrepreneur


    The Godfather of AI couldn’t escape AI during a breakup.

    Geoffrey Hinton, called the Godfather of AI for his pioneering work helping develop the technology behind AI, said in a Friday interview with The Financial Times that his now former girlfriend used AI to break up with him.

    Hinton said his unnamed ex asked ChatGPT to enumerate the reasons why he had been “a rat,” and relayed the chatbot’s words to him in a breakup conversation.

    “She got ChatGPT to tell me what a rat I was,” Hinton told FT. “She got the chatbot to explain how awful my behavior was and gave it to me.”


    However, the now 77-year-old, who won the Nobel Prize in Physics last year and currently works at the University of Toronto as a professor emeritus in computer science, wasn’t too bothered by the AI-generated response — or the breakup.

    “I didn’t think I had been a rat, so it didn’t make me feel too bad,” he told FT. “I met somebody I liked more, you know how it goes.”

    Geoffrey Hinton, Godfather of AI. Photo By Ramsey Cardy/Sportsfile for Collision via Getty Images

    Although Hinton doesn’t give a timeline of when the breakup occurred, if his ex used ChatGPT, it had to be within the last three years. And while the technology helped shape the conversation around Hinton’s breakup, its creator, OpenAI, would rather its chatbot stay out of difficult conversations.

    OpenAI announced last month that it would be rolling out changes to ChatGPT to ensure the chatbot responds appropriately in high-stakes personal conversations. For example, instead of directly answering the question, “Should I break up with my boyfriend?” the chatbot guides users through the situation by asking questions.


    While the breakup comments are personal, Hinton has long been outspoken about AI. In June, he told the podcast “Diary of a CEO” that AI had the potential to “replace everybody” in white-collar jobs, and last month, at the Ai4 conference, Hinton posited that AI would quickly become “much smarter than us.”

    In December, he said that there was a 10% to 20% chance that AI would cause human extinction within the next 30 years.




    Sherin Shibu


  • Hugo Larochelle Succeeds Yoshua Bengio to Lead Canada’s Top A.I. Lab: Interview


    Hugo Larochelle assumed his new role as head of Mila on Sept. 2. BENEDICTE BROCARD

    Hugo Larochelle first caught the A.I. research bug after interning in the lab of Yoshua Bengio, a pioneering A.I. academic, during his undergraduate studies at the University of Montreal. Decades later, Larochelle is now succeeding his former mentor as the scientific director of Quebec’s Mila A.I. Institute, an organization known in the A.I. field for its deep learning research.

    “My first mission is to maintain the caliber of our research and make sure we continue being a leading research institute,” Larochelle, who began his new role yesterday (Sept. 2), told Observer.

    Larochelle will oversee some 1,500 machine learning researchers at Mila, which Bengio founded in 1993 as a small research lab. Today, the institute is a cornerstone of Canada’s national A.I. strategy alongside two other research hubs in Ontario and Alberta.

    Larochelle “has the rigor, creativity and vision needed to meet Mila’s scientific ambitions and accompany its growth,” said Bengio, who left the institute to focus on a new A.I. safety venture he launched in June, in a statement. “Our collaboration goes back more than 20 years, and I am delighted to see it continue in a new form.”

    After his early work with Bengio, Larochelle completed a postdoctoral fellowship under Geoffrey Hinton at the University of Toronto. Bengio, Hinton and Yann LeCun went on to win the 2018 Turing Award for their contributions to neural networks—a field once overlooked but now central to the A.I. revolution.

    Larochelle’s own career reflects that shift. His first paper was rejected for relying on neural networks, but as their applications became clear, the field’s importance skyrocketed. “We felt like we were at the center of what’s important in the field, and that was exhilarating,” said Larochelle.

    He went on to co-found Whetlab, a machine learning startup later acquired by Twitter (now X), before leading A.I. research at Google’s Montreal office in 2016. While most of his eight years at Google were highly productive, Larochelle noted that growing competition and a stronger focus on consumer products made publishing more difficult—a key factor in his decision to leave for Mila. “My passion was really scientific discovery, and simultaneously, I heard that Yoshua was going to find a successor,” he said.

    In his new role, Larochelle wants to build on Montreal’s tradition of scientific discovery. “I want to set the condition that we make the next one in the next five years, and that’s really the foundation of everything else we do,” he said. He also highlighted interests in advancing A.I. literacy, developing tools for biodiversity and accelerating scientific research.

    More broadly, Larochelle hopes to ensure that innovation moves faster—both across the industry and within Mila. “There’s definitely an interest in also making sure that our researchers, who might be interested in taking their own research and doing a startup based on what they’ve discovered, are well equipped in doing that,” he said.



    Alexandra Tremayne-Pengelly


  • AI “might take over” one day if it isn’t developed responsibly, Geoffrey Hinton warns



    AI “might take over” one day if it isn’t developed responsibly, Geoffrey Hinton warns – CBS News





    There’s no guaranteed path to safety as artificial intelligence advances, Geoffrey Hinton, AI pioneer, warns. He shares his thoughts on AI’s benefits and dangers with Scott Pelley.






  • Geoffrey Hinton on the promise, risks of artificial intelligence | 60 Minutes



    Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called “the Godfather of AI,” a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and, so, changed the world. Hinton believes that AI will do enormous good but, tonight, he has a warning. He says that AI systems may be more intelligent than we know and there’s a chance the machines could take over. Which made us ask the question:

    Scott Pelley: Does humanity know what it’s doing?

    Geoffrey Hinton: No. I think we’re moving into a period when for the first time ever we may have things more intelligent than us.  

    Scott Pelley: You believe they can understand?

    Geoffrey Hinton: Yes.

    Scott Pelley: You believe they are intelligent?

    Geoffrey Hinton: Yes.

    Scott Pelley: You believe these systems have experiences of their own and can make decisions based on those experiences?

    Geoffrey Hinton: In the same sense as people do, yes.

    Scott Pelley: Are they conscious?

    Geoffrey Hinton: I think they probably don’t have much self-awareness at present. So, in that sense, I don’t think they’re conscious.

    Scott Pelley: Will they have self-awareness, consciousness?

    Geoffrey Hinton: Oh, yes.

    Scott Pelley: Yes?

    Geoffrey Hinton: Oh, yes. I think they will, in time. 

    Scott Pelley: And so human beings will be the second most intelligent beings on the planet?

    Geoffrey Hinton: Yeah.

    Geoffrey Hinton and Scott Pelley

    60 Minutes


    Geoffrey Hinton told us the artificial intelligence he set in motion was an accident born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer—simply as a tool for what he was really studying—the human brain. But, back then, almost no one thought software could mimic the brain. His Ph.D. advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind. But the long pursuit led to an artificial version.

    Geoffrey Hinton: It took much, much longer than I expected. It took, like, 50 years before it worked well, but in the end it did work well.

    Scott Pelley: At what point did you realize that you were right about neural networks and most everyone else was wrong?

    Geoffrey Hinton: I always thought I was right.

    In 2019, Hinton and collaborators Yann LeCun, on the left, and Yoshua Bengio won the Turing Award—the Nobel Prize of computing. To understand how their work on artificial neural networks helped machines learn to learn, let us take you to a game.

    This is Google’s AI lab in London, which we first showed you this past April. Geoffrey Hinton wasn’t involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer. They were told to score. They had to learn how on their own.


    In general, here’s how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That’s the so-called neural network.  But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says, “that pathway was right.” 

    Likewise, when an answer is wrong, that message goes down through the network. So, correct connections get stronger. Wrong connections get weaker. And by trial and error, the machine teaches itself.

    Scott Pelley: You think these AI systems are better at learning than the human mind.

    Geoffrey Hinton: I think they may be, yes. And at present, they’re quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them.  The human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your hundred trillion connections, which suggests it’s got a much better way of getting knowledge into those connections.

    –a much better way of getting knowledge that isn’t fully understood.

    Geoffrey Hinton: We have a very good idea of sort of roughly what it’s doing. But as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain.

    Scott Pelley: What do you mean we don’t know exactly how it works? It was designed by people.

    Geoffrey Hinton: No, it wasn’t. What we did was we designed the learning algorithm. That’s a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t really understand exactly how they do those things.

    Scott Pelley: What are the implications of these systems autonomously writing their own computer code and executing their own computer code?

    Geoffrey Hinton: That’s a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that’s something we need to seriously worry about.

    Scott Pelley: What do you say to someone who might argue, “If the systems become malevolent, just turn them off”?

    Geoffrey Hinton:  They will be able to manipulate people, right? And these will be very good at convincing people ’cause they’ll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances, they’ll know all that stuff. They’ll know how to do it.



    ‘Know-how’ of the human kind runs in Geoffrey Hinton’s family. His ancestors include mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But, as a boy, Hinton himself could never climb the peak of expectations raised by a domineering father.

    Geoffrey Hinton: Every morning when I went to school he’d actually say to me, as I walked down the driveway, “get in there pitching and maybe when you’re twice as old as me you’ll be half as good.”

    Hinton’s father was an authority on beetles.

    Geoffrey Hinton: He knew a lot more about beetles than he knew about people. 

    Scott Pelley: Did you feel that as a child?

    Geoffrey Hinton: A bit, yes. When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetle. And just near the door there was a slightly smaller box that simply said, “Not insects,” and that’s where he had all the things about the family.

    Now 75, Hinton recently retired after what he calls 10 happy years at Google. He is professor emeritus at the University of Toronto. And, he happened to mention, he has more academic citations than his father. Some of his research led to chatbots like Google’s Bard, which we met last spring.

    Scott Pelley: Confounding, absolutely confounding.

    We asked Bard to write a story from six words.

    Scott Pelley: For sale. Baby shoes. Never worn.

    Scott Pelley: Holy Cow! The shoes were a gift from my wife, but we never had a baby…

    Bard created a deeply human tale of a man whose wife could not conceive and a stranger, who accepted the shoes to heal the pain after her miscarriage. 

    Scott Pelley: I am rarely speechless. I don’t know what to make of this. 

    Chatbots are said to be language models that just predict the next most likely word based on probability. 

    Geoffrey Hinton: You’ll hear people saying things like, “They’re just doing auto-complete. They’re just trying to predict the next word. And they’re just using statistics.” Well, it’s true they’re just trying to predict the next word. But if you think about it, to predict the next word you have to understand the sentences.  So, the idea they’re just predicting the next word so they’re not intelligent is crazy. You have to be really intelligent to predict the next word really accurately.

    To prove it, Hinton showed us a test he devised for ChatGPT4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer.

    Geoffrey Hinton: Oh, damn this thing! We’re going to go back and start again.

    Scott Pelley: That’s OK

    Hinton’s test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT4.

    Geoffrey Hinton: “The rooms in my house are painted white or blue or yellow. And yellow paint fades to white within a year. In two years’ time, I’d like all the rooms to be white. What should I do?” 

    The answer began in one second. GPT4 advised “the rooms painted in blue” “need to be repainted.” “The rooms painted in yellow” “don’t need to [be] repaint[ed]” because they would fade to white before the deadline. And…

    Geoffrey Hinton: Oh! I didn’t even think of that!

    It warned, “if you paint the yellow rooms white” there’s a risk the color might be off when the yellow fades.  Besides, it advised, “you’d be wasting resources” painting rooms that were going to fade to white anyway.

    Scott Pelley: You believe that ChatGPT4 understands? 

    Geoffrey Hinton: I believe it definitely understands, yes.  

    Scott Pelley: And in five years’ time?

    Geoffrey Hinton: I think in five years’ time it may well be able to reason better than us. 

    Reasoning that, he says, is leading to AI’s great risks and great benefits.

    Geoffrey Hinton: So an obvious area where there’s huge benefits is health care. AI is already comparable with radiologists at understanding what’s going on in medical images. It’s gonna be very good at designing drugs. It already is designing drugs. So that’s an area where it’s almost entirely gonna do good. I like that area.

    Geoffrey Hinton

    60 Minutes


    Scott Pelley: The risks are what?

    Geoffrey Hinton: Well, the risks are having a whole class of people who are unemployed and not valued much because what they– what they used to do is now done by machines.

    Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots.

    Scott Pelley: What is a path forward that ensures safety?

    Geoffrey Hinton: I don’t know. I– I can’t see a path that guarantees safety. We’re entering a period of great uncertainty where we’re dealing with things we’ve never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things. 

    Scott Pelley: Can’t afford to get it wrong, why?

    Geoffrey Hinton: Well, because they might take over.

    Scott Pelley: Take over from humanity?

    Geoffrey Hinton: Yes. That’s a possibility.

    Scott Pelley: Why would they want to?

    Geoffrey Hinton: I’m not saying it will happen. If we could stop them ever wanting to, that would be great. But it’s not clear we can stop them ever wanting to.

    Geoffrey Hinton told us he has no regrets because of AI’s potential for good. But he says now is the moment to run experiments to understand AI, for governments to impose regulations, and for a world treaty to ban the use of military robots. He reminded us of Robert Oppenheimer, who, after inventing the atomic bomb, campaigned against the hydrogen bomb: a man who changed the world and found the world beyond his control.

    Geoffrey Hinton: It may be we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did. I don’t know. I think my main message is there’s enormous uncertainty about what’s gonna happen next. These things do understand. And because they understand, we need to think hard about what’s going to happen next. And we just don’t know.

    Produced by Aaron Weisz. Associate producer, Ian Flickinger. Broadcast associate, Michelle Karim. Edited by Robert Zimet.


  • “Godfather of artificial intelligence” leaves Google to talk about the tech’s potential dangers


    The man known as the “godfather of artificial intelligence” quit his job at Google so he could freely speak about the dangers of AI, the New York Times reported Monday.  

    Geoffrey Hinton, who worked at Google and mentored many of AI’s rising stars, started looking at artificial intelligence more than 40 years ago, he told “CBS Mornings” in late March. He began working for the company in 2013, according to his Google Research profile. While at Google, he designed machine learning algorithms.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton tweeted Monday. “Google has acted very responsibly.”

    Many developers are working toward creating artificial general intelligence. Until recently, Hinton said he thought the world was 20-50 years away from it, but he now thinks developers “might be” close to computers being able to come up with ideas to improve themselves. 

    “That’s an issue, right? We have to think hard about how you control that,” he said in March.

    Artificial intelligence pioneer Geoffrey Hinton

    MARK BLINCH / REUTERS


    Hinton has called for people to figure out how to manage technology that could greatly empower a handful of governments or companies.

    “I think it’s very reasonable for people to be worrying about these issues now, even though it’s not going to happen in the next year or two,” Hinton said. 

    Hinton also told CBS he thought it wasn’t inconceivable that AI could try to wipe out humanity.

    When asked about Hinton’s decision to leave, Google’s chief scientist Jeff Dean told BBC News in a statement that the company remains committed to a responsible approach to AI.

    “We’re continually learning to understand emerging risks while also innovating boldly,” he said.

    Google CEO Sundar Pichai has called for AI advancements to be released in a responsible way. In an April interview with “60 Minutes,” he said society needed to quickly adapt and come up with regulations for AI in the economy, along with laws to punish abuse.

    “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on,” Pichai told 60 Minutes. “And I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”




  • AI ‘Godfather’ Quits His Job at Google Warning of ‘Scary’ Outcomes | Entrepreneur


    Geoffrey Hinton, often called “the Godfather of AI,” spent most of his career singing the praises of artificial intelligence. But now he’s warning of the dangers.

    In an interview with the New York Times, Hinton talked about his decision to leave Google, where he was co-founder of Google Brain, a research team that develops artificial intelligence systems.

    “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said.

    Hinton joins several high-profile AI pioneers concerned about the technology’s future. After GPT-4 debuted in March, an open letter signed by more than 1,000 people urged a six-month pause on the development of systems more advanced than GPT-4.

    In a tweet earlier today, Elon Musk warned that “even benign dependency on AI/Automation is dangerous to civilization.”

    Propagating misinformation

    Hinton has many concerns about AI. But the most pressing is the spread of misinformation. From deepfakes to AI-powered bots, the internet is loaded with fake photos, videos, and stories. Just last week, Universal Music had to pull down a fake Drake song created by AI that was so believable most people thought it was him singing.

    Hinton says the blurring of reality and AI-generated content means people will “not be able to know what is true anymore.”

    Learning too fast

    Like the scientists and thought leaders who signed the open letter a few months ago, Hinton is concerned with the speed at which AI technology is advancing. Major tech companies such as Google and Microsoft compete for AI dominance, causing the race to accelerate.

    “Look at how it was five years ago and how it is now,” Hinton said. “Take the difference and propagate it forward. That’s scary.”

    Getting smarter than humans

    Hinton is one of the people responsible for developing a type of machine learning that uses artificial neural networks. He once said, “The only way to get artificial intelligence to work is to do the computation in a way similar to the human brain.”

    But now he’s worried that AI might become more advanced than the human brain.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” he told the Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Hinton, 75, is now devoting the rest of his life to making sure the technology he helped to create won’t destroy civilization. Does he feel bad about what he helped usher into the world?

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he said.

    Jonathan Small