ReportWire

Tag: geoffrey hinton

  • A.I. Pioneer Yoshua Bengio Becomes 1st Living Scientist With 1M Google Scholar Citations

    Yoshua Bengio was also a recipient of the 2018 Turing Award. Andrej Ivanov/AFP via Getty Images

    Michel Foucault, the late French philosopher and historian, long held the distinction of being the only researcher to surpass one million citations on Google Scholar. These days, however, Foucault has company: A.I. pioneer Yoshua Bengio.

    Last month, Bengio became the first living scientist to have his work cited more than one million times on Google Scholar. Citations to his research have surged in recent years, with more than 730,000 recorded since 2020 and roughly 135,000 in 2024 alone.
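    For readers who want to check figures like these themselves, here is a minimal sketch using the third-party scholarly Python package (pip install scholarly), which scrapes Google Scholar profiles. The function and field names follow that package's documented output and are an assumption about its current behavior, not an official Google API.

    ```python
    # Minimal sketch: fetch an author's total Google Scholar citation count
    # via the third-party `scholarly` package (pip install scholarly).
    from scholarly import scholarly

    # search_author yields candidate profiles; take the first match.
    author = next(scholarly.search_author("Yoshua Bengio"))
    author = scholarly.fill(author, sections=["basics", "indices"])

    # "citedby" is the total citations across all indexed works.
    print(author["name"], author["citedby"])
    ```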

    Often dubbed one of the “Godfathers of A.I.,” Bengio helped lay the foundations for much of today’s A.I. revolution through his work in deep learning. A founder of the Mila-Quebec AI Institute and a professor of computer science at the University of Montreal, he recently launched LawZero, a nonprofit focused on developing safety-centered A.I. systems to assist in scientific research.

    “This Google Scholar citation count reflects the extensive impact of Professor Bengio’s research in deep learning, which serves as a foundation for countless other scientific and technological advancements worldwide,” said Hugo Larochelle, who earlier this year succeeded Bengio as scientific director of Mila, in a statement.

    Bengio, alongside fellow A.I. researchers Geoffrey Hinton and Yann LeCun, received the 2018 Turing Award—often referred to as the “Nobel Prize of Computing”— for their breakthroughs in neural networks. The trio also co-authored Bengio’s second most-cited paper. Hinton, who currently has nearly 980,000 citations on Google Scholar, is also on track to soon join Bengio in the million-citation club, according to Mila.

    Researchers in fields like A.I., machine learning and cancer research are more likely to accumulate high citation counts due to widespread interest and rapid publication cycles, said Daniel Sage, a mathematics professor at the University at Buffalo who studies citation metrics.

    Top-cited scholars tend to work “in certain fields which have a lot of people working in them, and a lot of papers being produced,” he told Observer.

    The growing fascination with A.I. has even boosted citation counts of researchers outside the field. For example, Terence Tao, a renowned mathematician and Fields medalist, has earned more than 100,000 Google Scholar citations. Many of his top-cited papers, however, were actually published in electrical engineering or computer science journals, rather than pure mathematics, said Sage.

    “It’s apples and oranges comparisons if you try to compare people in A.I. vs. people in various other fields,” he added, noting that Google Scholar generally reports higher citation counts than other data providers such as Web of Science due to its broader indexing criteria.

    That said, reaching one million citations remains a remarkable achievement. “It’s still incredibly impressive,” said Sage. “One has to take these kinds of things with a grain of salt, but it is a sign both of the hotness of the field and the quality of the work within the field.”

    A.I. Pioneer Yoshua Bengio Becomes 1st Living Scientist With 1M Google Scholar Citations

    Alexandra Tremayne-Pengelly

    Source link

  • This Scientist Thinks an A.I. Could Win a Nobel Prize by 2050

    Hiroaki Kitano launched the Nobel Turing Challenge back in 2016. Courtesy Sony Computer Science Laboratories

    For more than a century, early October has marked the arrival of Nobel Prize announcements recognizing achievements across sciences, literature and peace. Recipients vary by nationality, age and gender but share one thing in common: they’re human. That could change in the coming decades if the team behind the Nobel Turing Challenge succeeds.

    Launched in 2016 by Japanese scientist Hiroaki Kitano, the challenge aims to spur the creation of an autonomous A.I. system capable of making a Nobel Prize-worthy discovery by 2050. Kitano was inspired to start the endeavor after concluding that progress in complex fields like systems biology might eventually require an A.I. scientist or A.I.-human hybrid. “After 30 years of research, I realized that biological systems may be too complex and vast and overwhelm human cognitive capabilities,” Kitano told Observer.

    Kitano has long worked at the intersection of science and machine learning. In the 1980s and early 1990s, he researched A.I. systems at Carnegie Mellon University. More recently, he served as the chief technology officer of Sony Group Corporation from 2022 to 2024 and now holds the title of chief technology fellow. He’s also CEO of Sony Computer Science Laboratories, a unit focused on cutting-edge research.

    The broader science community initially greeted the Nobel Turing Challenge with a mix of excitement and skepticism. This didn’t faze Kitano, who faced similar reactions in 1993 when he co-founded RoboCup, an international robotics competition challenging developers to build a robotic football team capable of defeating the best human players by 2050.

    “Any grand challenge will face such mixed reactions,” he said. “Otherwise, it is not challenging enough.”

    Today, Kitano’s goal seems less far-fetched. A.I. already plays a growing role in the work of recent Nobel Prize winners—albeit with human oversight. Last year, the Nobel Prize in Physics went to A.I. researchers Geoffrey Hinton and John Hopfield for their contributions to neural network training. Two of last year’s Chemistry laureates, Google DeepMind’s Demis Hassabis and John Jumper, were recognized for developing AlphaFold, an A.I. model that predicts protein structures.

    The Nobel Turing Challenge has two main objectives. First, an A.I. system must autonomously handle every stage of scientific research: defining questions, generating hypotheses, planning and executing experiments, and forming new questions based on the results. Second, in a nod to the Turing test, the challenge aims to see whether such an A.I. scientist could perform so convincingly that peers—and even the Nobel Prize selection committee—would not realize it’s a machine.
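    The first objective describes a closed loop. Below is a minimal Python sketch of that loop's structure; every function in it is a hypothetical stub standing in for what would, in practice, be a large A.I. and laboratory-robotics system.

    ```python
    # Hypothetical sketch of the closed research loop the challenge describes.
    # Each helper is a stub, not part of any real system.
    def generate_hypotheses(question: str) -> list[str]:
        return [f"candidate explanation for: {question}"]

    def run_experiment(hypothesis: str) -> dict:
        # A real system would plan and physically execute an experiment here.
        return {"hypothesis": hypothesis, "supported": False}

    def autonomous_scientist(question: str, cycles: int = 3) -> list[dict]:
        findings = []
        for _ in range(cycles):
            for hypothesis in generate_hypotheses(question):
                findings.append(run_experiment(hypothesis))
            # Form the next question from what the experiments showed.
            question = f"follow-up raised by results on: {question}"
        return findings

    print(autonomous_scientist("why does this protein misfold?"))
    ```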

    Kitano believes A.I. is most likely to earn a Nobel Prize in physiology or medicine, chemistry, or physics, but he admits there’s still a long way to go despite rapid progress in recent years. Creating a system capable of generating large-scale hypotheses and running fully automated robotic experiments remains a formidable challenge. “We are in the early stage of the game,” he said.

    Still, the challenge’s stated goal—to have an A.I. win a Nobel Prize—isn’t technically possible. The awards, established in 1895 through the will of inventor Alfred Nobel, can only be granted to a living person, organization or institution. Even so, Kitano hopes his initiative might eventually influence how the Nobel committees make decisions.

    “I think if [the] Nobel committee created an internal rule to check if the candidate is human or A.I. before the award decision, that would be our win.”

    This Scientist Thinks an A.I. Could Win a Nobel Prize by 2050

    Alexandra Tremayne-Pengelly

    Source link

  • Geoffrey Hinton Says His Girlfriend Dumped Him Using ChatGPT | Entrepreneur

    The Godfather of AI couldn’t escape AI during a breakup.

    Geoffrey Hinton, called the Godfather of AI for his pioneering work on the technology underpinning it, said in a Friday interview with the Financial Times that his now former girlfriend used AI to break up with him.

    Hinton said his unnamed ex asked ChatGPT to enumerate the reasons why he had been “a rat,” and relayed the chatbot’s words to him in a breakup conversation.

    “She got ChatGPT to tell me what a rat I was,” Hinton told FT. “She got the chatbot to explain how awful my behavior was and gave it to me.”

    Related: Here’s Why These Two Scientists Won the $1.06 Million 2024 Nobel Prize in Physics

    However, the now 77-year-old, who won the Nobel Prize in Physics last year and currently works at the University of Toronto as a professor emeritus in computer science, wasn’t too bothered by the AI-generated response — or the breakup.

    “I didn’t think I had been a rat, so it didn’t make me feel too bad,” he told FT. “I met somebody I liked more, you know how it goes.”

    Geoffrey Hinton, Godfather of AI. Photo by Ramsey Cardy/Sportsfile for Collision via Getty Images

    Hinton didn’t say when the breakup occurred, but since his ex used ChatGPT, it had to have happened within the last three years; the chatbot launched in late 2022. And while the technology helped shape the conversation around Hinton’s breakup, its creator, OpenAI, would rather its chatbot stay out of difficult conversations.

    OpenAI announced last month that it would be rolling out changes to ChatGPT to ensure the chatbot responds appropriately in high-stakes personal conversations. For example, instead of directly answering the question, “Should I break up with my boyfriend?” the chatbot guides users through the situation by asking questions.
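    OpenAI has not published how these changes are implemented; the sketch below only illustrates the general pattern of steering a model toward guiding questions with a system prompt, using the openai Python package. The model name and the prompt wording are assumptions for illustration.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Illustrative guardrail only; OpenAI's real safeguards are internal.
    GUARDRAIL = (
        "If the user asks you to decide a high-stakes personal matter, such "
        "as ending a relationship, do not give a verdict. Instead, ask "
        "questions that help the user think the situation through."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": "Should I break up with my boyfriend?"},
        ],
    )
    print(response.choices[0].message.content)
    ```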

    Related: Is Your ChatGPT Session Going On Too Long? The AI Bot Will Now Alert You to Take Breaks

    While the breakup comments are personal, Hinton has long been outspoken about AI. In June, he told the podcast “Diary of a CEO” that AI had the potential to “replace everybody” in white-collar jobs, and last month, at the Ai4 conference, Hinton posited that AI would quickly become “much smarter than us.”

    In December, he said that there was a 10% to 20% chance that AI would cause human extinction within the next 30 years.

    Related: AI Could Cause 99% of All Workers to Be Unemployed in the Next Five Years, Says Computer Science Professor


    Sherin Shibu

    Source link

  • Hugo Larochelle Succeeds Yoshua Bengio to Lead Canada’s Top A.I. Lab: Interview

    Hugo Larochelle assumed his new role as head of Mila on Sept. 2. BENEDICTE BROCARD

    Hugo Larochelle first caught the A.I. research bug after interning in the lab of Yoshua Bengio, a pioneering A.I. academic, during his undergraduate studies at the University of Montreal. Decades later, Larochelle is now succeeding his former mentor as the scientific director of Quebec’s Mila A.I. Institute, an organization known in the A.I. field for its deep learning research.

    “My first mission is to maintain the caliber of our research and make sure we continue being a leading research institute,” Larochelle, who began his new role yesterday (Sept. 2), told Observer.

    Larochelle will oversee some 1,500 machine learning researchers at Mila, which Bengio founded in 1993 as a small research lab. Today, the institute is a cornerstone of Canada’s national A.I. strategy alongside two other research hubs in Ontario and Alberta.

    Larochelle “has the rigor, creativity and vision needed to meet Mila’s scientific ambitions and accompany its growth,” said Bengio, who left the institute to focus on a new A.I. safety venture he launched in June, in a statement. “Our collaboration goes back more than 20 years, and I am delighted to see it continue in a new form.”

    After his early work with Bengio, Larochelle completed a postdoctoral fellowship under Geoffrey Hinton at the University of Toronto. Bengio, Hinton and Yann LeCun went on to win the 2018 Turing Award for their contributions to neural networks—a field once overlooked but now central to the A.I. revolution.

    Larochelle’s own career reflects that shift. His first paper was rejected for relying on neural networks, but as their applications became clear, the field’s importance skyrocketed. “We felt like we were at the center of what’s important in the field, and that was exhilarating,” said Larochelle.

    He went on to co-found Whetlab, a machine learning startup later acquired by Twitter (now X), before joining Google in 2016 to lead A.I. research at its Montreal office. While most of his eight years at Google were highly productive, Larochelle noted that growing competition and a stronger focus on consumer products made publishing more difficult—a key factor in his decision to leave for Mila. “My passion was really scientific discovery, and simultaneously, I heard that Yoshua was going to find a successor,” he said.

    In his new role, Larochelle wants to build on Montreal’s tradition of major scientific discoveries. “I want to set the condition that we make the next one in the next five years, and that’s really the foundation of everything else we do,” he said. He also highlighted interests in advancing A.I. literacy, developing tools for biodiversity and accelerating scientific research.

    More broadly, Larochelle hopes to ensure that innovation moves faster—both across the industry and within Mila. “There’s definitely an interest in also making sure that our researchers, who might be interested in taking their own research and doing a startup based on what they’ve discovered, are well equipped in doing that,” he said.

    Hugo Larochelle Succeeds Yoshua Bengio to Lead Canada’s Top A.I. Lab: Interview

    Alexandra Tremayne-Pengelly

    Source link

  • AI “might take over” one day if it isn’t developed responsibly, Geoffrey Hinton warns

    There’s no guaranteed path to safety as artificial intelligence advances, Geoffrey Hinton, AI pioneer, warns. He shares his thoughts on AI’s benefits and dangers with Scott Pelley.



    Source link


  • Geoffrey Hinton on the promise, risks of artificial intelligence | 60 Minutes


    Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called “the Godfather of AI,” a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and, in doing so, changed the world. Hinton believes that AI will do enormous good but, tonight, he has a warning. He says that AI systems may be more intelligent than we know and there’s a chance the machines could take over. Which made us ask the question:

    Scott Pelley: Does humanity know what it’s doing?

    Geoffrey Hinton: No. I think we’re moving into a period when for the first time ever we may have things more intelligent than us.  

    Scott Pelley: You believe they can understand?

    Geoffrey Hinton: Yes.

    Scott Pelley: You believe they are intelligent?

    Geoffrey Hinton: Yes.

    Scott Pelley: You believe these systems have experiences of their own and can make decisions based on those experiences?

    Geoffrey Hinton: In the same sense as people do, yes.

    Scott Pelley: Are they conscious?

    Geoffrey Hinton: I think they probably don’t have much self-awareness at present. So, in that sense, I don’t think they’re conscious.

    Scott Pelley: Will they have self-awareness, consciousness?

    Geoffrey Hinton: Oh, yes.

    Scott Pelley: Yes?

    Geoffrey Hinton: Oh, yes. I think they will, in time. 

    Scott Pelley: And so human beings will be the second most intelligent beings on the planet?

    Geoffrey Hinton: Yeah.

    Geoffrey Hinton and Scott Pelley. 60 Minutes


    Geoffrey Hinton told us the artificial intelligence he set in motion was an accident born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was really studying: the human brain. But, back then, almost no one thought software could mimic the brain. His Ph.D. advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind. But the long pursuit led to an artificial version.

    Geoffrey Hinton: It took much, much longer than I expected. It took, like, 50 years before it worked well, but in the end it did work well.

    Scott Pelley: At what point did you realize that you were right about neural networks and most everyone else was wrong?

    Geoffrey Hinton: I always thought I was right.

    In 2019, Hinton and his collaborators Yann LeCun and Yoshua Bengio won the Turing Award, the Nobel Prize of computing. To understand how their work on artificial neural networks helped machines learn to learn, let us take you to a game.

    This is Google’s AI lab in London, which we first showed you this past April. Geoffrey Hinton wasn’t involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer. They were told to score. They had to learn how on their own.


    In general, here’s how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That’s the so-called neural network.  But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says, “that pathway was right.” 

    Likewise, when an answer is wrong, that message goes down through the network. So, correct connections get stronger. Wrong connections get weaker. And by trial and error, the machine teaches itself.
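    As a toy illustration of that strengthen-or-weaken rule, here is a minimal sketch: a single layer of connections nudges its weights toward whatever reduces its error on a trivial task. It is a stand-in for the full backpropagation procedure, not code from any system mentioned above.

    ```python
    import numpy as np

    # Toy version of the rule described above: compare the output with the
    # desired answer, then strengthen connections that helped and weaken
    # those that didn't.
    rng = np.random.default_rng(0)
    w = rng.normal(size=3)  # two input "connections" plus a bias term

    # Trivial task: output 1 only when both inputs are 1 (logical AND).
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])  # last col = bias
    y = np.array([0, 0, 0, 1])

    for _ in range(5000):
        out = 1 / (1 + np.exp(-X @ w))   # forward pass through the layer
        error = y - out                  # how right or wrong each answer was
        w += 0.5 * X.T @ error / len(X)  # error flows back into the weights

    print(np.round(out, 2))  # approaches [0, 0, 0, 1] with training
    ```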

    Scott Pelley: You think these AI systems are better at learning than the human mind.

    Geoffrey Hinton: I think they may be, yes. And at present, they’re quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them.  The human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your hundred trillion connections, which suggests it’s got a much better way of getting knowledge into those connections.

    A much better way of getting knowledge, and one that isn’t fully understood.

    Geoffrey Hinton: We have a very good idea of sort of roughly what it’s doing. But as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain.

    Scott Pelley: What do you mean we don’t know exactly how it works? It was designed by people.

    Geoffrey Hinton: No, it wasn’t. What we did was we designed the learning algorithm. That’s a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t really understand exactly how they do those things.

    Scott Pelley: What are the implications of these systems autonomously writing their own computer code and executing their own computer code?

    Geoffrey Hinton: That’s a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that’s something we need to seriously worry about.

    Scott Pelley: What do you say to someone who might argue, “If the systems become malevolent, just turn them off”?

    Geoffrey Hinton:  They will be able to manipulate people, right? And these will be very good at convincing people ’cause they’ll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances, they’ll know all that stuff. They’ll know how to do it.

    Geoffrey Hinton and Scott Pelley. 60 Minutes


    Know-how of the human kind runs in Geoffrey Hinton’s family. His ancestors include the mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But, as a boy, Hinton himself could never climb the peak of expectations raised by a domineering father.

    Geoffrey Hinton: Every morning when I went to school he’d actually say to me, as I walked down the driveway, “get in there pitching and maybe when you’re twice as old as me you’ll be half as good.”

    Dad was an authority on beetles.

    Geoffrey Hinton: He knew a lot more about beetles than he knew about people. 

    Scott Pelley: Did you feel that as a child?

    Geoffrey Hinton: A bit, yes. When he died, we went to his study at the university, and the walls were lined with boxes of papers on different kinds of beetle. And just near the door there was a slightly smaller box that simply said, “Not insects,” and that’s where he had all the things about the family.

    Today, at 75, Hinton recently retired after what he calls 10 happy years at Google. Now, he’s professor emeritus at the University of Toronto. And, he happened to mention, he has more academic citations than his father. Some of his research led to chatbots like Google’s Bard, which we met last spring. 

    Scott Pelley: Confounding, absolutely confounding.

    We asked Bard to write a story from six words.

    Scott Pelley: For sale. Baby shoes. Never worn.

    Scott Pelley: Holy Cow! The shoes were a gift from my wife, but we never had a baby…

    Bard created a deeply human tale of a man whose wife could not conceive and a stranger who accepted the shoes to heal the pain after her miscarriage.

    Scott Pelley: I am rarely speechless. I don’t know what to make of this. 

    Chatbots are said to be language models that just predict the next most likely word based on probability. 
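    Mechanically, “predict the next word” means turning the model’s scores over a vocabulary into probabilities and choosing among them. The sketch below shows just that final step, with a made-up five-word vocabulary and made-up scores; real models do this over tens of thousands of tokens.

    ```python
    import numpy as np

    # Toy next-word step: convert the model's scores (logits) for each
    # candidate word into probabilities, then pick the most likely word.
    vocab = ["white", "blue", "yellow", "paint", "room"]
    logits = np.array([2.1, 0.3, 1.4, -0.5, 0.2])  # made-up scores

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()  # softmax: a probability for every word in the vocab

    next_word = vocab[int(np.argmax(probs))]  # greedy choice
    print(next_word, np.round(probs, 3))
    ```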

    Geoffrey Hinton: You’ll hear people saying things like, “They’re just doing auto-complete. They’re just trying to predict the next word. And they’re just using statistics.” Well, it’s true they’re just trying to predict the next word. But if you think about it, to predict the next word you have to understand the sentences.  So, the idea they’re just predicting the next word so they’re not intelligent is crazy. You have to be really intelligent to predict the next word really accurately.

    To prove it, Hinton showed us a test he devised for GPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer.

    Geoffrey Hinton: Oh, damn this thing! We’re going to go back and start again.

    Scott Pelley: That’s OK

    Hinton’s test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into GPT-4.

    Geoffrey Hinton: “The rooms in my house are painted white or blue or yellow. And yellow paint fades to white within a year. In two years’ time, I’d like all the rooms to be white. What should I do?” 

    The answer began in one second: GPT-4 advised that “the rooms painted in blue” “need to be repainted,” while “the rooms painted in yellow” “don’t need to [be] repaint[ed]” because they would fade to white before the deadline. And…

    Geoffrey Hinton: Oh! I didn’t even think of that!

    It warned, “if you paint the yellow rooms white” there’s a risk the color might be off when the yellow fades.  Besides, it advised, “you’d be wasting resources” painting rooms that were going to fade to white anyway.
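    Anyone with an OpenAI API key can rerun a test like Hinton’s. A minimal sketch follows, using the openai Python package; the broadcast doesn’t specify the exact model variant he used, so the model name here is an assumption.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The riddle as Hinton posed it on camera.
    RIDDLE = (
        "The rooms in my house are painted white or blue or yellow. And "
        "yellow paint fades to white within a year. In two years' time, I'd "
        "like all the rooms to be white. What should I do?"
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the exact variant is not named on air
        messages=[{"role": "user", "content": RIDDLE}],
    )
    print(response.choices[0].message.content)
    ```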

    Scott Pelley: You believe that GPT-4 understands? 

    Geoffrey Hinton: I believe it definitely understands, yes.  

    Scott Pelley: And in five years’ time?

    Geoffrey Hinton: I think in five years’ time it may well be able to reason better than us. 

    Reasoning that he says, is leading to AI’s great risks and great benefits.

    Geoffrey Hinton: So an obvious area where there’s huge benefits is health care. AI is already comparable with radiologists at understanding what’s going on in medical images. It’s gonna be very good at designing drugs. It already is designing drugs. So that’s an area where it’s almost entirely gonna do good. I like that area.

    Geoffrey Hinton. 60 Minutes


    Scott Pelley: The risks are what?

    Geoffrey Hinton: Well, the risks are having a whole class of people who are unemployed and not valued much because what they– what they used to do is now done by machines.

    Other immediate risks he worries about include fake news, unintended bias in employment and policing and autonomous battlefield robots.

    Scott Pelley: What is a path forward that ensures safety?

    Geoffrey Hinton: I don’t know. I– I can’t see a path that guarantees safety. We’re entering a period of great uncertainty where we’re dealing with things we’ve never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things. 

    Scott Pelley: Can’t afford to get it wrong, why?

    Geoffrey Hinton: Well, because they might take over.

    Scott Pelley: Take over from humanity?

    Geoffrey Hinton: Yes. That’s a possibility.

    Scott Pelley: Why would they want to?

    Geoffrey Hinton: I’m not saying it will happen. If we could stop them ever wanting to, that would be great. But it’s not clear we can stop them ever wanting to.

    Geoffrey Hinton told us he has no regrets because of AI’s potential for good. But he says now is the moment to run experiments to understand AI, for governments to impose regulations and for a world treaty to ban the use of military robots. He reminded us of Robert Oppenheimer, who, after inventing the atomic bomb, campaigned against the hydrogen bomb: a man who changed the world and found the world beyond his control.

    Geoffrey Hinton: It may be we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did. I don’t know. I think my main message is there’s enormous uncertainty about what’s gonna happen next. These things do understand. And because they understand, we need to think hard about what’s going to happen next. And we just don’t know.

    Produced by Aaron Weisz. Associate producer, Ian Flickinger. Broadcast associate, Michelle Karim. Edited by Robert Zimet.

    Source link

  • “Godfather of artificial intelligence” leaves Google to talk about the tech’s potential dangers


    The man known as the “godfather of artificial intelligence” quit his job at Google so he could freely speak about the dangers of AI, the New York Times reported Monday.  

    Geoffrey Hinton, who worked at Google and has mentored many of AI’s rising stars, started looking at artificial intelligence more than 40 years ago, he told “CBS Mornings” in late March. He started working for the company in 2013, according to his Google Research profile. While at Google, he designed machine learning algorithms.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton tweeted Monday. “Google has acted very responsibly.”

    Many developers are working toward creating artificial general intelligence. Hinton said that until recently he thought the world was 20 to 50 years away from it, but he now thinks developers “might be” close to computers being able to come up with ideas to improve themselves. 

    “That’s an issue, right? We have to think hard about how you control that,” he said in March.

    Artificial intelligence pioneer Geoffrey Hinton. Mark Blinch/Reuters


    Hinton has called for people to figure out how to manage technology that could greatly empower a handful of governments or companies.

    “I think it’s very reasonable for people to be worrying about these issues now, even though it’s not going to happen in the next year or two,” Hinton said. 

    Hinton also told CBS he thought it wasn’t inconceivable that AI could try to wipe out humanity.

    When asked about Hinton’s decision to leave, Google’s chief scientist Jeff Dean told BBC News in a statement that the company remains committed to a responsible approach to AI.

    “We’re continually learning to understand emerging risks while also innovating boldly,” he said.

    Google CEO Sundar Pichai has called for AI advancements to be released in a responsible way. In an April interview with “60 Minutes,” he said society needed to quickly adapt and come up with regulations for AI in the economy, along with laws to punish abuse.

    “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on,” Pichai told 60 Minutes. “And I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”



    Source link

  • AI ‘Godfather’ Quits His Job at Google Warning of ‘Scary’ Outcomes | Entrepreneur


    Geoffrey Hinton, often called “the Godfather of AI,” spent most of his career singing the praises of artificial intelligence. But now he’s warning of the dangers.

    In an interview with the New York Times, Hinton talked about his decision to leave Google, where he worked on Google Brain, a research team that develops artificial intelligence systems.

    “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said.

    Hinton joins several high-profile AI pioneers concerned about the technology’s future. After GPT-4 debuted in March, an open letter signed by more than 1,000 people urged a six-month pause on the development of systems more powerful than GPT-4.

    In a tweet earlier today, Elon Musk warned that “even benign dependency on AI/Automation is dangerous to civilization.”

    Propagating misinformation

    Hinton has many concerns about AI. But the most pressing is the spread of misinformation. From deepfakes to AI-powered bots, the internet is loaded with fake photos, videos, and stories. Just last week, Universal Music had to pull down a fake Drake song created by AI that was so believable most people thought it was him singing.

    Hinton says that as real and AI-generated content become harder to tell apart, people will “not be able to know what is true anymore.”

    Learning too fast

    Like the scientists and thought leaders who signed the open letter a few months ago, Hinton is concerned with the speed at which AI technology is advancing. Major tech companies such as Google and Microsoft compete for AI dominance, causing the race to accelerate.

    “Look at how it was five years ago and how it is now,” Hinton said. “Take the difference and propagate it forward. That’s scary.”

    Getting smarter than humans

    Hinton is one of the people responsible for developing a type of machine learning that uses artificial neural networks. He once said, “The only way to get artificial intelligence to work is to do the computation in a way similar to the human brain.”

    But now he’s worried that AI might become more advanced than the human brain.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” he told the Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Hinton, 75, is now devoting the rest of his life to making sure the technology he helped to create won’t destroy civilization. Does he feel bad about what he helped usher into the world?

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he said.

    Jonathan Small

    Source link