ReportWire

Tag: AGI

  • Elon Musk Predicts AGI by 2026 (He Predicted AGI by 2025 Last Year)

    Elon Musk predicts that his company xAI could achieve artificial general intelligence (AGI) within the next couple of years, and maybe as soon as 2026, according to a new report from Business Insider. If it feels like you’ve heard that one before, it’s probably because you have.

    Musk predicted the same thing in 2024, claiming AGI would be achieved by 2025. Take a look at any calendar, and you’ll see that we’re just a few weeks away from the end of 2025.

    “How long until AGI?” asked Logan Kilpatrick, the head of product at Google AI Studio, in May 2024.

    “Next year,” Musk replied, to which Kilpatrick responded, “Big if true.”

    It wasn’t true, of course. But Musk has a long history of, shall we say, optimistic predictions about his own company’s future accomplishments. And his predictions often have ulterior motives.

    Remember when Musk was making the most noise about the dangers of AI and worries that it could destroy the world? The billionaire signed on to a letter in March 2023 calling for a six-month pause in all AI development. It was revealed less than a month later that Musk was secretly building his own AI project at Twitter. By July 2023, Musk had officially announced the creation of xAI, the company that makes his Grok AI chatbot.

    The CEO wasn’t earnestly worried about the risks posed by AI. He was just frustrated that OpenAI was way ahead at the time.

    Musk’s treatment of AGI, or any new technology, largely depends on how he can hype his companies at any given point in time. And the perpetual prospect of achieving AGI, whether you think it would be good or bad for the world, helps drive investment in AI technology, the thing that seems to be propping up the entire U.S. economy at the moment.

    The new report from Business Insider also says that Musk told xAI staff that investment in the private company was going well, with “around $20 billion to $30 billion in funding per year.” An email to xAI with questions about the report was met with an auto-response that simply said “Legacy Media Lies.” Musk has great contempt for the news media and previously had an auto-responder at Twitter that sent a poop emoji.

    Part of the problem in discussing AGI is that there’s no single agreed-upon definition. As IBM describes it, we’ll have achieved AGI when artificial intelligence can “match or exceed the cognitive abilities of human beings across any task.” But obviously defining terms like “cognitive abilities” and “any task” is extremely complicated.

    Other folks like to define AGI as a kind of self-awareness that would make artificial intelligence more like humans. Instead of just regurgitating words from its training data, the AI would understand itself as a kind of consciousness. People in that camp are excited and/or concerned about that theoretical tipping point because they assume it would be the start of the robot revolution and AI’s attempt to destroy humanity. Musk has hyped those fears tremendously, though he’s backed off recently.

    Absent large robotic armies, achieving AGI in the present day with a system that loathes humanity would probably look more like the 1970 sci-fi movie Colossus: The Forbin Project, where a non-humanoid supercomputer seizes control of nuclear weapons to threaten the world. We don’t really have the advanced humanoid robots for a Terminator 2 situation just yet.

    But Musk is working on that too. He predicts Tesla will produce 1 million humanoid Optimus robots per year within the next five years, and they’ll even be babysitting your kids. He just needs to figure out how to get Optimus working without teleoperation before all of that can happen.

    Who knows? AGI could magically be achieved in the next few weeks, and maybe Musk’s old prediction will come true. But the billionaire also has another prediction deadline just over the horizon. Back in October, Musk told Joe Rogan he’d demonstrate a flying car by the end of this year.

    Matt Novak

  • Microsoft AI Chief Warns Pursuing Machine Consciousness Is a Gigantic Waste of Time

    Mustafa Suleyman, the head of Microsoft’s AI division, thinks that AI developers and researchers should stop trying to build conscious AI.

    “I don’t think that is work that people should be doing,” Suleyman told CNBC in an interview last week.

    Suleyman thinks that while AI can definitely get smart enough to reach some form of superintelligence, it is incapable of developing the human emotional experience that is necessary to reach consciousness. At the end of the day, any “emotional” experience that AI seems to experience is just a simulation, he says.

    “Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain,’” Suleyman told CNBC. “It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing.”

    “It would be absurd to pursue research that investigates that question, because they’re not [conscious] and they can’t be,” Suleyman said.

    Consciousness is a tricky thing to explain. There are multiple scientific theories that try to describe what consciousness could be. According to one such theory, posited by the famous philosopher John Searle, who died last month, consciousness is a purely biological phenomenon that cannot be truly replicated by a computer. Many AI researchers, computer scientists, and neuroscientists also subscribe to this belief.

    Even if this theory turns out to be the truth, that doesn’t keep users from attributing consciousness to computers.

    “Unfortunately, because the remarkable linguistic abilities of LLMs are increasingly capable of misleading people, people may attribute imaginary qualities to LLMs,” Polish researchers Andrzej Porebski and Yakub Figura wrote in a study published last week, titled “There is no such thing as conscious artificial intelligence.”

    In an essay published on his blog in August, Suleyman warned against “seemingly conscious AI.”

    “The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions,” Suleyman wrote.

    He argues that AI cannot be conscious, and that the illusion of consciousness it gives could trigger interactions that are “rich in feeling and experience,” a phenomenon that has been dubbed “AI psychosis” in the cultural lexicon.

    There have been numerous high-profile incidents in the past year of AI obsessions driving users to fatal delusions, manic episodes, and even suicide.

    With limited guardrails in place to protect vulnerable users, people are wholeheartedly believing that the AI chatbots they interact with almost every day are having a real, conscious experience. This has led people to “fall in love” with their chatbots, sometimes with fatal consequences, as when a 14-year-old shot himself to “come home” to Character.AI’s personalized chatbot, or when a cognitively impaired man died while trying to get to New York to meet Meta’s chatbot in person.

    “Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness,” Suleyman wrote in the blog post. “We must build AI for people, not to be a digital person.”

    But because the nature of consciousness is still contested, some researchers are growing worried that the technological advancements in AI might outpace our understanding of how consciousness works.

    “If we become able to create consciousness – even accidentally – it would raise immense ethical challenges and even existential risk,” Belgian scientist Axel Cleeremans said last week, announcing a paper he co-wrote calling for consciousness research to become a scientific priority.

    Suleyman himself has been vocal about developing “humanist superintelligence” rather than god-like AI, even though he believes that superintelligence won’t materialize any time within the next decade.

    “I just am more fixated on ‘how is this actually useful for us as a species?’ Like, that should be the task of technology,” Suleyman told the Wall Street Journal earlier this year.

    Ece Yildirim

  • Does A.I. Really Fight Back? What Anthropic’s AGI Tests Reveal About Control and Risk

    Anthropic’s research hints at an unnerving future: one where A.I. doesn’t fight back maliciously but evolves beyond the boundaries we can enforce.

    Does A.I. really fight back? The short answer to this question is “no.” But that answer, of course, hardly satisfies the legitimate, growing unease that many feel about A.I., or the viral fear sparked by recent reports about Anthropic’s A.I. system, Claude. In a widely discussed experiment, Claude appeared to resort to threats of potential blackmail and extortion when faced with the possibility of being shut down. 

    The scene was immediately reminiscent of the most famous—and terrifying—film depiction of an artificial intelligence breaking bad: the HAL 9000 computer in Stanley Kubrick’s 1968 masterpiece, 2001: A Space Odyssey. Panicked by conflicting orders from its home base, HAL murders crew members in their sleep, condemns another member to death in the black void of outer space and attempts to kill Dave Bowman, the remaining crew member, when he tries to disable HAL’s cognitive functions.

    “I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s chillingly calm response to Dave’s command to open the pod bay doors and let him back onto the ship became one of the most famous lines in film history—and the archetype for A.I. gone rogue.

    But how realistic was HAL’s meltdown? And how does today’s Claude resemble HAL? The truth is “not very” and “not much.” HAL had millions of times the processing power of any computing system we have today—after all, he was in a movie, not real life—and it is unthinkable that its programmers would not have him simply default to spitting out an error message or escalating to human oversight if there were conflicting instructions. 

    Claude isn’t plotting revenge

    To understand what happened in Anthropic’s test, it’s crucial to remember what systems like Claude actually do. Claude doesn’t “think.” It “simply” writes out answers one word at a time, drawing from trillions of parameters, or learned associations between words and concepts, to predict the most probable next word. Using extensive computing resources, Claude can string its answers together at an incomprehensibly fast speed compared to humans. So it can appear as if Claude is actually thinking.
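
    To make that mechanism concrete, here is a deliberately tiny sketch in Python of what word-by-word prediction looks like in principle. The probability table is a made-up stand-in for a real model’s trillions of learned parameters; nothing here reflects Anthropic’s actual code.

    ```python
    # Purely illustrative sketch of word-by-word generation: a hypothetical toy
    # probability table stands in for the trillions of learned associations a
    # real model like Claude draws on. This is not Anthropic's code or data.
    import random

    NEXT_WORD_PROBS = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"system": 0.7, "answer": 0.3},
        "a": {"response": 1.0},
        "system": {"responds": 1.0},
        "answer": {"appears": 1.0},
        "response": {"appears": 1.0},
        "responds": {"<end>": 1.0},
        "appears": {"<end>": 1.0},
    }

    def generate(max_words: int = 10) -> str:
        """Pick one word at a time, each in proportion to its learned probability."""
        words, current = [], "<start>"
        for _ in range(max_words):
            candidates = NEXT_WORD_PROBS.get(current, {"<end>": 1.0})
            current = random.choices(list(candidates), weights=list(candidates.values()))[0]
            if current == "<end>":
                break
            words.append(current)
        return " ".join(words)

    print(generate())  # e.g. "the system responds"
    ```

    Scale that loop up by many orders of magnitude and run it fast enough, and the output starts to look like thought, even though each step is still just a weighted guess at the next word.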

    In the scenario where Claude resorted to blackmail and extortion, the program was placed in extreme, specific and artificial circumstances with a limited menu of possible actions. Its response was the mathematical result of probabilistic modeling within a tightly scripted context. This course of action was planted by Claude’s programmers and wasn’t a sign of agency or intent, but rather a consequence of human design. Claude was not auditioning to become a malevolent movie star. 

    Why A.I. fear persists

    As A.I. continues to seize the public’s consciousness, it’s easy to fall prey to scary headlines and over-simplified explanations of A.I. technologies and their capabilities. Humans are hardwired to fear the unknown, and A.I.—complex, opaque and fast-evolving—taps that instinct. But these fears can distort public understanding. It’s essential that everyone involved in A.I. development and usage communicate clearly about what A.I. can actually do, how it does it and its potential capabilities in future iterations.

    A key to achieving a comfort level around A.I. is to gain the ironic understanding that A.I. can indeed be very dangerous. Throughout history, humanity has built tools it couldn’t fully control, from the vast machinery of the Industrial Revolution to the atomic bomb. Ethical boundaries for A.I. must be established collaboratively and globally. Preventing A.I. from facilitating warfare—whether in weapons design, optimizing drone-attack plans or breaching national security systems—should be the top priority of every leader and NGO worldwide. We need to ensure that A.I. is not weaponized for warfare, surveillance or any form of harm. 

    Programming responsibility, not paranoia

    Looking back at Anthropic’s experiment, let’s dissect what really happened. Claude—and it is just computer code at heart, not living DNA—was working within a probability cloud that led it, step by step, to pick the most probable next word in a sentence. It works one word at a time, but at a speed that easily surpasses human ability. Claude’s programmers chose to see if their creation would, in turn, choose a negative option. Its response was shaped more by programming, flawed design and how the scenario was coded than by any machine malice.

    Claude, like ChatGPT and other current A.I. platforms, has access to vast stores of data. The platforms are trained to access specific information related to queries, then predict the most likely responses to produce fluent text. They don’t “decide” in any meaningful, human sense. They don’t have intentions, emotions or even the self-preservation instincts of a single-celled organism, let alone the wherewithal to hatch master plans to extort someone.

    This will remain true even as the growing capabilities of A.I. allow developers to make these systems appear more intelligent, human-like and friendly. It becomes even more important for developers, programmers, policymakers and communicators to demystify A.I.’s behavior and reject unethical results. Clarity is key, both to prevent misuse and to ground perception in fact, not fear. 

    Every transformative technology is dual-use. A hammer can pound a nail or hurt a person. Nuclear energy can provide power to millions of people or threaten to annihilate them. A.I. can make traffic run smoother, speed up customer service, conduct whiz-bang research at lightning speed, or be used to amplify disinformation, deepen inequality and destabilize security. The task isn’t to wonder whether A.I. might fight back, but to ensure humanity doesn’t teach it to. The choice is ours as to whether we corral it, regulate it and keep it focused on the common good.

    Mehdi Paryavi is the Chairman and CEO of the International Data Center Authority (IDCA), the world’s leading Digital Economy think tank and prime consortium of policymakers, investors and developers in A.I., data centers and cloud computing.

    Mehdi Paryavi

  • Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief | TechCrunch

    At the center of every empire is an ideology, a belief system that propels the system forward and justifies expansion – even if the cost of that expansion directly defies the ideology’s stated mission.

    For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today’s AI empire, it’s artificial general intelligence to “benefit all humanity.” And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built. 

    “I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI,” Karen Hao, journalist and bestselling author of “Empire of AI,” told TechCrunch on a recent episode of Equity.

    In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire. 

    “The only way to really understand the scope and scale of OpenAI’s behavior…is actually to recognize that they’ve already grown more powerful than pretty much any nation state in the world, and they’ve consolidated an extraordinary amount of not just economic power, but also political power,” Hao said. “They’re terraforming the Earth. They’re rewiring our geopolitics, all of our lives. And so you can only describe it as an empire.”

    OpenAI has described AGI as “a highly autonomous system that outperforms humans at most economically valuable work,” one that will somehow “elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” 

    These nebulous promises have fueled the industry’s exponential growth — its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.

    Hao says this path wasn’t inevitable, and that scaling isn’t the only way to get more advances in AI. 

    “You can also develop new techniques in algorithms,” she said. “You can improve the existing algorithms to reduce the amount of data and compute that they need to use.”

    But that tactic would have meant sacrificing speed. 

    “When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else,” Hao said. “Speed over efficiency, speed over safety, speed over exploratory research.”

    For OpenAI, she said, the best way to guarantee speed was to take existing techniques and “just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques.”

    OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line. 

    “And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration,” Hao said.

    The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend up to $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will be spent on expanding AI and cloud infrastructure. 

    Meanwhile, the goalposts keep moving, and the loftiest “benefits to humanity” haven’t yet materialized, even as the harms mount: job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages—around $1 to $2 an hour—in roles like content moderation and data labeling.

    Hao said it’s a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.

    She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids — profoundly useful for drug discovery and understanding disease.

    “Those are the types of AI systems that we need,” Hao said. “AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms … because it’s trained on substantially less infrastructure. It does not create content moderation harms because [the datasets don’t have] all of the toxic crap that you hoovered up when you were scraping the internet.” 

    Alongside the quasi-religious commitment to AGI has been a narrative about the importance of beating China in the AI race, so that Silicon Valley can have a liberalizing effect on the world.

    “Literally, the opposite has happened,” Hao said. “The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.”

    Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise huge gains in productivity by automating coding, writing, research, customer support, and other knowledge work.

    But the way OpenAI is structured — part non-profit, part for-profit — complicates how it defines and measures its impact on humanity. And that’s further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.

    Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to confuse its for-profit and non-profit missions — that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.

    Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.

    “Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over,” Hao said. “There’s something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality.”

    Rebecca Bellan

  • Marc Benioff Can’t Get Enough of the AI Hype—Unless You Say ‘AGI’

    Marc Benioff, a guy who has poured money into artificial intelligence investments and claims that AI tools are doing half of the work at Salesforce, isn’t so sure about all the hype around this whole sector all of a sudden. During an appearance on the “20VC” podcast, as spotted by Business Insider, Benioff poured cold water on the concept of “artificial general intelligence,” calling the obsession around the industry’s white whale “hypnosis.”

    During the conversation, podcast host Harry Stebbings—himself a venture capitalist who has lots of money tied up in the success of AI—pointed to a recent interview The Verge conducted with Amazon AGI Labs chief David Luan, in which Luan said there are fewer than 1,000 people in the world who would be “extremely valuable contributors” to building cutting-edge AI systems. Benioff scoffed, not just at Luan’s statement but at his very title. “AGI head, that sounds like an oxymoron,” the Salesforce CEO said.

    Benioff explained that he’s skeptical of the very idea of artificial general intelligence, the theory that AI could one day develop human-like cognitive processing skills for reasoning and learning, rather than just spitting back outputs based on training data. “You’re talking to somebody who is extremely suspect if anybody uses those initials, ‘AGI,’” Benioff said. “I think that we have all been sold a lot of hypnosis around what’s about to happen with AI.”

    He didn’t rule out the possibility of eventually achieving AGI, but stated, “I just realize that isn’t the state of technology today,” and noted that no AI that people have interacted with comes close to that theoretical bar. “It’s not a person, and it’s not intelligent, and it’s not conscious,” he said.

    Benioff is right to sour on the idea of AGI, a concept that has only been muddied by the ongoing insistence by AI firms that it’s right around the corner. Sam Altman, head of OpenAI, recently conceded that his company’s latest model, GPT-5, is not AGI because it doesn’t “continuously learn.” Altman called the model “generally intelligent” but not AGI. Of course, the only official definition that OpenAI has for AGI isn’t a technical one but a monetary one. Microsoft and OpenAI agreed to define AGI as a system that can generate at least $100 billion in profits.

    Of course, Benioff isn’t above AI hype, either. In addition to claiming that one of his companies has farmed out half of all work to AI, he also used the pages of Time Magazine—a publication he owns—to claim AI would result in “a revolution that will fundamentally redefine how humans work, live, and connect with one another.” Now, you won’t believe this, but Salesforce happens to sell AI agents. So Benioff certainly still believes in the AI hype for his own company’s products. It’s just everyone else who is overpromising.

    AJ Dellinger

  • Warren Buffett likens A.I. to atomic bomb in that ‘we won’t be able to un-invent it’ 

    Few nonagenarians would have people clamoring for their take on artificial intelligence. But Warren Buffett, 92, and Charlie Munger, 99, provided their insights on today’s hottest technology at the Berkshire Hathaway annual meeting on Saturday, with a rapt audience looking on.

    Buffett acknowledged A.I.’s “amazing” capabilities. He played around with OpenAI’s ChatGPT when ex-Microsoft CEO Bill Gates showed it to him three months ago. (Microsoft has invested heavily in OpenAI.) But, the Berkshire CEO added:

    “When something can do all kinds of things I get a little bit worried, because I know we won’t be able to un-invent it. We did invent—for very, very good reason—the atom bomb in World War II, and it was enormously important that we did so. But is it good for the next 200 years of the world that the ability…has been unleashed?”

    He noted that Albert Einstein said the atomic bomb changes everything but how men think. (The exact quote was, “The unleashed power of the atom has changed everything save our modes of thinking and we thus drift toward unparalleled catastrophe.”)

    Similarly with A.I., Buffett said, “it can change everything in the world except how men think and behave.”

    As for Munger, the Berkshire vice chairman expressed his characteristic skepticism toward the technology.

    “I am personally skeptical of some of the hype that has gone into artificial intelligence,” he said. “I think old-fashioned intelligence works pretty well.” (That line drew applause from the audience.)

    “There won’t be anything in A.I. that replaces the G,” Buffett added. “I’ll state that unqualifiedly.”

    AGI, or artificial general intelligence, refers to A.I. getting to a point (eventually) where it can figure out a solution to an unfamiliar task. OpenAI defines it as “highly autonomous systems that outperform humans at most economically valuable work.”

    There’s little doubt about the A.I. hype that Munger referred to. In first-quarter earnings calls so far this year, A.I. has been mentioned more than 1,070 times, according to Bloomberg, more than doubling from a year ago, as companies attempt to associate themselves with the technology.

    One exception has been Apple, which addressed A.I. in its earnings call only once, when CEO Tim Cook briefly answered a question about it in the Q&A session. Cook said A.I. is “huge” but cautioned that it’s “very important to be deliberate and thoughtful” about deploying the technology.

    Buffett, as it happens, heaped praise upon Apple at today’s conference, calling it better than any other company in Berkshire Hathaway’s portfolio.

    Buffett also shared his thoughts about A.I. this week with local station WOWT in Omaha, Nebraska, where the conference is held.

    “It’s not going to tell me which stocks to buy or anything of the sort,” Buffett said. “It can tell me every stock that meets a certain criteria, or a criteria, in three seconds or something. But it’s got decided limitations in some ways.”

    “It’s very interesting,” he continued. “It can translate the Constitution into Spanish in one second. But the computer could not tell jokes…I told Bill to bring it back when I can ask it, ‘How are you going to get rid of the human race?’ I want to see what it says—and pull the plug out before it does it.”

    Steve Mollman

  • How you relate to your dog gives hope to the fired engineer who claimed Google A.I. was sentient

    Artificial intelligence will kill us all or solve the world’s biggest problems—or something in between—depending on who you ask. But one thing seems clear: In the years ahead, A.I. will integrate with humanity in one way or another.

    Blake Lemoine has thoughts on how that might best play out. Formerly an A.I. ethicist at Google, the software engineer made headlines last summer by claiming the company’s chatbot generator LaMDA was sentient. Soon after, the tech giant fired him.

    In an interview with Lemoine published on Friday, Futurism asked him about his “best-case hope” for A.I. integration into human life. 

    Surprisingly, he brought our furry canine companions into the conversation, noting that our symbiotic relationship with dogs has evolved over the course of thousands of years.

    “We’re going to have to create a new space in our world for these new kinds of entities, and the metaphor that I think is the best fit is dogs,” he said. “People don’t think they own their dogs in the same sense that they own their car, though there is an ownership relationship, and people do talk about it in those terms. But when they use those terms, there’s also an understanding of the responsibilities that the owner has to the dog.”

    Figuring out some kind of comparable relationship between humans and A.I., he said, “is the best way forward for us, understanding that we are dealing with intelligent artifacts.”

    Many A.I. experts, of course, disagree with his take on the technology, including ones still working for his former employer. After suspending Lemoine last summer, Google accused him of “anthropomorphizing today’s conversational models, which are not sentient.” 

    “Our team—including ethicists and technologists—has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” company spokesman Brian Gabriel said in a statement, though he acknowledged that “some in the broader A.I. community are considering the long-term possibility of sentient or general A.I.” 

    Gary Marcus, an emeritus professor of cognitive science at New York University, called Lemoine’s claims “nonsense on stilts” last summer and is skeptical about how advanced today’s A.I. tools really are. “We put together meanings from the order of words,” he told Fortune in November. “These systems don’t understand the relation between the orders of words and their underlying meanings.”

    But Lemoine isn’t backing down. He noted to Futurism that he had access to advanced systems within Google that the public hasn’t been exposed to yet.

     “The most sophisticated system I ever got to play with was heavily multimodal—not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it,” he said. “That’s the one that I was like, ‘You know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet.”

    He suggested such systems could experience something like emotions. 

    “There’s a chance that—and I believe it is the case—that they have feelings and they can suffer and they can experience joy,” he told Futurism. “Humans should at least keep that in mind when interacting with them.”

    Steve Mollman
