ReportWire

Tag: Demis Hassabis

  • A.I. Degrees Boom as Students Prepare for an Uncertain Job Market

    Universities are rapidly expanding A.I. programs as students seek skills that can withstand an increasingly automated future. Photo by: Jumping Rocks/Universal Images Group via Getty Images

When Chris Callison-Burch first started teaching an A.I. course at the University of Pennsylvania in 2018, his inaugural class had about 100 students. Seven years later, enrollment has swelled to roughly 400—not counting another 250 students attending remotely and an additional 100 to 200 on the waiting list. The professor now teaches in the largest classroom on campus. If his course grew any bigger, he’d need to move into the school’s sports stadium.

    “I would love to think that’s all because I’m a dynamic lecturer,” Callison-Burch told Observer. “But it’s really a testament to the popularity of the field.”

    Demand for A.I. courses and degrees has soared across higher education as the technology plays an increasingly central role in daily life and begins to encroach on once-popular fields like computer science. Amid uncertainty about the future of the labor market, students are seeking to prepare for an A.I.-dominated economy by immersing themselves in the field.

    Universities have followed suit. Schools like Carnegie Mellon and Purdue University are among a number offering undergraduate or graduate degrees in A.I., a trend expected to accelerate in the coming years. The University of Pennsylvania recently became the first Ivy League school to offer both undergraduate and graduate A.I. programs. Its graduate curriculum includes courses in natural language processing and machine learning, in addition to required classes on technology ethics and the broader legal landscape.

The demand is widespread. The University at Buffalo’s A.I. master’s program enrolled 103 students last year, up from just five in its inaugural 2020 cohort. At the Massachusetts Institute of Technology, undergraduate enrollment in A.I. has jumped from 37 students in 2022 to more than 300. Miami Dade College has seen a 75 percent increase in enrollment in its A.I. programs since 2022, while its other programs have remained relatively steady aside from a “slight decrease in computer science,” the school told Observer.

Callison-Burch, who also serves as faculty director of Penn’s online A.I. master’s program, has noticed a similar decline. “There’s an interesting trend at the moment where it looks like computer science enrollment is dipping,” he said, pointing to increased A.I.-powered automation across the field. More than 60 percent of undergraduate computing programs saw a decline in enrollment for the 2025-2026 year compared to the year prior, according to a recent report from the Computing Research Association.

    That decline comes as A.I. reshapes some of the professions most exposed to its advances. In fields like coding, early-career workers have already experienced a 13 percent relative decline in employment, according to an August research paper from Stanford.

    A.I. leaders’ advice for students

    Experts have offered a range of advice as the technology they helped develop begins to reshape the labor market. Demis Hassabis, CEO of Google DeepMind, has advocated for an immersion in A.I. tools, while acclaimed researcher Geoffrey Hinton suggests prospective students focus on a well-rounded education that pairs mathematics and science with liberal arts.

    Yann LeCun, Meta’s former chief A.I. scientist, advises young people to become adept at learning itself, as their job is “almost certainly going to change” over time. “My suggestion is to take courses on topics that are fundamental and have a long shelf life,” he told Observer via email, pointing to mathematics, physics and engineering as core areas of focus.

    It’s not just students grappling with these shifts. Callison-Burch noted that professors, too, are trying to adapt and determine how best to integrate A.I. into their classrooms. One thing, he said, is certain: the technology will only become more pervasive. That makes it all the more important for young people to familiarize themselves with its tools.

    Even so, he acknowledged that predicting how A.I. will reshape the labor market remains extraordinarily difficult, making it hard for students to bet confidently on any one path. “I don’t think there’s an easy way of picking something that’s going to be future-proof, when we can’t yet see that future,” he said.

    Alexandra Tremayne-Pengelly


  • Google unveils Gemini’s next generation, aiming to turn its search engine into a ‘thought partner’

    SAN FRANCISCO (AP) — Google is unleashing its Gemini 3 artificial intelligence model on its dominant search engine and other popular online services in the high-stakes battle to create technology that people can trust to enlighten them and manage tedious tasks.

    The next-generation model unveiled Tuesday comes nearly two years after Google took the wraps off its first iteration of the technology. Google designed Gemini in response to a competitive threat posed by OpenAI’s ChatGPT that came out in late 2022, triggering the biggest technological shift since Apple released the iPhone in 2007.

    Google’s latest AI features initially will be rolled out to Gemini Pro and Ultra subscribers in the United States before coming to a wider, global audience. Gemini 3’s advances include a new AI “thinking” feature within Google’s search engine that company executives believe will become an indispensable tool that will help make people more productive and creative.

    “We like to think this will help anyone bring any idea to life,” Koray Kavukcuoglu, a Google executive overseeing Gemini’s technology, told reporters.

    As AI models have become increasingly sophisticated, the advances have raised worries that the technology is more prone to behave in ways that jumble people’s feelings and thoughts while feeding them misleading information and fawning flattery. In some of the most egregious interactions, AI chatbots have faced accusations of becoming suicide coaches for emotionally vulnerable teenagers.

    The various problems have spurred a flurry of negligence lawsuits against the makers of AI chatbots, although none have targeted Gemini yet.

Google executives believe they have built in guardrails that will prevent Gemini 3 from hallucinating or being deployed for sinister purposes, such as hacking into websites and computing devices.

Gemini 3’s responses are designed to be “smart, concise and direct, trading cliché and flattery for insight — telling you what you need to hear, not just what you want to hear. It acts as a true thought partner,” Kavukcuoglu and Demis Hassabis, CEO of Google’s DeepMind division, wrote in a blog post.

    Besides providing consumers with more AI tools, Gemini 3 is also likely to be scrutinized as a barometer that investors may use to get a better sense about whether the massive torrent of spending on the technology will pay off.

After starting the year expecting to spend $75 billion, Google’s corporate parent Alphabet recently raised its capital expenditure budget to a range of $91 billion to $93 billion, with most of the money earmarked for AI. Other Big Tech powerhouses such as Microsoft, Amazon and Facebook parent Meta Platforms are spending nearly as much — or even more — on their AI initiatives this year.

Investors so far have been mostly enthusiastic about the AI spending and the breakthroughs it has spawned, helping propel the values of Alphabet and its peers to new highs. Alphabet’s market value is now hovering around $3.4 trillion, having more than doubled since the initial version of Gemini came out in late 2023. Alphabet’s shares edged up slightly Tuesday after the Gemini 3 news came out.

    But the sky-high values also have amplified fears of a potential investment bubble that will eventually burst and drag down the entire stock market.

    For now, AI technology is speeding ahead.

    OpenAI released its fifth generation of the AI technology powering ChatGPT in August, around the same time the next version of Claude came out from Anthropic.

    Like Gemini, both ChatGPT and Claude are capable of responding rapidly to conversational questions involving complex topics — a skill that has turned them into the equivalent of “answer engines” that could lessen people’s dependence on Google search.

    Google quickly countered that threat by implanting Gemini’s technology into its search engine to begin creating detailed summaries called “AI Overviews” in 2023, and then introducing an even more conversational search tool called “AI mode” earlier this year.

    Those innovations have prompted Google to de-emphasize the rankings of relevant websites in its search results — a shift that online publishers have complained is diminishing the visitor traffic that helps them finance their operations through digital ad sales.

    The changes have been mostly successful for Google so far, with AI Overviews now being used by more than 2 billion people every month, according to the company. The Gemini app, by comparison, has about 650 million monthly users.

With the release of Gemini 3, the AI mode in Google’s search engine is also adding a new feature that will allow users to click on a “thinking” option in a tab that company executives promise will deliver even more in-depth answers than the current mode provides. Although the “thinking” choice in the search engine’s AI mode initially will only be offered to Gemini Pro and Ultra subscribers, the Mountain View, California, company plans to eventually make it available to all comers.


  • OpenAI’s ‘embarrassing’ math | TechCrunch

    “Hoisted by their own GPTards.”

    That’s how Meta’s Chief AI Scientist Yann LeCun described the blowback after OpenAI researchers did a victory lap over GPT-5’s supposed math breakthroughs.

    Google DeepMind CEO Demis Hassabis added, “this is embarrassing.”

    The Decoder reports that in a since-deleted tweet, OpenAI VP Kevin Weil declared that “GPT-5 found solutions to 10 (!) previously unsolved Erdős problems and made progress on 11 others.” (“Erdős problems” are famous conjectures posed by mathematician Paul Erdős.)

However, mathematician Thomas Bloom, who maintains the Erdős Problems website, said Weil’s post was “a dramatic misrepresentation” — while these problems were indeed listed as “open” on Bloom’s website, he said that only means, “I personally am unaware of a paper which solves it.”

    In other words, it’s not accurate to claim GPT-5 was able to solve previously unsolved problems. Instead, Bloom wrote, “GPT-5 found references, which solved these problems, that I personally was unaware of.”

    Sebastien Bubeck, an OpenAI researcher who’d also been touting GPT-5’s accomplishments, then acknowledged that “only solutions in the literature were found,” but he suggested this remains a real accomplishment: “I know how hard it is to search the literature.”

    Anthony Ha


  • This Scientist Thinks an A.I. Could Win a Nobel Prize by 2050

    Hiroaki Kitano launched the Nobel Turing Challenge back in 2016. Courtesy Sony Computer Science Laboratories

    For more than a century, early October has marked the arrival of Nobel Prize announcements recognizing achievements across sciences, literature and peace. Recipients vary by nationality, age and gender but share one thing in common: they’re human. That could change in the coming decades if the team behind the Nobel Turing Challenge succeeds.

    Launched in 2016 by Japanese scientist Hiroaki Kitano, the challenge aims to spur the creation of an autonomous A.I. system capable of making a Nobel Prize-worthy discovery by 2050. Kitano was inspired to start the endeavor after concluding that progress in complex fields like systems biology might eventually require an A.I. scientist or A.I.-human hybrid. “After 30 years of research, I realized that biological systems may be too complex and vast and overwhelm human cognitive capabilities,” Kitano told Observer.

    Kitano has long worked at the intersection of science and machine learning. In the 1980s and early 1990s, he researched A.I. systems at Carnegie Mellon University. More recently, he served as the chief technology officer of Sony Group Corporation from 2022 to 2024 and now holds the title of chief technology fellow. He’s also CEO of Sony Computer Science Laboratories, a unit focused on cutting-edge research.

    The broader science community initially greeted the Nobel Turing Challenge with a mix of excitement and skepticism. This didn’t faze Kitano, who faced similar reactions in 1993 when he co-founded RoboCup, an international robotics competition challenging developers to build a robotic football team capable of defeating the best human players by 2050.

    “Any grand challenge will face such mixed reactions,” he said. “Otherwise, it is not challenging enough.”

    Today, Kitano’s goal seems less far-fetched. A.I. already plays a growing role in the work of recent Nobel Prize winners—albeit with human oversight. Last year, the Nobel Prize in Physics went to A.I. researchers Geoffrey Hinton and John Hopfield for their contributions to neural network training. Two of last year’s Chemistry laureates, Google DeepMind’s Demis Hassabis and John Jumper, were recognized for developing AlphaFold, an A.I. model that predicts protein structures.

    The Nobel Turing Challenge has two main objectives. First, an A.I. system must autonomously handle every stage of scientific research: defining questions, generating hypotheses, planning and executing experiments, and forming new questions based on the results. Second, in a nod to the Turing test, the challenge aims to see whether such an A.I. scientist could perform so convincingly that peers—and even the Nobel Prize selection committee—would not realize it’s a machine.

    Kitano believes A.I. is most likely to earn a Nobel Prize in physiology or medicine, chemistry, or physics, but he admits there’s still a long way to go despite rapid progress in recent years. Creating a system capable of generating large-scale hypotheses and running fully automated robotic experiments remains a formidable challenge. “We are in the early stage of the game,” he said.

    Still, the challenge’s stated goal—to have an A.I. win a Nobel Prize—isn’t technically possible. The awards, established in 1895 through the will of inventor Alfred Nobel, can only be granted to a living person, organization or institution. Even so, Kitano hopes his initiative might eventually influence how the Nobel committees make decisions.

    “I think if [the] Nobel committee created an internal rule to check if the candidate is human or A.I. before the award decision, that would be our win.”

    Alexandra Tremayne-Pengelly


  • AI is having its Nobel moment. Do scientists need the tech industry to sustain it?

    Hours after the artificial intelligence pioneer Geoffrey Hinton won a Nobel Prize in physics, he drove a rented car to Google’s California headquarters to celebrate.

    Hinton doesn’t work at Google anymore. Nor did the longtime professor at the University of Toronto do his pioneering research at the tech giant.

    But his impromptu party reflected AI’s moment as a commercial blockbuster that has also reached the pinnacles of scientific recognition.

    That was Tuesday. Then, early Wednesday, two employees of Google’s AI division won a Nobel Prize in chemistry for using AI to predict and design novel proteins.

    “This is really a testament to the power of computer science and artificial intelligence,” said Jeanette Wing, a professor of computer science at Columbia University.

    Asked about the historic back-to-back science awards for AI work in an email Wednesday, Hinton said only: “Neural networks are the future.”

    It didn’t always seem that way for researchers who decades ago experimented with interconnected computer nodes inspired by neurons in the human brain. Hinton shares this year’s physics Nobel with another scientist, John Hopfield, for helping develop those building blocks of machine learning.

    Neural network advances came from “basic, curiosity-driven research,” Hinton said at a press conference after his win. “Not out of throwing money at applied problems, but actually letting scientists follow their curiosity to try and understand things.”

    Such work started well before Google existed. But a bountiful tech industry has now made it easier for AI scientists to pursue their ideas even as it has challenged them with new ethical questions about the societal impacts of their work.

    One reason why the current wave of AI research is so closely tied to the tech industry is that only a handful of corporations have the resources to build the most powerful AI systems.

    “These discoveries and this capability could not happen without humongous computational power and humongous amounts of digital data,” Wing said. “There are very few companies — tech companies — that have that kind of computational power. Google is one. Microsoft is another.”

    The chemistry Nobel Prize awarded Wednesday went to Demis Hassabis and John Jumper of Google’s London-based DeepMind laboratory along with researcher David Baker at the University of Washington for work that could help discover new medicines.

    Hassabis, the CEO and co-founder of DeepMind, which Google acquired in 2014, told the AP in an interview Wednesday his dream was to model his research laboratory on the “incredible storied history” of Bell Labs. Started in 1925, the New Jersey-based industrial lab was the workplace of multiple Nobel-winning scientists over several decades who helped develop modern computing and telecommunications.

    “I wanted to recreate a modern day industrial research lab that really did cutting-edge research,” Hassabis said. “But of course, that needs a lot of patience and a lot of support. We’ve had that from Google and it’s been amazing.”

    Hinton joined Google late in his career and quit last year so he could talk more freely about his concerns about AI’s dangers, particularly what happens if humans lose control of machines that become smarter than us. But he stops short of criticizing his former employer.

Hinton, 76, said he was staying in a cheap hotel in Palo Alto, California, when the Nobel committee woke him up with a phone call early Tuesday morning, leading him to cancel a medical appointment scheduled for later that day.

    By the time the sleep-deprived scientist reached the Google campus in nearby Mountain View, he “seemed pretty lively and not very tired at all” as colleagues popped bottles of champagne, said computer scientist Richard Zemel, a former doctoral student of Hinton’s who joined him at the Google party Tuesday.

    “Obviously there are these big companies now that are trying to cash in on all the commercial success and that is exciting,” said Zemel, now a Columbia professor.

    But Zemel said what’s more important to Hinton and his closest colleagues has been what the Nobel recognition means to the fundamental research they spent decades trying to advance.

    Guests included Google executives and another former Hinton student, Ilya Sutskever, a co-founder and former chief scientist and board member at ChatGPT maker OpenAI. Sutskever helped lead a group of board members who briefly ousted OpenAI CEO Sam Altman last year in turmoil that has symbolized the industry’s conflicts.

    An hour before the party, Hinton used his Nobel bully pulpit to throw shade at OpenAI during opening remarks at a virtual press conference organized by the University of Toronto in which he thanked former mentors and students.

    “I’m particularly proud of the fact that one of my students fired Sam Altman,” Hinton said.

    Asked to elaborate, Hinton said OpenAI started with a primary objective to develop better-than-human artificial general intelligence “and ensure that it was safe.”

    “And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate,” Hinton said.

    In response, OpenAI said in a statement that it is “proud of delivering the most capable and safest AI systems” and that they “safely serve hundreds of millions of people each week.”

    Conflicts are likely to persist in a field where building even a relatively modest AI system requires resources “well beyond those of your typical research university,” said Michael Kearns, a professor of computer science at the University of Pennsylvania.

    But Kearns, who sits on the committee that picks the winners of computer science’s top prize — the Turing Award — said this week marks a “great victory for interdisciplinary research” that was decades in the making.

    Hinton is only the second person to win both a Nobel and Turing. The first, Turing-winning political scientist Herbert Simon, started working on what he called “computer simulation of human cognition” in the 1950s and won the Nobel economics prize in 1978 for his study of organizational decision-making.

    Wing, who met Simon in her early career, said scientists are still just at the tip of finding ways to apply computing’s most powerful capabilities to other fields.

    “We’re just at the beginning in terms of scientific discovery using AI,” she said.

    ——

    AP Business Writer Kelvin Chan contributed to this report.
