ReportWire

Tag: Thinking Machines Lab

  • A new test for AI labs: Are you even trying to make money? | TechCrunch


    We’re in a unique moment for AI companies building their own foundation model.

    First, there is a whole generation of industry veterans who made their name at major tech companies and are now going solo. You also have legendary researchers with immense experience but ambiguous commercial aspirations. There’s a clear chance that at least some of these new labs will become OpenAI-sized behemoths, but there’s also room for them to putter around doing interesting research without worrying too much about commercialization.

    The end result? It’s getting hard to tell who is actually trying to make money. 

    To make things simpler, I’m proposing a kind of sliding scale for any company making a foundation model. It’s a five-level scale where it doesn’t matter if you’re actually making money – only if you’re trying to. The idea here is to measure ambition, not success. 

    Think of it in these terms: 

    • Level 5: We are already making millions of dollars every day, thank you very much. 
    • Level 4: We have a detailed multi-stage plan to become the richest human beings on Earth. 
    • Level 3: We have many promising product ideas, which will be revealed in the fullness of time. 
    • Level 2: We have the outlines of a concept of a plan. 
    • Level 1: True wealth is when you love yourself. 

    The big names are all at Level 5: OpenAI, Anthropic, Gemini, and so on. The scale gets more interesting with the new generation of labs launching now, with big dreams but ambitions that can be harder to read. 

    Crucially, the people involved in these labs can generally choose whatever level they want. There’s so much money in AI right now that no one is going to interrogate them for a business plan. Even if the lab is just a research project, investors will count themselves happy to be involved. If you aren’t particularly motivated to become a billionaire, you might well live a happier life at Level 2 than at Level 5. 


    The problems arise because it isn’t always clear where an AI lab lands on the scale — and a lot of the AI industry’s current drama comes from that confusion. Much of the anxiety over OpenAI’s conversion from a non-profit came because the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the other side, you might argue that Meta’s early AI research was firmly at Level 2, when what the company really wanted was Level 4. 

    With that in mind, here’s a quick rundown of four of the biggest contemporary AI labs, and how they measure up on the scale. 

    Humans& 

    Humans& was the big AI news this week, and part of the inspiration for coming up with this whole scale. The founders have a compelling pitch for the next generation of AI models, with scaling laws giving way to an emphasis on communication and coordination tools.  

    But for all the glowing press, Humans& has been coy about how that would translate into actual monetizable products. It seems it does want to build products; the team just won’t commit to anything specific. The most they’ve said is that they will be building some kind of AI workplace tool, replacing products like Slack, Jira and Google Docs but also redefining how these other tools work at a fundamental level. Workplace software for a post-software workplace! 

    It’s my job to know what this stuff means, and I’m still pretty confused about that last part. But it is just specific enough that I think we can put them at Level 3. 

    Thinking Machines Lab 

    This is a very hard one to rate! Generally, if you have a former CTO and project lead for ChatGPT raising a $2 billion seed round, you have to assume there is a pretty specific roadmap. Mira Murati does not strike me as someone who jumps in without a plan, so coming into 2026, I would have felt good putting TML at Level 4. 

    But then the last two weeks happened. The departure of CTO and co-founder Barret Zoph has gotten most of the headlines, due in part to the special circumstances involved. But at least five other employees left with Zoph, many citing concerns about the direction of the company. Just one year in, nearly half the executives on TML’s founding team are no longer working there. One way to read events is that they thought they had a solid plan to become a world-class AI lab, only to find the plan wasn’t as solid as they thought. Or in terms of the scale, they wanted a Level 4 lab but realized they were at Level 2 or 3. 

    There still isn’t quite enough evidence to justify a downgrade, but it’s getting close. 

    World Labs 

    Fei-Fei Li is one of the most respected names in AI research, best known for establishing the ImageNet challenge that kickstarted contemporary deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she co-directs two different AI labs. I won’t bore you by going through all the different honors and academy positions, but it’s enough to say that if she wanted, she could spend the rest of her life just receiving awards and being told how great she is. Her book is pretty good too! 

    So in 2024, when Li announced she had raised $230 million for a spatial AI company called World Labs, you might think we were operating at Level 2 or lower. 

But that was over a year ago, which is a long time in the AI world. Since then, World Labs has shipped both a full world-generating model and a commercialized product built on top of it. Over the same period, we’ve seen real signs of demand for world-modeling from both the video game and special effects industries — and none of the major labs have built anything that can compete. The result looks an awful lot like a Level 4 company, perhaps soon to graduate to Level 5.

    Safe Superintelligence (SSI) 

    Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) seems like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI insulated from commercial pressures, to the point of turning down an attempted acquisition from Meta. There are no product cycles and, aside from the still-baking superintelligent foundation model, there doesn’t seem to be any product at all. With this pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and every indication is that this is a genuinely scientific project at heart.  

    That said, the AI world moves fast — and it would be foolish to count SSI out of the commercial realm entirely. On his recent Dwarkesh appearance, Sutskever gave two reasons why SSI might pivot, either “if timelines turned out to be long, which they might” or because “there is a lot of value in the best and most powerful AI being out there impacting the world.” In other words, if the research either goes very well or very badly, we might see SSI jump up a few levels in a hurry. 


    Russell Brandom


  • Thinking Machines Lab co-founder Andrew Tulloch heads to Meta | TechCrunch


    Thinking Machines Lab, the AI startup led by former OpenAI CTO Mira Murati, has lost one of its co-founders to Meta.

The Wall Street Journal reports that AI researcher Andrew Tulloch announced his departure to employees in a message on Friday. A Thinking Machines Lab spokesperson confirmed Tulloch’s departure to the WSJ, saying he “has decided to pursue a different path for personal reasons.”

    Back in August, the WSJ reported that Mark Zuckerberg’s aggressive AI recruiting blitz included an offer to acquire Thinking Machines Lab — and when that failed, Zuckerberg reportedly tried to lure Tulloch with a compensation package that could have been worth up to $1.5 billion over at least six years. (At the time, a Meta spokesperson said that the WSJ’s description of the offer was “inaccurate and ridiculous.”)

    Tulloch previously worked at OpenAI and Facebook’s AI Research Group.


    Anthony Ha


  • Thinking Machines Lab wants to make AI models more consistent | TechCrunch


    There’s been great interest in what Mira Murati’s Thinking Machines Lab is building with its $2 billion in seed funding and the all-star team of former OpenAI researchers who have joined the lab. In a blog post published on Wednesday, Murati’s research lab gave the world its first look into one of its projects: creating AI models with reproducible responses.

The research blog post, titled “Defeating Nondeterminism in LLM Inference,” tries to unpack the root cause of what introduces randomness in AI model responses. For example, ask ChatGPT the same question a few times over, and you’re likely to get a wide range of answers. This has largely been accepted in the AI community as a fact — today’s AI models are considered to be non-deterministic systems — but Thinking Machines Lab sees this as a solvable problem.

    The post, authored by Thinking Machines Lab researcher Horace He, argues that the root cause of AI models’ randomness is the way GPU kernels — the small programs that run inside of Nvidia’s computer chips — are stitched together in inference processing (everything that happens after you press enter in ChatGPT). He suggests that by carefully controlling this layer of orchestration, it’s possible to make AI models more deterministic.
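The reason the order of kernel operations matters at all comes down to a basic numerical fact: floating-point addition is not associative, so summing the same numbers in a different order can change the low-order bits of the result. Here is a minimal Python illustration of that underlying fact (not Thinking Machines Lab’s code):

```python
# Floating-point addition is not associative: reducing the same values
# in a different order can produce a slightly different result. On a GPU,
# the reduction order inside a kernel can vary with batch size and thread
# scheduling, which is one way identical inputs yield different outputs.
vals = [0.1, 0.2, 0.3]

left_to_right = (vals[0] + vals[1]) + vals[2]   # one reduction order
right_to_left = vals[0] + (vals[1] + vals[2])   # another reduction order

print(left_to_right)                   # 0.6000000000000001
print(right_to_left)                   # 0.6
print(left_to_right == right_to_left)  # False
```

On a GPU, thousands of threads accumulate partial results concurrently, and the order of that accumulation can shift from run to run; pinning down that ordering at the orchestration layer is the kind of control the post argues makes deterministic inference possible.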

    Beyond creating more reliable responses for enterprises and scientists, He notes that getting AI models to generate reproducible responses could also improve reinforcement learning (RL) training. RL is the process of rewarding AI models for correct answers, but if the answers are all slightly different, then the data gets a bit noisy. Creating more consistent AI model responses could make the whole RL process “smoother,” according to He. Thinking Machines Lab has told investors that it plans to use RL to customize AI models for businesses, The Information previously reported.

    Murati, OpenAI’s former chief technology officer, said in July that Thinking Machines Lab’s first product will be unveiled in the coming months, and that it will be “useful for researchers and startups developing custom models.” It’s still unclear what that product is, or whether it will use techniques from this research to generate more reproducible responses.

    Thinking Machines Lab has also said that it plans to frequently publish blog posts, code, and other information about its research in an effort to “benefit the public, but also improve our own research culture.” This post, the first in the company’s new blog series called “Connectionism,” seems to be part of that effort. OpenAI also made a commitment to open research when it was founded, but the company has become more closed off as it’s become larger. We’ll see if Murati’s research lab stays true to this claim.

The research blog offers a rare glimpse inside one of Silicon Valley’s most secretive AI startups. While it doesn’t exactly reveal where the technology is going, it indicates that Thinking Machines Lab is tackling some of the largest questions on the frontier of AI research. The real test is whether Thinking Machines Lab can solve these problems, and make products around its research to justify its $12 billion valuation.



    Maxwell Zeff
