ReportWire

Tag: World Labs

  • Fei-Fei Li and Andrej Karpathy Back a New A.I. Use Case: Simulating Human Behavior

    A.I. pioneer Fei-Fei Li is lending her support to Simile’s effort to simulate human behavior at scale. John Nacion/Variety via Getty Images

    Every three months, public companies brace for analyst questions during quarterly earnings calls. But what if firms could predict these queries in advance and rehearse their responses? That’s one of the capabilities touted by Simile, a new A.I. startup spun out of Stanford and backed by acclaimed researcher Fei-Fei Li and OpenAI co-founder Andrej Karpathy.

    Simile emerged from stealth yesterday (Feb. 12) with $100 million in funding from a round led by Index Ventures. Alongside Li and Karpathy, the startup—which hasn’t disclosed its valuation—also counts investors including Quora co-founder Adam D’Angelo and Scott Belsky, a partner at A24 Films.

    Li and Karpathy both have close ties to Simile’s founding team, which includes Stanford researchers Joon Park, Percy Liang and Michael Bernstein. Li is the co-director of Stanford’s Human-Centered A.I. Institute and advised Karpathy during his Ph.D. studies at the university. She is widely known for foundational work such as ImageNet, a large-scale image database that helped drive major breakthroughs in computer vision. Karpathy and Bernstein also contributed to that project.

    Simile’s mission of using A.I. to reflect and model societal behavior taps into an underexplored research area, according to Karpathy, who previously worked at OpenAI and Tesla before launching his own education-focused A.I. startup. While large language models typically present a single, cohesive personality, Karpathy argues they are actually trained on data drawn from vast numbers of people. “Why not lean into that statistical power: Why simulate one ‘person’ when you could try to simulate a population?” he wrote in a post on X.

    That idea underpins Simile’s broader goal. The Palo Alto-based startup aims to simulate the real-world effects of major decisions, from public policy to product launches, across virtual populations that mirror human behavior. The team has already tested this concept on a smaller scale through projects like Smallville, a 2023 Stanford experiment in which 25 autonomous A.I. agents interacted in a virtual environment.
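    The Smallville setup — autonomous agents with their own state interacting over time — can be pictured with a toy sketch. All names, topics and behaviors below are invented for illustration; the actual system drives each agent with a language model rather than random choice:

```python
import random
from collections import Counter

class Agent:
    """A toy stand-in for a Smallville-style autonomous agent."""
    def __init__(self, name, interests):
        self.name = name
        self.interests = interests  # topics this persona tends to raise
        self.memory = []            # running log of events it has observed

    def act(self):
        # Each turn, the agent raises one of its topics of interest.
        return random.choice(self.interests)

    def observe(self, event):
        self.memory.append(event)

def simulate(agents, steps=10, seed=0):
    """Run a toy interaction loop and tally which topics dominate."""
    random.seed(seed)
    topics = Counter()
    for _ in range(steps):
        for agent in agents:
            topic = agent.act()
            topics[topic] += 1
            for other in agents:
                if other is not agent:
                    other.observe((agent.name, topic))
    return topics

agents = [
    Agent("A", ["pricing", "guidance"]),
    Agent("B", ["guidance", "margins"]),
    Agent("C", ["margins", "pricing"]),
]
print(simulate(agents, steps=5))
```

    Scaled up from three hand-written personas to hundreds of thousands of survey-derived ones, the same aggregate-the-population idea is what lets a simulator guess which questions a virtual panel of analysts will raise.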

    Now, Simile is scaling the approach for business use. After spending the past seven months developing its model, the company is already working with clients on applications ranging from product development to litigation forecasting. CVS Health Corporation, for example, uses Simile to create simulated focus groups, while Gallup uses the platform to build digital polling panels. For earnings calls, Simile can predict about 80 percent of the questions that analysts ultimately ask, said Park, the startup’s CEO, during a recent appearance on TBPN.

    At present, Simile’s models are based on data from hundreds of thousands of people who have signed up for its studies. Over time, the company hopes to expand that to simulations representing the world’s entire population of roughly 8 billion people.

    Simile joins a growing wave of A.I. companies focused on using simulation to model real-world scenarios. Much of the existing research in this space has centered on physical systems, such as robotics and autonomous vehicles, through “world model” platforms developed by firms like Google and Nvidia.

    One of the most prominent figures in world models is Li herself. In 2024, she took a leave of absence from Stanford to launch World Labs, a startup that builds 3D digital environments from image and text prompts. The company has raised $230 million to date and is valued at more than $1 billion.

    Alexandra Tremayne-Pengelly

  • A new test for AI labs: Are you even trying to make money? | TechCrunch

    We’re in a unique moment for AI companies building their own foundation models.

    First, there is a whole generation of industry veterans who made their name at major tech companies and are now going solo. You also have legendary researchers with immense experience but ambiguous commercial aspirations. There’s a clear chance that at least some of these new labs will become OpenAI-sized behemoths, but there’s also room for them to putter around doing interesting research without worrying too much about commercialization.

    The end result? It’s getting hard to tell who is actually trying to make money. 

    To make things simpler, I’m proposing a kind of sliding scale for any company making a foundation model. It’s a five-level scale where it doesn’t matter if you’re actually making money – only if you’re trying to. The idea here is to measure ambition, not success. 

    Think of it in these terms: 

    • Level 5: We are already making millions of dollars every day, thank you very much. 
    • Level 4: We have a detailed multi-stage plan to become the richest human beings on Earth. 
    • Level 3: We have many promising product ideas, which will be revealed in the fullness of time. 
    • Level 2: We have the outlines of a concept of a plan. 
    • Level 1: True wealth is when you love yourself. 

    The big names are all at Level 5: OpenAI, Anthropic, Gemini, and so on. The scale gets more interesting with the new generation of labs launching now, with big dreams but ambitions that can be harder to read. 

    Crucially, the people involved in these labs can generally choose whatever level they want. There’s so much money in AI right now that no one is going to interrogate them for a business plan. Even if the lab is just a research project, investors will count themselves happy to be involved. If you aren’t particularly motivated to become a billionaire, you might well live a happier life at Level 2 than at Level 5. 

    The problems arise because it isn’t always clear where an AI lab lands on the scale — and a lot of the AI industry’s current drama comes from that confusion. Much of the anxiety over OpenAI’s conversion from a non-profit came because the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the other side, you might argue that Meta’s early AI research was firmly at Level 2, when what the company really wanted was Level 4. 

    With that in mind, here’s a quick rundown of four of the biggest contemporary AI labs, and how they measure up on the scale. 

    Humans& 

    Humans& was the big AI news this week, and part of the inspiration for coming up with this whole scale. The founders have a compelling pitch for the next generation of AI models, with scaling laws giving way to an emphasis on communication and coordination tools.  

    But for all the glowing press, Humans& has been coy about how that would translate into actual monetizable products. It seems it does want to build products; the team just won’t commit to anything specific. The most they’ve said is that they will be building some kind of AI workplace tool, replacing products like Slack, Jira and Google Docs but also redefining how these other tools work at a fundamental level. Workplace software for a post-software workplace! 

    It’s my job to know what this stuff means, and I’m still pretty confused about that last part. But it is just specific enough that I think we can put them at Level 3. 

    Thinking Machines Lab 

    This is a very hard one to rate! Generally, if you have a former CTO and project lead for ChatGPT raising a $2 billion seed round, you have to assume there is a pretty specific roadmap. Mira Murati does not strike me as someone who jumps in without a plan, so coming into 2026, I would have felt good putting TML at Level 4. 

    But then the last two weeks happened. The departure of CTO and co-founder Barret Zoph has gotten most of the headlines, due in part to the special circumstances involved. But at least five other employees left with Zoph, many citing concerns about the direction of the company. Just one year in, nearly half the executives on TML’s founding team are no longer working there. One way to read events is that they thought they had a solid plan to become a world-class AI lab, only to find the plan wasn’t as solid as they thought. Or in terms of the scale, they wanted a Level 4 lab but realized they were at Level 2 or 3. 

    There still isn’t quite enough evidence to justify a downgrade, but it’s getting close. 

    World Labs 

    Fei-Fei Li is one of the most respected names in AI research, best known for establishing the ImageNet challenge that kickstarted contemporary deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she co-directs two different AI labs. I won’t bore you by going through all the different honors and academy positions, but it’s enough to say that if she wanted, she could spend the rest of her life just receiving awards and being told how great she is. Her book is pretty good too! 

    So in 2024, when Li announced she had raised $230 million for a spatial AI company called World Labs, you might think we were operating at Level 2 or lower. 

    But that was over a year ago, which is a long time in the AI world. Since then, World Labs has shipped both a full world-generating model and a commercialized product built on top of it. Over the same period, we’ve seen real signs of demand for world-modeling from both the video game and special effects industries — and none of the major labs have built anything that can compete. The result looks an awful lot like a Level 4 company, perhaps soon to graduate to Level 5.

    Safe Superintelligence (SSI) 

    Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) seems like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI insulated from commercial pressures, to the point of turning down an attempted acquisition from Meta. There are no product cycles and, aside from the still-baking superintelligent foundation model, there doesn’t seem to be any product at all. With this pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and every indication is that this is a genuinely scientific project at heart.  

    That said, the AI world moves fast — and it would be foolish to count SSI out of the commercial realm entirely. On his recent Dwarkesh appearance, Sutskever gave two reasons why SSI might pivot, either “if timelines turned out to be long, which they might” or because “there is a lot of value in the best and most powerful AI being out there impacting the world.” In other words, if the research either goes very well or very badly, we might see SSI jump up a few levels in a hurry. 
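    Summing up, the whole rubric fits in a few lines of code. This is a throwaway sketch; the level assignments are the article's own judgment calls, with Thinking Machines Lab kept at its pre-drama rating:

```python
# The five-level monetization-ambition scale, as laid out above.
SCALE = {
    5: "We are already making millions of dollars every day, thank you very much",
    4: "We have a detailed multi-stage plan to become the richest human beings on Earth",
    3: "We have many promising product ideas, which will be revealed in the fullness of time",
    2: "We have the outlines of a concept of a plan",
    1: "True wealth is when you love yourself",
}

# Ratings from the rundown above (TML not yet downgraded).
LAB_RATINGS = {
    "Humans&": 3,
    "Thinking Machines Lab": 4,
    "World Labs": 4,
    "Safe Superintelligence": 1,
}

def describe(lab):
    """Render one lab's rating in a readable line."""
    level = LAB_RATINGS[lab]
    return f"{lab}: Level {level} ({SCALE[level]})"

for lab in LAB_RATINGS:
    print(describe(lab))
```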

    Russell Brandom

  • Fei-Fei Li’s World Labs speeds up the world model race with Marble, its first commercial product | TechCrunch

    World Labs, the startup founded by AI pioneer Fei-Fei Li, is launching its first commercial world model product. Marble is now available via freemium and paid tiers that let users turn text prompts, photos, videos, 3D layouts or panoramas into editable, downloadable 3D environments.

    The launch of the generative world model, first released in limited beta preview two months ago, comes a little over a year after World Labs came out of stealth with $230 million in funding, and puts the startup ahead of competitors building world models. World models are AI systems that generate an internal representation of an environment, and can be used to predict future outcomes and plan actions.

    Startups like Decart and Odyssey have released free demos, and Google’s Genie is still in limited research preview. Marble differs from these — and even World Labs’s own real-time model, RTFM — because it creates persistent, downloadable 3D environments rather than generating worlds on-the-fly as you explore. This, the company says, results in less morphing or inconsistency, and lets users export worlds as Gaussian splats, meshes or videos.
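    Gaussian-splat exports are commonly distributed as PLY files, with one "vertex" record per splat carrying position, opacity, scale and rotation properties. World Labs hasn't published its exact export schema, so the sketch below just parses a generic PLY header; the sample property names follow the de facto 3D Gaussian splatting convention, not a confirmed Marble format:

```python
def parse_ply_header(data: bytes):
    """Parse a PLY file header; return (element_counts, properties)."""
    header, _, _ = data.partition(b"end_header\n")
    counts, props, current = {}, {}, None
    for line in header.decode("ascii").splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "element":     # e.g. "element vertex 2"
            current = parts[1]
            counts[current] = int(parts[2])
            props[current] = []
        elif parts[0] == "property":  # e.g. "property float x"
            props[current].append(parts[-1])
    return counts, props

# A minimal header in the style of a Gaussian-splat export (invented data).
sample = b"""ply
format binary_little_endian 1.0
element vertex 2
property float x
property float y
property float z
property float opacity
property float scale_0
property float rot_0
end_header
"""
counts, props = parse_ply_header(sample)
print(counts)           # {'vertex': 2}
print(props["vertex"])  # ['x', 'y', 'z', 'opacity', 'scale_0', 'rot_0']
```

    Because the splats are persistent data on disk rather than frames generated on the fly, downstream tools can re-render or convert them without re-running the model.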

    Marble is also the first model of its kind to offer AI-native editing tools and a hybrid 3D editor that lets users block out spatial structures before AI fills in the visual details.  

    Image Credits: World Labs

    “This is a brand new category of model that’s generating 3D worlds, and this is something that’s going to get better over time. It’s something we’ve already improved quite a lot,” Justin Johnson, co-founder of World Labs, told TechCrunch. 

    Last December, World Labs showed how its early models could generate interactive 3D scenes based on a single image. While impressive, the somewhat cartoonish scenes weren’t fully explorable since movements were limited to a small area, and there were occasional rendering errors. 

    In my trial of the beta preview, I found Marble generated impressive worlds from image prompts alone — from game-like environments to photorealistic versions of my living room. Scenes morphed at the edges, though that’s apparently been improved in today’s launch. That said, a world I’d generated in the beta using a single prompt looked better and matched my intent more closely than the same prompt does now. 

    I haven’t yet tested the editing features, though Johnson says they make Marble practical for near-term gaming, VFX and virtual reality (VR) projects. 

    “One of our main themes for Marble going forward is creative control,” Johnson said. “There should always be a quick pathway to generate something, but you should be able to dive even deeper and get a lot of control over the things that you’re generating. You don’t want the machine to just take the wheel and pull all that creativity away from you.” 

    Marble’s input-to-output pipeline. Image Credits: World Labs

    Marble’s take on creative control starts with input flexibility. The beta only accepted single images, forcing the model to invent unseen details for a 360-degree view. With the full launch, users can now upload multiple images or short clips to show a space from different angles and have the model generate fairly realistic digital twins. 

    Then we have Chisel, an experimental 3D editor that lets users block out coarse spatial layouts (think walls, boxes, or planes) and then add text prompts to guide the visual style. Marble generates the world, decoupling structure from style — similar to how HTML provides the structure of a website and CSS adds in color. Unlike text-based editing, Chisel lets you directly manipulate objects.  

    Marble’s Chisel feature decouples structure from style. Image Credits: World Labs

    “I can just go in there and grab the 3D block that represents the couch and move it somewhere else,” Johnson said. 
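    The structure/style split Johnson describes can be pictured as a scene holding two separate inputs: coarse layout blocks you manipulate directly, and a style prompt the model applies over them. Neither this schema nor any of its names are World Labs' actual API; it's just a sketch of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A coarse layout primitive: a labeled box with position and size."""
    label: str
    position: tuple  # (x, y, z) in scene units
    size: tuple      # (width, height, depth)

@dataclass
class Scene:
    """Structure (blocks) kept separate from style (a text prompt),
    analogous to HTML markup vs. a CSS stylesheet."""
    blocks: list = field(default_factory=list)
    style_prompt: str = ""

    def move(self, label, new_position):
        # Direct manipulation: grab a block and move it, no re-prompting.
        for b in self.blocks:
            if b.label == label:
                b.position = new_position
                return b
        raise KeyError(label)

room = Scene(style_prompt="cozy mid-century living room, warm light")
room.blocks.append(Block("couch", position=(0, 0, 0), size=(2, 1, 1)))
room.move("couch", (3, 0, 0))        # restage the couch; style untouched
room.style_prompt = "brutalist concrete loft"  # restyle; layout untouched
```

    The design point is that either half can change without invalidating the other, which is what makes "grab the 3D block that represents the couch" possible.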

    Another new feature that gives you more editing control is the ability to expand a world.  

    “Once you generate a world, you can expand it up to once,” Johnson said. “When you move to a piece of the world that’s starting to break apart, you can basically tell the model to expand there or generate more world in the vicinity of where you currently are, and then it can add more detail in that region.”

    Users who want to create extremely large spaces can combine multiple worlds with “composer mode.” Johnson demonstrated this for me with two worlds he had already built – a room made of cheese with grape chairs, and another of a futuristic meeting room in space.

    The path to spatial intelligence

    Spaceship environment created in Marble with text prompt overlaid. Note how the lights are realistically reflected in the hub’s walls. Image Credits: World Labs/TechCrunch

    Marble is available via four subscription tiers: Free (four generations from text, image, or panorama), Standard ($20/month, 12 generations plus multi-image/video input and advanced editing), Pro ($35/month, 25 generations with scene expansion and commercial rights), and Max ($95/month, all features and 75 generations). 
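    As a quick reference, the tiers reduce to a small config mapping. Prices and quotas are as reported at launch; the helper function is purely illustrative:

```python
# Marble subscription tiers at launch (monthly price in USD).
MARBLE_TIERS = {
    "Free":     {"price": 0,  "generations": 4,  "commercial": False},
    "Standard": {"price": 20, "generations": 12, "commercial": False},
    "Pro":      {"price": 35, "generations": 25, "commercial": True},
    "Max":      {"price": 95, "generations": 75, "commercial": True},
}

def cheapest_commercial_tier():
    """Lowest-priced tier that includes commercial rights."""
    eligible = {k: v for k, v in MARBLE_TIERS.items() if v["commercial"]}
    return min(eligible, key=lambda k: eligible[k]["price"])

print(cheapest_commercial_tier())  # Pro
```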

    Johnson thinks the initial use cases for Marble will be gaming, visual effects for film, and virtual reality.  

    Game developers have mixed feelings about the tech. A recent Game Developers Conference survey found a third of respondents believed generative AI has a negative impact on the games industry – 12% more than the same survey found a year earlier. Intellectual property theft, energy consumption and a decrease in quality from AI-generated content were among the top concerns aired. And last year, a Wired investigation found game studios like Activision Blizzard are using AI to cut corners and combat attrition. 

    In gaming, Johnson sees developers using Marble to generate background environments and ambient spaces and then importing those assets into game engines like Unity or Unreal Engine to add interactive elements, logic and code. 

    “It’s not designed to replace the entire existing pipeline for gaming, but to just give you assets that you can drop into that pipeline,” he said.  

    For VFX work, Marble sidesteps the inconsistency and poor camera control that plague AI video generators, per Johnson. Its 3D assets let artists stage scenes and control camera movements with frame-perfect precision, he said. 

    While Johnson said World Labs isn’t focusing on virtual reality (VR) applications right now, he noted the industry is “starved for content” and excited about the launch. Marble is already compatible with the Vision Pro and Quest 3 VR headsets, and every generated world can be viewed in VR today.

    Marble may also have potential use cases for robotics. Johnson noted that unlike image and video generation, robotics doesn’t have the benefit of a large repository of training data. But with generators like Marble, it becomes easier to simulate training environments.  

    According to a recent manifesto by Fei-Fei Li, CEO and co-founder of World Labs, Marble represents the first step towards creating “a truly spatially intelligent world model.” 

    Li believes “the next generation of world models will enable machines to achieve spatial intelligence on an entirely new level.” If large language models can teach machines to read and write, Li hopes systems like Marble can teach them to see and build. She says the ability to understand how things exist and interact in three-dimensional spaces can eventually help machines make breakthroughs beyond gaming and robotics, and even into science and medicine. 

    “Our dreams of truly intelligent machines will not be complete without spatial intelligence,” Li wrote.

    Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com or Russell Brandom at russell.brandom@techcrunch.com. For secure communication, you can contact them via Signal at @rebeccabellan.491 and russellbrandom.49.

    Rebecca Bellan
