ReportWire

Tag: openai

  • Ruoming Pang, Meta’s $200M Superintelligence Hire, Jumps to OpenAI After Just 7 Months


    Sam Altman reportedly courted Pang for months. Andrew Harnik/Getty Images

    Ruoming Pang, a prominent A.I. researcher recruited by Meta last year with a pay package reportedly worth more than $200 million, has left the company to join OpenAI, The Information reported yesterday (Feb. 25). His departure marks another setback for Mark Zuckerberg’s elite A.I. team and underscores the escalating A.I. talent war. Pang joined Meta Superintelligence Labs (MSL) in July after being poached from Apple. He remained at Meta for only seven months.

    Zuckerberg unveiled MSL in July 2025 as the centerpiece of Meta’s push to develop advanced A.I. systems. The lab quickly became the focus of an aggressive—and costly—hiring spree. Alexandr Wang, founder of Scale AI, now leads the group as Meta’s A.I. chief after Meta acquired 40 percent of his startup. Within MSL, a smaller, more secretive unit known as TBD Lab is tasked with building next-generation foundation models.

    Pang is originally from Shanghai and earned his undergraduate degree from Shanghai Jiao Tong University. He holds a master’s in computer science from the University of Southern California and earned a Ph.D. from Princeton University in 2006. Over the course of his career, Pang has worked on some of the most consequential A.I. systems in the industry, making him one of the more sought-after engineers in the field.

    At Apple, he spent nearly four years as a “senior distinguished engineer,” leading development of the foundation models behind Apple Intelligence. Before Apple, Pang spent roughly 15 years at Google DeepMind as a principal software engineer, where he worked on large-scale machine learning systems, including privacy-preserving technologies and speech recognition.

    OpenAI has not disclosed Pang’s title, scope of responsibilities or the terms of his compensation. The Sam Altman-led company reportedly courted him for months, so the package is likely substantial. OpenAI employees earn roughly $1.5 million in annual salary and equity, according to the Wall Street Journal. Pang is widely expected to continue working on foundation models and superintelligence research.

    For Meta, Pang’s exit complicates Zuckerberg’s ambition to dominate the superintelligence race. The company has successfully recruited high-profile researchers from OpenAI, Google and Anthropic. However, MSL has also seen a steady stream of departures in recent months.

    Among the most prominent was Yann LeCun, Meta’s chief A.I. scientist, who exited at the end of last year after more than a decade at the company. LeCun publicly criticized MSL chief Wang’s lack of experience with A.I. research.

    Other departures have been quieter but telling. Ethan Knight joined MSL for only a few weeks before moving to OpenAI last August—a stint so brief it never appeared on his LinkedIn profile. Bert Maher, a software engineer, left after 12 years at Meta to join Anthropic. Avi Verma, who had been expected to join Meta from OpenAI, ultimately backed out.

    Pang’s move is the latest signal that Silicon Valley’s A.I. talent war is intensifying. Even as talk of an A.I. bubble grows louder and tech companies rely on increasingly complex financial structures to sustain lofty valuations, leaders like Zuckerberg, Altman and Anthropic’s Dario Amodei show little sign of restraint. Instead, they are offering compensation packages worth tens or even hundreds of millions of dollars to persuade top researchers that their vision for superintelligence will prevail.


    Rachel Curry


  • The White House wants AI companies to cover rate hikes. Most have already said they would. | TechCrunch


    The proliferation of AI data centers plugging into the national electrical grid has helped increase consumer electricity prices, driving up the average national electricity price by more than 6% in the last year.

    That’s not a good look for the incumbents ahead of this fall’s elections, and President Donald Trump addressed the challenge in his State of the Union speech last night.

    “We’re telling the major tech companies that they have the obligation to provide for their own power needs,” Trump said. “They can build their own power plants as part of their factory, so that no one’s prices will go up.”

    The hyperscalers in question don’t need to be told. They have already made public commitments in recent weeks to cover electricity costs by building their own power sources, paying higher rates, or both, part of a broader effort to solve PR problems around data center expansion and win over skeptical communities.

    On January 11, Microsoft announced its policy “to ensure that the electricity cost of serving our datacenters is not passed on to residential customers.” On January 26, OpenAI committed to “paying its own way on energy, so that our operations don’t increase your energy prices.” On February 11, Anthropic made the same pledge to “cover electricity price increases that consumers face from our data centers.” And yesterday, Google announced the largest battery project in the world to support a data center in Minnesota.

    What these commitments mean in practice, and who will determine which data centers are responsible for which price increases, remains unknown. The White House has not released the text of the proposed pledge.

    “A handshake agreement with Big Tech over data center costs isn’t good enough,” Arizona Democratic Senator Mark Kelly said on social media. “Americans need a guarantee that energy prices won’t soar and communities have a say.”


    White House spokesperson Taylor Rodgers said that next week, companies will send representatives to formally sign the pledge at the White House. Amazon, Google, Meta, Microsoft, xAI, Oracle and OpenAI are reportedly among those set to attend. However, none of the companies have confirmed their attendance.

    Even if tech companies commit to taking on electricity costs, on-site power plants may not be a panacea—they can still have adverse impacts on the surrounding environment, and will stress supply chains for natural gas, turbines, photovoltaics and batteries, depending on how companies aim to power their compute.


    Tim Fernholz


  • India’s AI boom pushes firms to trade near-term revenue for users | TechCrunch


    Tech giants’ efforts to ramp up AI adoption in India may be about to hit a turning point, as companies end free promotions with hopes to convert the world’s fourth-largest economy into a windfall of paid subscribers.

    India became the world’s largest market for generative AI app downloads in 2025, according to market intelligence firm Sensor Tower, widening its lead over the U.S. as installs jumped 207% year-over-year.

    Companies including OpenAI, Google, and Perplexity rolled out extended free premium offers to accelerate user growth in the price-sensitive market. Leading AI firms have also backed India in its push to become a global artificial intelligence hub. A major AI summit in New Delhi last week was attended by leaders including OpenAI’s Sam Altman, Anthropic’s Dario Amodei, and Alphabet CEO Sundar Pichai — a sign of the country’s growing weight in the global AI race.

    Now, some of those early promotional pushes are winding down. Perplexity ended its bundled Pro offer with Indian telco Airtel in January, while OpenAI’s free ChatGPT Go access in India is no longer available, potentially setting the stage for a clearer test of how many newly acquired users convert to paying subscribers.

    Despite strong download growth, India still generates a disproportionately small share of AI app revenue, accounting for about 1% of in-app purchases even as it drives roughly 20% of global GenAI app downloads, according to the Sensor Tower data shared with TechCrunch, highlighting the monetization challenge in one of the industry’s fastest-growing markets.

    GenAI app adoption in India accelerated sharply through 2025, with downloads peaking in September and October at year-over-year growth rates of about 320% and 260%, respectively, according to the data. Yet the surge in usage did not fully translate into revenue gains. In November and December 2025, AI app in-app purchase revenue in India fell 22% and 18% month over month, respectively. ChatGPT’s revenue dropped even more sharply — down 33% and 32% over the same period following the November launch of free sub-$5 ChatGPT Go access — reflecting the near-term impact of aggressive promotional pushes.

    Image Credits: Sensor Tower

    ChatGPT still commands more than 60% of GenAI in-app revenue in India, meaning shifts in its pricing strategy can significantly influence overall market performance.


    Alongside promotional pushes, Sensor Tower attributed the surge in GenAI app adoption in India last year to a mix of new product launches, including the debut of platforms such as DeepSeek, Grok, and Meta AI, as well as upgrades to major chatbots like ChatGPT, Gemini, Claude, and Perplexity. Viral interest in AI-generated content also helped fuel adoption, with content creation and editing tools accounting for seven of the 20 most downloaded GenAI apps in India in 2025.

    The user surge has been equally pronounced. India accounted for about 19% of the global user base of leading AI assistant apps in 2025, ahead of the U.S. at 10%, Sensor Tower said. ChatGPT continues to dominate the Indian market by monthly active users, though rivals including Google’s Gemini and Perplexity have also seen rapid growth following promotional offers. ChatGPT was the most downloaded GenAI app in India and globally in 2025, according to earlier Sensor Tower data. Earlier this month, OpenAI’s CEO said that the chatbot now has more than 100 million weekly active users in India.

    The promotional push in India reflects a broader strategy by AI firms to reduce pricing friction in a highly value-conscious market, betting that early user adoption and engagement will translate into stronger long-term retention once free access periods expire, said Sneha Pandey, insights analyst at Sensor Tower.

    India’s appeal lies in its massive digital base. The country has more than a billion internet users and around 700 million smartphone owners, making it one of the largest potential markets for AI services globally and a critical battleground for user growth.

    Nonetheless, user engagement in India still trails more mature markets. In 2025, users of leading AI chatbot apps in the U.S. spent about 21% more time per week on the apps than their counterparts in India and logged 17% more sessions on average, per Sensor Tower.

    “AI in-app revenues will likely see meaningful but gradual improvement as users become more deeply integrated into these platforms, making sustained engagement paramount,” Pandey told TechCrunch.

    She added that pricing pressure in India is likely to remain elevated given the country’s young and value-conscious user base, making lower-cost tiers, telecom bundles, and micro-transaction models important for long-term retention.

    ChatGPT remained the clear market leader in India entering 2026, with 180 million monthly active users in January, per Sensor Tower, followed by Google’s Gemini with 118 million, Perplexity with 19 million, and Meta AI with 12 million. The figures underline both the scale of India’s AI opportunity and the growing challenge for firms to convert rapid user adoption into sustained revenue.

    Google, OpenAI, and Perplexity did not respond to requests for comment.


    Jagmeet Singh


  • Sam Altman gets defensive about AI’s massive electricity usage: ‘It takes a lot of energy to train a human’ | Fortune


    OpenAI CEO Sam Altman isn’t worried about AI’s increasingly glaring resource consumption, arguing that humans require a lot of resources too.

    In an on-stage interview at the India AI Impact Summit, he went on the defensive after he was asked about ChatGPT’s water needs.

    He dismissed claims that the chatbot uses gallons of water per query as “completely untrue, totally insane,” according to a clip posted by The Indian Express, explaining that data centers powering ChatGPT have largely moved away from water-heavy “evaporative cooling” to prevent overheating.

    Altman was then asked about the electricity needed for AI. In contrast to the issue of water, he claimed it was “fair” to bring up the technology’s energy requirements, saying “We need to move toward nuclear, or wind, or solar [energy] very quickly.”

    But he pointed out that comparing AI’s power needs to humans isn’t exactly apples to apples.

    “It also takes a lot of energy to train a human,” he said, prompting some in the crowd to laugh. “It takes, like, 20 years of life, and all of the food you eat during that time before you get smart.”

    Altman expanded even further by noting that today’s humans wouldn’t even be here were it not for their ancestors dating back hundreds of thousands of years to when modern humans first emerged.

    “Not only that, it took, like, the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science or whatever to produce you,” he added.

    When comparing humans to ChatGPT’s potential, you have to take this context into account, he argued. A fair comparison would pit the energy a human uses to answer a query against the energy an AI uses once it is trained. On that measure, “probably, AI has already caught up on an energy efficiency basis measured that way.”

    In a June 2025 blog post, Altman claimed each ChatGPT query takes about 0.34 watt-hours of electricity, or around what an oven uses in about a second. Still, he published this fact before OpenAI released its newest GPT-5 model and its subsequent upgrades. Energy consumption can also vary based on the complexity of a query, for example, answering a question versus creating an image.

    Experts have warned that AI as a whole will greatly increase its cumulative power and water consumption over the next 20 years or so. Overall, AI’s water usage is set to grow by about 130%, or by about 30 trillion liters (7.9 trillion gallons) of water through 2050, according to a January report by water technology company Xylem and market research firm Global Water Intelligence.

    Over that same period, rising electricity demands are expected to increase the water use for data centers’ power generation by about 18%, reaching roughly 22.3 trillion liters (5.8 trillion gallons) per year. Meanwhile, the ever more complex chips data centers use will need more water during the manufacturing process, which will skyrocket the amount they require by 600% to 29.3 trillion liters (7.7 trillion gallons) annually from about 4.1 trillion liters (1.8 trillion gallons) today.

    While OpenAI has moved away from evaporative cooling, 56% of all data centers globally still use the method in some form, according to the Xylem and Global Water Intelligence report. 

    OpenAI’s own 800-acre data center complex in Abilene, Texas, will reportedly use water, albeit in a more efficient, closed-loop system that continuously recirculates water to cool the facility, the Texas Tribune reported. The data center will initially draw 8 million gallons of water from the city of Abilene to fill its cooling system.


    Marco Quiroz-Gutierrez


  • With AI, investor loyalty is (almost) dead: At least a dozen OpenAI VCs now also back Anthropic  | TechCrunch


    With OpenAI on the verge of finalizing a new $100 billion round, and Anthropic just closing its own monster $30 billion raise, one thing is clear: The concept of investor “loyalty” is only hanging on by a thread. 

    At least a dozen direct investors in OpenAI were announced as backers in Anthropic’s $30 billion raise earlier this month, including Founders Fund, Iconiq, Insight Partners, and Sequoia Capital. 

    Some dual investments are understandable if they come from the hedge fund or asset manager worlds, where their focus is still largely investing in public stocks (competitors or not). These include D1, Fidelity, and TPG.  

    One dual investment, however, was a bit shocking. Affiliated funds of BlackRock joined in Anthropic’s $30 billion raise even though BlackRock’s senior managing director and board member Adebayo Ogunlesi is also on OpenAI’s board of directors.

    In that world, it’s true that if various BlackRock funds get a chance to own OpenAI stock, they are likely to take it, never mind the personal association of a member of their senior leadership. (BlackRock runs every type of fund, including mutual funds, closed-end funds, and ETFs.) And we all know the history of OpenAI and Microsoft’s relationship and why Microsoft is hedging its bets. Ditto for Nvidia.

    But venture capital funds have — until now — operated differently.

    VCs market themselves as “founder friendly” and “helpful,” the idea being that when a VC firm buys a chunk of a startup, the investor will help that startup be successful, particularly against its major rivals. If you are an owner of both OpenAI and Anthropic, who does your loyalty belong to, besides your own investors?


    Additionally, startups are private companies. They typically share confidential information with their direct investors on their business status — data that isn’t disclosed publicly the way it is with public companies. In many cases, the VCs also take board seats, which carries another level of fiduciary responsibility to their portfolio companies. 

    What makes this particular case even more interesting is that Sam Altman comes from the world of venture capital, as a former president of Y Combinator. He knows the drill. In 2024, he reportedly gave his investors a list of OpenAI’s rivals that he didn’t want them to back. It largely included companies launched by folks who left OpenAI, including Anthropic, xAI, and Safe Superintelligence. 

    Altman later denied that he told OpenAI investors they would be barred from future rounds if they backed his list of perceived rivals. Altman did admit that he said if they “made non-passive investments,” they would no longer receive OpenAI’s confidential business information, according to documents in the lawsuit between Elon Musk and OpenAI, Business Insider reported.

    AI is also breaking the mold because of the record-breaking amounts of money that the largest AI labs are raising as they experience never-before-seen growth (and never-before-seen data center needs). At some point, when the hat is being passed around, the needs are so great and the possibilities of returns are so large, who can be expected to say no? 

    It turns out that not all venture investors have yet slid down the slippery slope. Andreessen Horowitz backs OpenAI but not (yet) Anthropic. Menlo Ventures backs Anthropic but not (yet) OpenAI, for instance.

    In fact, in our admittedly not exhaustive research, we found a dozen investors that appear to only have direct investments in one of these companies, not both. 

    Others include Bessemer Venture Partners, General Catalyst, and Greenoaks. (Note: We originally asked Claude to give us the list of dual investors. It got almost as many entries wrong as it got right, so all this for a very cool tech whose work sometimes remains less trustworthy than an intern’s.)

    Still, as we previously reported, the fact that this longstanding rule has been tossed by some of the most respected firms in the Valley, like Sequoia, is notable. One investor we reached out to simply shrugged and said that as long as the firm doesn’t have a board seat, no one sees the harm in it anymore.  

    Still, conflict-of-interest policies should now become another thing that founders ask about before signing that term sheet, no matter who it’s from. 


    Julie Bort


  • Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate


    Sam Altman challenged critics of A.I.’s water and electricity consumption. Photo by John MacDougall/AFP via Getty Images

    Sam Altman is pushing back on mounting criticism over the environmental toll of A.I. The OpenAI chief has dismissed claims about A.I.’s water consumption as “fake” and drawn comparisons between the electricity required to power A.I. systems and the energy it takes to develop human intelligence.

    Figures suggesting that tools like ChatGPT consume multiple gallons of water per query are “totally insane” and have “no connection to reality,” Altman said in a Feb. 20 interview with The Indian Express on the sidelines of the AI Impact Summit in New Delhi. Last year, Altman claimed that ChatGPT uses 0.000085 gallons of water per query—roughly one-fifteenth of a teaspoon—though he did not explain how he calculated that figure.

    A.I.’s water footprint largely stems from the need for evaporative cooling systems used to keep data center hardware from overheating. But Altman argued that companies like OpenAI are no longer directly managing such cooling processes. Many A.I. developers, he noted, are shifting toward cooling systems that recirculate liquid rather than continually drawing fresh supplies. Meanwhile, tech giants like Microsoft, Meta, Google and Amazon have pledged to replenish more water than they withdraw by 2030.

    Even so, data centers continue to drink up water at a rapid pace. Total A.I.-related water consumption for cooling reached 23.7 cubic kilometers in 2025, a 38 percent increase over 2020, and is expected to more than triple over the next 25 years, according to a January report from Xylem. Despite the industry’s pivot to alternative methods, the report found that 56 percent of data center capacity still relies on some form of evaporative cooling.

    Altman was more measured when it came to electricity usage. “What is fair, though, is the energy consumption,” he said. “We need to move towards nuclear, wind, and solar very quickly.”

    Last April, the International Energy Agency reported that data centers accounted for roughly 1.5 percent of global electricity consumption in 2024. Their power use is rising at a rate more than four times faster than overall electricity demand and is expected to more than double by 2030.

    In response, major tech companies are pursuing data center agreements tied to alternative energy sources, including nuclear power, to ease pressure on grids. Altman, who previously led Y Combinator, has personally invested in nuclear ventures such as Oklo, which is developing small-scale nuclear plants, and Helion, which aims to commercialize nuclear fusion.

    The OpenAI CEO also argued that critics overlook the energy required to develop human intelligence. “People talk about how much energy it takes to train an A.I. model relative to how much it costs a human to do one inference query,” he said. “But it also takes a lot of energy to train a human—it takes, like, 20 years of life and all the food you eat during that time before you get smart.”

    A more appropriate comparison, he suggested, would measure the energy used by a fully trained A.I. model to answer a question against that used by a human doing the same task. “Probably A.I. has already caught up on an energy efficiency basis measured that way.”

    The remarks quickly sparked debate online over whether such comparisons are appropriate. “He’s saying a really big spreadsheet and a baby are morally equivalent,” wrote Matt Stoller, research director of the American Economic Liberties Project, in a post on X. Sridhar Vembu, founder and chief scientist of software firm Zoho Corporation, also took issue with the OpenAI chief’s statements. A.I. should “quietly recede into the background” instead of dominating our lives, said the billionaire on X. “I do not want to see a world where we equate a piece of technology to a human being.”



    Alexandra Tremayne-Pengelly


  • Sam Altman: Know What Else Used a Lot of Energy? Human Civilization


    At last week’s India AI Impact Summit in New Delhi, industry leaders convened to discuss the future of artificial intelligence and how best to squeeze it into parts of your life you haven’t even considered. Notably absent was Bill Gates, who dropped out hours before his scheduled keynote over the ongoing scrutiny about his presence in the Epstein Files (though he continues to deny any wrongdoing). While the convention was reportedly a bit chaotic, what with the protests and all, the luminaries from around the tech world present nonetheless kept things upbeat and optimistic, declaring “full steam ahead” on the technological hype train carrying our species and planet off a cliff.

    Also in attendance was OpenAI’s Sam Altman, who earned numerous headlines over the course of the event for his words and antics. His buzz blitzkrieg started on Thursday at a seemingly easy photo-op layup with Indian Prime Minister Narendra Modi and other AI executives, all raising their joined hands in a celebratory display of industry-wide solidarity. Altman and Dario Amodei, his former colleague and the present CEO of Anthropic, standing to his left, notably refused to complete the chain and hold each other’s hands, making for an all-too-poignant moment. Altman would continue to make news throughout the summit for his comments on the industry’s “urgent” need for global regulation and his sneaking suspicion that companies might actually be using AI as a scapegoat to whitewash their layoffs.

    Ever the yapper, Altman has bagged yet another round of earned media for an interview with The Indian Express’ Anant Goenka, during which he posited some controversial rebuttals to concerns about AI’s environmental impact.

    Altman started off by saying the claims about ChatGPT consuming “‘17 gallons of water for each query’ or whatever,” are “completely untrue, totally insane, no connection to reality,” before qualifying that, OK, maybe it was a valid concern when his company “used to do evaporative cooling in data centers.”

    He went on to say that there is “fair” concern about the amount of energy data centers eat to crank out the most soulless slop you’ve ever seen, but suggested the onus of responsibility for dealing with AI’s ravenous appetite falls to the energy sector itself, which Altman feels needs to “move towards nuclear or wind and solar very quickly.”

    Altman then stunned the crowd and firmly re-entered the discourse with a mind-blowing truth bomb for those who still felt AI was consuming too much energy.

    “It also takes a lot of energy to train a human,” Altman rejoined euphorically. “It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.”

    It is true that every person and the sum total of human civilization have consumed a sizable amount of energy (and water) to get to where we are today. While the value comparison of a nascent tech industry and its models to the entirety of civilization and human beings may have elicited adulation at the summit, Altman got an icier reception from the internet. Social media quickly took to roasting the remarks as “dystopian” and “deeply antisocial and antihuman.”

    Perhaps further illuminating the backlash, Altman’s energy comments butt up against the frustrating lack of transparency within the industry our collective futures now hinge upon. There are currently no regulations in place requiring data centers to disclose their water and energy consumption. Furthermore, center employees and business partners are typically muzzled by nondisclosure agreements. This has made the industry’s true expenditure levels tricky for reporters and researchers to pin down.

    At least we’ve got Sam to keep us informed while waiting for some clarity about what’s actually going on and being used in those centers.


    Justin Caffier


  • Sam Altman would like to remind you that humans use a lot of energy, too | TechCrunch


    OpenAI CEO Sam Altman addressed concerns about AI’s environmental impact this week while speaking at an event hosted by The Indian Express.

    For one thing, Altman — who was in India for a major AI summit — said concerns about AI’s water usage are “totally fake,” though he acknowledged it was a real issue when “we used to do evaporative cooling in data centers.”

    “Now that we don’t do that, you see these things on the internet where, ‘Don’t use ChatGPT, it’s 17 gallons of water for each query’ or whatever,” Altman said. “This is completely untrue, totally insane, no connection to reality.”

    He added that it’s “fair” to be concerned about “the energy consumption — not per query, but in total, because the world is now using so much AI.” In his view, this means the world needs to “move towards nuclear or wind and solar very quickly.”

    There’s no legal requirement for tech companies to disclose how much energy and water they use, so scientists have been trying to study it independently. Data centers have also been connected to rising electricity prices.

    Citing a previous conversation with Bill Gates, the interviewer asked whether it’s accurate to say a single ChatGPT query currently uses the equivalent of 1.5 iPhone battery charges, to which Altman replied, “There’s no way it’s anything close to that much.”

    Altman also complained that many discussions about ChatGPT’s energy usage are “unfair,” especially when they focus on “how much energy it takes to train an AI model, relative to how much it costs a human to do one inference query.”


    “But it also takes a lot of energy to train a human,” Altman said. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

    So in his view, the fair comparison is, “If you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question versus a human? And probably, AI has already caught up on an energy efficiency basis, measured that way.”

    You can watch the full interview below. The conversation about water and energy usage begins at around 26:35.


    Anthony Ha


  • OpenAI debated calling police about suspected Canadian shooter’s chats | TechCrunch


    An 18-year-old who allegedly killed eight people in a mass shooting in Tumbler Ridge, Canada, reportedly used OpenAI’s ChatGPT in ways that alarmed the company’s staff.

    Jesse Van Rootselaar’s chats describing gun violence were flagged by tools that monitor the company’s LLM for misuse, and the account was banned in June 2025.

    Staff at the company debated whether or not to reach out to Canadian law enforcement over the behavior but ultimately did not, according to the Wall Street Journal. An OpenAI spokesperson said Van Rootselaar’s activity did not meet the criteria for reporting to law enforcement; the company reached out to Canadian authorities after the incident.

    ChatGPT transcripts weren’t the only concerning part of Van Rootselaar’s digital footprint. She apparently created a game on Roblox, the world simulation platform frequented by children, which simulated a mass shooting at a mall. She also posted about guns on Reddit.

    Van Rootselaar’s instability was also known to local police, who had been called to her family’s home after she started a fire while under the influence of unspecified drugs.

    LLM chatbots built by OpenAI and its competitors have been accused of triggering mental breakdowns in users who lose their grip on reality while conversing with digital models. Multiple lawsuits have been filed citing chat transcripts in which chatbots encouraged people to commit suicide or offered assistance in doing so.

    If you are in a crisis or having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline.



    Tim Fernholz

    Source link

  • PayPal confirms data breach, OpenAI considered alerting police about shooter – Tech Digest




    Some PayPal users have started to receive emails from the company confirming a data breach that exposed personal information to a threat actor who gained access to PayPal’s systems, leading some to see unauthorized transactions on their accounts and to have their passwords reset. Here’s what you need to know. A breach notification letter, as reported by Bleeping Computer, has confirmed that some PayPal users have been impacted by a data breach after a hacker gained access to PayPal systems on July 1, 2025. Forbes 

    ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history. OpenAI said last June the company identified the account of Jesse Van Rootselaar via abuse detection efforts for “furtherance of violent activities”. The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement. Guardian

    Any time you add a new person to a group chat, they can’t see the previous messages, so they lack context about what’s already been discussed. WhatsApp is looking to solve this problem with its latest feature: Group Message History. The feature does exactly what its name suggests: it allows a new participant in a group chat to view past conversations. This way, they can quickly catch up on what’s being discussed and add their input. Android Police 

    An AI-generated video shows a crowd of young – mostly black – men, wearing balaclavas and padded jackets, slipping down a water slide into a dirty swimming pool with litter bobbing on the surface. The caption describes the scene as a taxpayer-funded water park in Croydon. It is one of a wave of deepfakes showing often absurd scenes of urban decline. Dozens of copycat accounts have begun producing similar content and collectively they have racked up millions of views across TikTok and Instagram Reels…It has been dubbed “decline porn”. BBC 


    Ikea is about to launch a new smart home gadget, after a listing appeared on the Thread Group’s product database. The Ikea Dubbelkisel Driver will enable users to remotely control integrated lighting and appears to be a newer version of the current Trådfri Driver for wireless control, which offers smart features but isn’t Matter-compatible. Now that the brand is beginning to focus more heavily on Matter-certified devices following its huge 21-product launch last November, this listing gives us a useful insight into what’s coming next. T3.com

    A new medical tampon may be able to detect the earliest signs of ovarian cancer, scientists have said. Researchers in Southampton are to carry out a trial of the new device, which identifies biological signals in vaginal fluid. There are around 7,600 new cases of ovarian cancer in the UK each year. Many of them are diagnosed at an advanced stage. Around 250 women are being recruited for the study, named Violet. Sky News 


    For latest tech stories go to TechDigest.tv




    Chris Price

    Source link

  • State to use AI to improve government


    BOSTON — Artificial intelligence is being used for everything from guiding self-driving cars and developing life-saving medicines to powering online search engines that help you find a plumber or pick holiday gifts for your family.

    And the technology could soon be employed by the state government to speed up the process of getting a state permit, renewing a vehicle registration or detecting fraud in public benefits programs.


    The Healey administration announced Friday that it plans to deploy ChatGPT’s artificial intelligence assistant platform in executive branch agencies with the goal of making state government work “better and faster” for residents.

    “This is about making government faster, more efficient, and more effective for the people we serve,” Gov. Maura Healey said in a prepared statement.

    Her administration said the AI rollout will be implemented as a phased approach across the executive branch “and will provide a safe and secure environment that protects state data.” The contract with ChatGPT was negotiated through a competitive procurement process, officials said.

    Once deployed, Massachusetts will be the first state to adopt the technology for the entire 40,000-employee executive branch, according to the Healey administration.

    The rollout of the new policy comes as state lawmakers are considering a myriad of proposals aimed at adding guardrails around use of the new technology.

    One proposal would require large artificial intelligence technology companies such as the online chatbot ChatGPT to register with the state Attorney General’s Office and disclose information about their algorithms.

    Another bill calls for banning “deepfakes,” or computer-generated manipulations of a person’s voice or likeness using machine learning to create visual and audio content that appears to be real. The technology is being used to generate fake imagery for anything from “revenge porn” to political mudslinging.

    In 2024, Attorney General Andrea Campbell sought to tighten the reins on artificial intelligence developers, suppliers and users, issuing new guidance that warned them not to run afoul of the state’s laws on consumer protection, anti-discrimination and data security.

    Last week, the state House of Representatives approved a pair of bipartisan bills setting new restrictions on the use of artificial intelligence in political campaigning. The proposals would require campaigns to disclose the use of AI in political ads and ban “deceptive” communications in campaign ads 90 days before an election.

    ChatGPT, which was created by San Francisco-based OpenAI, an artificial intelligence research firm co-founded by Elon Musk, allows users to enter themes, prompts and guidelines into the AI system that comes up with a response as if a human wrote it.

    On its website, the company says the ChatGPT bot is a “safe and useful” AI system that interacts in a “conversational way” with users, making it possible to “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

    But the emergence of AI technology has been steeped in controversy, with critics warning that Congress and state governments need to move quickly to set regulations governing its use.

    Healey administration officials say the rollout of ChatGPT will be done within a “walled-off, secure environment that protects state data and ensures that employee chat inputs do not train public AI models.” They said use of the technology will be governed by current state regulations and policies, which will be “regularly” updated, officials say.

    “By making ChatGPT available to the state workforce, we are empowering our employees with a secure, governed tool that can enhance service delivery while maintaining the highest standards for data privacy, security, and thoughtful, transparent usage of AI,” Jason Snyder, secretary of the Executive Office of Technology Services and Security, said in a statement.

    “Our focus is not just adopting AI, but doing so in a way that reflects our values, and strengthens trust with the residents we serve.”

    Christian M. Wade covers the Massachusetts Statehouse for North of Boston Media Group’s newspapers and websites. Email him at cwade@cnhinews.com.


    By Christian M. Wade | Statehouse Reporter

    Source link

  • The OpenAI mafia: 18 startups founded by alumni | TechCrunch


    Move over, PayPal mafia: There’s a new tech mafia in Silicon Valley. As the startup behind ChatGPT, OpenAI is arguably the biggest AI player in town. It is reportedly now in talks to finalize a $100 billion deal that would value the company at more than $850 billion.

    Many employees have come and gone since the company first launched a decade ago, and some have launched startups of their own. Among these, some have become top rivals (like Anthropic), while others have managed to raise billions on investor interest alone without even launching a product (see Thinking Machines Lab).

    In January, Aliisa Rosenthal, OpenAI’s first sales leader, spoke a little about this growing network. Like other OpenAI alums who did not become founders, she decided to become an investor and said she would tap the ex-OpenAI founder network for deal flow. We know Peter Deng, OpenAI’s former head of consumer products (and now general partner at Felicis), already has.

    Below is a roundup of the major startups founded by OpenAI alumni, in alphabetical order. And we are certain this list will grow over time. 

    David Luan — Adept AI Labs 

    David Luan was OpenAI’s engineering VP until he left in 2020. After a stint at Google, in 2021 he co-founded Adept AI Labs, a startup that builds AI tools for employees. The startup last raised $350 million at a valuation north of $1 billion in 2023, but Luan left in late 2024 to oversee Amazon’s AI agents lab after Amazon hired Adept’s founders.

    Dario Amodei, Daniela Amodei, and John Schulman — Anthropic

    Siblings Dario and Daniela Amodei left OpenAI in 2021 to form their own startup, San Francisco-based Anthropic, that has long touted a focus on AI safety. OpenAI co-founder John Schulman joined Anthropic in 2024, pledging to build a “safe AGI.” The company has since become OpenAI’s biggest rival and just raised a $30 billion Series G, nabbing a $380 billion valuation in the process. IPO rumors are also swirling, as the company reportedly prepares for a public listing that could come sometime this year. (OpenAI is also allegedly preparing for an IPO this year and is maybe even trying to beat Anthropic to the public market.) 

    Rhythm Garg, Linden Li, and Yash Patil — Applied Compute  

    Three ex-OpenAI staffers (Rhythm Garg, Linden Li, and Yash Patil) have reportedly raised $20 million for a startup called Applied Compute, as reported by Upstart Media. All three of them worked as technical staff at OpenAI for more than a year before leaving last May to launch the startup, per their LinkedIns. The startup helps enterprises train and deploy custom AI agents. Benchmark led the round, valuing the 10-month-old company at $100 million, Upstart Media reported. 


    Pieter Abbeel, Peter Chen, and Rocky Duan — Covariant

    The trio all worked at OpenAI in 2016 and 2017 as research scientists before founding Covariant, a Berkeley, California-based startup that builds foundation AI models for robots. In 2024, Amazon hired all three of the Covariant founders and about a quarter of its staff. The quasi-acquisition was viewed by some as part of a broader trend of Big Tech attempting to avoid antitrust scrutiny. 

    Tim Shi — Cresta 

    Tim Shi was an early member of OpenAI’s team, where he focused on building safe artificial general intelligence (AGI), according to his LinkedIn profile. He worked at OpenAI for a year in 2017 but left to found Cresta, a San Francisco-based AI contact center startup that has raised over $270 million from VCs like Sequoia Capital, Andreessen Horowitz, and others, according to a press release.

    Jonas Schneider — Daedalus

    Jonas Schneider led OpenAI’s software engineering for robotics team but left in 2019 to co-found Daedalus, which builds advanced factories for precision components. The San Francisco-based startup raised a $21 million Series A last year with backing from Khosla Ventures, among others.

    Andrej Karpathy — Eureka Labs

    Computer vision expert Andrej Karpathy was a founding member and research scientist at OpenAI, leaving the startup to join Tesla in 2017 to lead its autopilot program. Karpathy is also well-known for his YouTube videos explaining core AI concepts. He left Tesla in 2024 to found his own education technology startup, Eureka Labs, a San Francisco-based startup that is building AI teaching assistants.

    Margaret Jennings — Kindo

    Margaret Jennings worked at OpenAI in 2022 and 2023 until she left to co-found Kindo, which markets itself as an AI chatbot for enterprises. Kindo has raised over $27 million in funding, last raising a $20.6 million Series A in 2024. Jennings left Kindo in 2024 to head product and research at French AI startup Mistral, according to her LinkedIn profile.

    Maddie Hall — Living Carbon

    Maddie Hall worked on “special projects” at OpenAI but left in 2019 to co-found Living Carbon, a San Francisco-based startup that aims to create engineered plants that can suck more carbon out of the sky to fight climate change. Living Carbon raised a $21 million Series A round in 2023, bringing its total funding until then to $36 million, according to a press release.

    Liam Fedus — Periodic Labs  

    Liam Fedus, OpenAI’s VP of post-training research, left the company in March 2025 to team up with his former Google Brain colleague, Ekin Dogus Cubuk, and launch Periodic Labs. The startup seeks to use AI scientists to find new materials, particularly new superconducting materials. It came out of stealth mode in September 2025, armed with a massive $300 million in seed-round funding from backers that included Jeff Bezos, Eric Schmidt, Felicis and Andreessen Horowitz. 

    Aravind Srinivas — Perplexity

    Aravind Srinivas worked as a research scientist at OpenAI for a year until 2022, when he left the company to co-found AI search engine Perplexity. His startup has attracted a string of high-profile investors like Jeff Bezos and Nvidia, although it’s also caused controversy over alleged unethical web scraping. Perplexity, which is based in San Francisco, last reported a raise of $200 million at a $20 billion valuation. 

    Jeff Arnold — Pilot

    Jeff Arnold worked as OpenAI’s head of operations for five months in 2016 before co-founding San Francisco-based accounting startup Pilot in 2017. Pilot, which focused initially on doing accounting for startups, last raised a $100 million Series C in 2021 at a $1.2 billion valuation and has attracted investors like Jeff Bezos. Arnold worked as Pilot’s COO until leaving in 2024 to launch a VC fund.

    Shariq Hashme — Prosper Robotics

    Shariq Hashme worked for OpenAI for 9 months in 2017 on a bot that could play the popular video game Dota, per his LinkedIn profile. After a few years at data-labeling startup Scale AI, he co-founded London-based Prosper Robotics in 2021. The startup says it’s working on a robot butler for people’s homes, a hot trend in robotics that other players like Norway’s 1X and Texas-based Apptronik are also working on.

    Ilya Sutskever — Safe Superintelligence 

    OpenAI co-founder and chief scientist Ilya Sutskever left OpenAI in May 2024 after he was reportedly part of a failed effort to replace CEO Sam Altman. Shortly afterward, he co-founded Safe Superintelligence, or SSI, with “one goal and one product: a safe superintelligence,” he says. Details about what exactly the startup is up to are scant: It has no product and no revenue yet. But investors are clamoring for a piece anyway, and it’s been able to raise $2 billion, with its latest valuation reportedly rising to $32 billion this month. SSI is based in Palo Alto, California, and Tel Aviv, Israel.

    Emmett Shear — Stem AI

    Emmett Shear is the former CEO of Twitch who served as OpenAI’s interim CEO for a few days in November 2023 before Sam Altman rejoined the company. Shear launched an AI company, Stem AI, in 2024 (though it seems to have since rebranded as Softmax). The company, which appears to be a research company, has attracted funding from Andreessen Horowitz.

    Mira Murati — Thinking Machines Lab 

    Mira Murati, OpenAI’s CTO, left OpenAI to found her own company, Thinking Machines Lab, which emerged from stealth in February 2025. It said at the time (rather vaguely) that it will build AI that’s more “customizable” and “capable.” The San Francisco AI startup, now valued at $12 billion, announced its first product late last year: an API that fine-tunes language models. It recently made headlines when two of its co-founders announced earlier this year that they would return to OpenAI. 

    Kyle Kosic — xAI

    Kyle Kosic left OpenAI in 2023 to become a co-founder and infrastructure lead of xAI, Elon Musk’s AI startup that offers a rival chatbot, Grok. In 2024, however, he hopped back to OpenAI, where he remains. Meanwhile, xAI (which acquired Musk’s social media site X) was purchased by Musk’s SpaceX, giving the combined company a valuation of $1.25 trillion. It is looking to go public sometime in June for what could be a historic listing. 

    Angela Jiang — Worktrace AI

    Angela Jiang left OpenAI in 2024, after working as a product manager and on the public policy team. In April 2025, she quietly launched Worktrace, which uses AI to help enterprises make business operations more efficient. It observes employee work patterns and automates workflow, according to the company’s website. The business is backed by Mira Murati, OpenAI’s former CTO, who went on to launch Thinking Machines Lab. It is also backed by OpenAI’s startup fund, in addition to a slew of other OpenAI names, like its chief strategy officer, Jason Kwon. 

    Stealth Startups

    In addition to these startups, a number of other former OpenAI employees have founded startups that are still in stealth mode, according to various updates TechCrunch found on LinkedIn. For instance, it seems that former OpenAI researcher Danilo Hellermark has been working on a generative AI stealth startup for the past few years. He officially left OpenAI at the beginning of 2023. There’s also one apparently in the works from Lucas Negritto, who worked on OpenAI’s technical team and left the company in 2023 after three years. Since then, he’s founded one startup and has been working on another since August 2025, according to his LinkedIn. 


    Charles Rollet, Dominic-Madori Davis

    Source link

  • New Research Shows AI Agents Are Running Wild Online, With Few Guardrails in Place


    In the last year, AI agents have become all the rage. OpenAI, Google, and Anthropic all launched public-facing agents designed to take on multi-step tasks handed to them by humans. In the last month, an open-source AI agent called OpenClaw took the web by storm thanks to its impressive autonomous capabilities (and major security concerns). But we don’t really have a sense of the scale of AI agent operations, and whether all the talk is matched by actual deployment. The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) set out to fix that with its recently published 2025 AI Agent Index, which provides our first real look at the scale and operations of AI agents in the wild.

    Researchers found that interest in AI agents has undoubtedly skyrocketed in the last year or so. Research papers mentioning “AI Agent” or “Agentic AI” in 2025 more than doubled the total from 2020 to 2024 combined, and a McKinsey survey found that 62% of companies reported that their organizations were at least experimenting with AI agents.

    With all that interest, the researchers focused on 30 prominent AI agents across three separate categories: chat-based options like ChatGPT Agent and Claude Code; browser-based bots like Perplexity Comet and ChatGPT Atlas; and enterprise options like Microsoft 365 Copilot and ServiceNow Agent. While the researchers didn’t provide exact figures on just how many AI agents are deployed across the web, they did offer a considerable amount of insight into how they are operating, which is largely without a safety net.

    Just half of the 30 AI agents that got put under the magnifying glass by MIT CSAIL include published safety or trust frameworks, like Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, or Microsoft’s Responsible AI Standard. One in three agents has no safety framework documentation whatsoever, and five out of 30 have no compliance standards. That is troubling when you consider that 13 of 30 systems reviewed exhibit frontier levels of agency, meaning they can operate largely without human oversight across extended task sequences. Browser agents in particular tend to operate with significantly higher autonomy. This would include things like Google’s recently launched AI “Autobrowse,” which can complete multi-step tasks by navigating different websites and making use of user information to do things like log into sites on your behalf.

    One of the troubles with letting agents browse freely and with few guardrails is that their activity is nearly indistinguishable from human behavior, and they do little to dispel any confusion that might occur. The researchers found that 21 out of the 30 agents provide no disclosure to end users or third parties that they are AI agents and not human users. This results in most AI agent activity being mistaken for human traffic. MIT found that just seven agents published stable User-Agent (UA) strings and IP address ranges for verification. Nearly as many explicitly use Chrome-like UA strings and residential/local IP contexts to make their traffic requests appear more human, making it next to impossible for a website to distinguish between authentic traffic and bot behavior.
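    The verification mechanism those seven transparent operators enable can be sketched in a few lines: a site compares a request’s User-Agent string and source IP against the identity the agent operator publishes. The agent name, UA prefix, and CIDR range below are hypothetical, not any real operator’s published values:

```python
import ipaddress

# Hypothetical published identity for an AI agent: a stable User-Agent
# prefix plus the IP ranges its traffic originates from. Values are made
# up for illustration; a real operator would publish its own.
PUBLISHED_AGENTS = {
    "ExampleAgent": {
        "ua_prefix": "ExampleAgent/",
        "cidrs": [ipaddress.ip_network("203.0.113.0/24")],  # TEST-NET range
    },
}

def classify_request(user_agent: str, ip: str) -> str:
    """Return the agent's name if both the UA string and the source IP
    match a published declaration; otherwise 'unverified'."""
    addr = ipaddress.ip_address(ip)
    for name, decl in PUBLISHED_AGENTS.items():
        if user_agent.startswith(decl["ua_prefix"]) and any(
            addr in net for net in decl["cidrs"]
        ):
            return name
    return "unverified"
```

    The limitation the researchers describe falls out immediately: an agent sending a Chrome-like UA string from a residential IP lands in the same “unverified” bucket as a human visitor, so the site has nothing to distinguish them.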

    For some AI agents, that’s actually a marketable feature. The researchers found that BrowserUse, an open-source AI agent, sells itself to users by claiming to bypass anti-bot systems to browse “like a human.” More than half of all the bots tested provide no specific documentation about how they handle robots.txt files (text files that are placed in a website’s root directory to instruct web crawlers on how they can interact with the site), CAPTCHAs that are meant to authenticate human traffic, or site APIs. Perplexity has even made the case that agents acting on behalf of users shouldn’t be subject to scraping restrictions since they function “just like a human assistant.”
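    For robots.txt specifically, honoring the convention is not hard: the rules sit in a plain text file and Python’s standard library can evaluate them before an agent fetches a page. A minimal sketch, with an illustrative agent name and rules rather than any real site’s file:

```python
from urllib.robotparser import RobotFileParser

# A toy robots.txt: the hypothetical "ExampleAgent" is barred from /private/,
# everyone else may fetch anything. A well-behaved agent checks these rules
# before each request.
robots_txt = """\
User-agent: ExampleAgent
Disallow: /private/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("ExampleAgent", "https://example.com/private/report"))  # False
print(parser.can_fetch("ExampleAgent", "https://example.com/public/page"))     # True
```

    The check is cheap and unambiguous, which is why the absence of any documentation about it in more than half the agents surveyed is notable.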

    The fact that these agents are out in the wild without much protection in place means there is a real threat of exploits. There is a lack of standardization for safety evaluations and disclosures, leaving many agents potentially vulnerable to attacks like prompt injections, in which an AI agent picks up on a hidden malicious prompt that can make it break its safety protocols. Per MIT, nine of 30 agents have no documentation of guardrails against potentially harmful actions. Nearly all of the agents fail to disclose internal safety testing results, and 23 of the 30 offer no third-party testing information on safety.

    Just four agents—ChatGPT Agent, OpenAI Codex, Claude Code, and Gemini 2.5—provided agent-specific system cards, meaning the safety evaluations were tailored to how the agent actually operates, not just the underlying model. And while frontier labs like OpenAI and Google offer more documentation on “existential and behavioral alignment risks,” they lack details on the type of security vulnerabilities that may arise during day-to-day activities—a habit the researchers refer to as “safety washing”: publishing high-level safety and ethics frameworks while only selectively disclosing the empirical evidence required to rigorously assess risk.

    There has at least been some momentum toward addressing the concerns raised by MIT’s researchers. Back in December, OpenAI and Anthropic (among others) joined forces, announcing a foundation to create a development standard for AI agents. But the AI Agent Index shows just how wide the transparency gap is when it comes to agentic AI operation. AI agents are flooding the web and workplace, functioning with a shocking amount of autonomy and minimal oversight. There’s little to indicate at the moment that safety will catch up to scale any time soon.


    AJ Dellinger

    Source link

  • Big AI Isn’t Waiting for the Backlash


    Photo-Illustration: Intelligencer; Photo: Getty Images

    Meta’s hard and early pivot into artificial intelligence hasn’t exactly gone as planned, with tens of billions of investment dollars sunk into middling models, departmental restructurings, and clashing visions. In technical terms, the company remains an AI also-ran. In another way, though, it’s emerging as an industry leader: It’s spending a ton of money on politics.

    Regarding regulation and national law, firms like Meta are, for now, in reasonably good shape. They have an administration that’s broadly deregulatory and specifically pro–AI industry and has mostly limited its threats of intervention to complaints about “wokeness” — a problem for a company like Anthropic, perhaps, but maybe less so for ones like Meta that preemptively ponied up and fell in line. Plenty of money will be spent by the AI industry on national politics, of course (OpenAI president Greg Brockman recently became a Trump PAC megadonor), but for now, AI firms are pushing further into state and local politics and Meta is spending a lot. According to the New York Times:

    Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives … Political operatives tied to A.I. interests have focused this election cycle on state capitols out of concern that states were developing a patchwork of laws that would stifle A.I. development.

    This, says the Times, is “the biggest election investment by Meta” so far and is focused, to start, on supporting AI-friendly Republicans in Texas and Democrats in Illinois. Meta isn’t alone here: A fleet of new PACs backed by other AI firms is funneling money into local and state elections across the country.

    What are these companies lobbying for, exactly? Their needs fit imperfectly into two categories. First, they want to fend off direct regulation of how AI products are built, used, and deployed. That includes avoiding “transparency” laws that often include risk audits, whistleblower protections, and frameworks for ensuring AI “safety,” in both the catastrophic and child-safety senses of the word. In this fight, AI firms have a useful ally in the federal government, which has been actively pressuring state lawmakers to drop the issue, most recently in Utah.

    Closer to the ground and a bit further from the national political discourse, for now, is the matter of data centers. Much of the money AI companies spend on AI — raised from investors, their own balance sheets, and, more recently, bond sales — goes into buying GPUs and leasing or building structures in which to put them. These structures then need huge amounts of power coming from either the grid or newly constructed generators of one type or another (if you’re xAI, this means standing up gas turbines without permits; if you’re Meta, this may look like partnering directly with a nuclear power plant). In addition to the staggering power needs, data centers use a lot of water. And despite their eye-popping costs to build and run, they barely create any jobs. For the sorts of communities being approached with these projects — places that may be persuaded to accept the mixed prospect of hosting an Amazon warehouse or, say, a massive new ICE detention center — AI data centers are uniquely unappealing. As a result, they encounter local resistance from across the political spectrum. According to the Financial Times:

    Over the past year, the White House has courted tech billionaires and gone out of its way to protect the AI industry’s agenda, fast-tracking permits for data centre construction and approving the sales of advanced chips to China while cracking down on states’ attempts to regulate chatbots … But across the US, citizens, clergy and elected officials in conservative communities are leading a grassroots rebellion against the rapid rollout of the technology.

    Data centers offer an almost perfectly sympathetic NIMBY cause. They’re a drain on local resources, straining infrastructure and driving up utility prices. They exist to support a technology about which people are fairly pessimistic across the political spectrum. They’re pitched as investments in an exciting future, but that future will unfold elsewhere while your town, now designated as an infrastructural non-place, is just stuck with a big jobless box that uses more power and water than everyone else combined.

    The surge in local lobbying isn’t about winning this argument — good luck with that! — so much as it’s about getting as much done as possible while the companies still can, buying support at the state level and breaking ground in as many municipalities as possible before data-center backlash becomes a universal condition of local politics in America. AI firms always talk about how they’re in a technological race with one another or against China in which every day counts. But they’re also in a race to take advantage of a brief domestic political moment during which they’re relatively unencumbered and haven’t yet been metabolized into American politics. At the national, state, and local levels, this may be as good as the AI industry will ever have it. And ahead of the midterms — not to mention the prospect of 2028 — it’s lobbying like it’s running out of time.






    John Herrman

    Source link

  • OpenAI deepens India push with Pine Labs fintech partnership | TechCrunch


    As India pitches itself as a global hub for applied artificial intelligence, OpenAI has partnered with Pine Labs to integrate AI-driven reasoning into the fintech firm’s payments stack, automating settlement and invoicing workflows in a move the companies say could help accelerate AI-led commerce in India.

    The partnership will see Pine Labs embed OpenAI’s application programming interfaces — software tools that let companies plug AI into their existing systems — within its payments and commerce infrastructure, the companies said on Thursday, all with the aim of enabling AI-assisted settlement, reconciliation, and invoicing workflows.

    The deal underscores OpenAI’s broader push to expand its footprint in India, one of its fastest-growing markets, as it looks to move beyond being known primarily as the maker of ChatGPT and embed its technology into education, enterprise, and infrastructure. Earlier this week, OpenAI partnered with leading Indian engineering, medical, and design institutions to bring AI tools into higher education, betting that India’s large developer base and more than a billion internet users will play a central role in the next phase of AI adoption.

Pine Labs is already using AI internally to automate parts of its settlement and reconciliation process, cutting the time it takes to clear daily settlements from hours to minutes, according to chief executive B. Amrish Rau. The Noida-based company previously relied on manual checks by dozens of employees to process funds from multiple banks before markets opened each day, a workflow that is now largely handled by AI-driven systems, he said in an interview.
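At its core, the daily reconciliation described above is a matching problem: pair each incoming bank credit with the settlement batch it is expected to fund, and flag anything left over for review. A toy sketch of that logic (bank names, field shapes, and amounts are hypothetical illustrations, not Pine Labs' actual systems or schema):

```python
# Toy daily-settlement reconciliation: match incoming bank credits to
# expected settlement batches by (bank, amount), flagging mismatches.
from collections import defaultdict

def reconcile(expected_batches, bank_credits):
    """Both arguments are lists of (bank, amount) tuples."""
    # Count how many times each (bank, amount) batch is still awaited.
    pending = defaultdict(int)
    for bank, amount in expected_batches:
        pending[(bank, amount)] += 1

    matched, unmatched_credits = [], []
    for credit in bank_credits:
        if pending[credit] > 0:
            pending[credit] -= 1
            matched.append(credit)
        else:
            # Credit arrived with no matching batch: route to review.
            unmatched_credits.append(credit)

    # Batches that never received a matching credit.
    missing = [k for k, n in pending.items() for _ in range(n)]
    return matched, unmatched_credits, missing

expected = [("HDFC", 120_000), ("ICICI", 85_500), ("SBI", 40_250)]
credits = [("HDFC", 120_000), ("SBI", 40_250), ("ICICI", 85_000)]  # short-paid
matched, extra, missing = reconcile(expected, credits)
```

The exceptions (`extra` and `missing`) are the items that once went to dozens of human checkers each morning; an AI-assisted system would triage those rather than replace the deterministic matching itself.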

    For Pine Labs, the partnership is intended to extend those AI-driven efficiencies beyond internal operations to merchants and corporate clients, starting with business-to-business use cases such as invoice processing, settlements and payments orchestration, Rau told TechCrunch. He noted the company sees faster adoption in B2B workflows, where AI agents can handle large volumes of repetitive financial tasks under predefined rules, before similar capabilities reach consumer-facing payments.

    “People talk about retail AI, but the bigger impact of all of this is really efficiency improvement, especially in B2B,” Rau said. “If you look at invoicing and settlement, those are workflows where agents can actually drive the process end to end, and that’s where adoption can happen faster.”

    The rollout of more autonomous, agent-led payment workflows will move faster in overseas markets where regulations already allow such transactions, Rau said, while India is likely to see a more gradual adoption focused on AI-assisted commerce rather than fully agent-initiated payments. He said that Pine Labs is already prototyping agent-driven payments in parts of the Middle East and Southeast Asia, even as Indian regulations require tighter controls on how payments are authorized.


    For OpenAI, the partnership offers a route deeper into India’s payments and enterprise ecosystem as it looks to move beyond consumer-facing tools and embed its models into high-volume, regulated workflows. Rau said the collaboration is aimed at increasing merchant stickiness and expanding Pine Labs’ role from a payments processor to a broader commerce platform, with higher transaction volumes over time translating into incremental revenue.

    Pine Labs says it works with more than 980,000 merchants, 716 consumer brands, and 177 financial institutions, and has processed over 6 billion cumulative transactions valued at over ₹11.4 trillion (about $126 billion), per its prospectus published last year. The fintech operates across 20 countries, including Malaysia, Singapore, Australia, parts of Africa, the UAE, and the U.S., giving the OpenAI partnership reach across both Indian and international markets.

    Rau said the partnership does not involve revenue sharing between the two companies, with Pine Labs not taking a cut if its merchants choose to embed OpenAI’s tools. “We’ve kept it completely independent of each other — anything related to payment and payment services, we will get the benefit of it, and anything related to OpenAI revenues will go to them,” he said.

    The arrangement, Rau added, is also non-exclusive. He compared it to OpenAI’s partnership with Stripe in the U.S. and said Pine Labs remains open to working with other AI providers.

    Rau said Pine Labs is building additional security and compliance layers around AI-driven workflows to ensure that sensitive merchant and consumer transaction data remains protected, even as more of its payments systems are automated by AI.

    Pine Labs’ interest in AI-driven commerce builds on earlier work through its Setu unit, which has experimented with agent-led bill payment experiences using chatbots including ChatGPT and Anthropic’s Claude. Separately, India also began piloting consumer payments directly through AI chatbots last year.

    The new announcement comes as India hosts its AI Impact Summit in New Delhi, where global AI companies including OpenAI, Anthropic, and Google are showcasing their latest capabilities alongside Indian startups demonstrating AI applications aimed at large-scale deployment across sectors such as finance, healthcare, and education.

Jagmeet Singh

  • Humain CEO Tareq Amin Injects $3B Into Elon Musk’s xAI to Power Saudi A.I. Ambitions


    Humain CEO Tareq Amin’s $3 billion investment in xAI positions Saudi Arabia at the center of a rapidly shifting global A.I. power structure. Photo by Amal Alhasan/Getty Images for Fortune Media

    Tareq Amin, CEO of Saudi Arabia’s largest A.I. company, Humain, has been on a dealmaking blitz since taking the helm of the Kingdom’s national A.I. initiative last year. His latest move: a $3 billion investment in Elon Musk’s xAI. The investment was made during xAI’s $20 billion fundraising round in January, Humain announced today (Feb. 18). The raise came just weeks before xAI merged with Musk’s SpaceX earlier this month, as Musk consolidates his A.I., communications and space ambitions ahead of a widely anticipated IPO.

    Founded in 2025 by Crown Prince Mohammed Bin Salman and backed by Saudi Arabia's massive sovereign wealth fund, the Public Investment Fund, Humain sits at the center of the Kingdom's push to diversify its economy beyond oil. A core part of that mandate: building sovereign A.I. infrastructure at home.

    The xAI stake is the latest example of Humain’s ability to “deploy meaningful capital behind exceptional opportunities where long-term vision, technical excellence and execution converge,” said Amin in a statement. Amin, who previously led Aramco Digital and Japan’s Rakuten Mobile, has spent the past several months striking blockbuster partnerships with U.S. tech heavyweights, including Nvidia, AMD, Cisco, Amazon Web Services and Groq (not xAI’s chatbot Grok).

    Humain did not respond to requests for comment from Observer.

    Most of the partnerships are focused on expanding Saudi Arabia's data center footprint and compute capacity. A joint venture with AMD and Cisco, for example, aims to build domestic A.I. infrastructure with up to one gigawatt of capacity.

    xAI’s relationship with Humain dates back to November, when the companies unveiled plans for a 500-megawatt data center in Saudi Arabia. The facility—xAI’s first outside the U.S.—will run on Nvidia chips and deploy the company’s Grok models across the Kingdom.

    Humain’s deepening ties to xAI underscore a broader realignment in global A.I. alliances, with Gulf states emerging as critical capital providers and infrastructure hubs for American developers. In November, Humain and the United Arab Emirates’ A.I. company, G42, received U.S. approval to acquire up to 35,000 advanced A.I. chips each, marking a sharp reversal from earlier semiconductor export restrictions.

    Other regional players are also forging closer links with U.S. firms. G42 secured a $1.5 billion investment from Microsoft and is set to help develop Stargate UAE, an A.I. compute cluster in Abu Dhabi to be operated by OpenAI and Oracle.

    The Emirati-backed MGX has participated in large fundraising rounds for xAI, OpenAI and Anthropic, while Qatar’s sovereign wealth fund earlier this week joined Anthropic’s new $380 billion Series G financing—further cementing the Middle East’s growing influence over the future of A.I.

Alexandra Tremayne-Pengelly

  • Exclusive: OpenAI Has Poached Instagram’s Celebrity Whisperer


    OpenAI has hired Instagram’s vice president of global partnerships, Charles Porch, to serve as the AI company’s first-ever vice president of global creative partnerships. The newly created position is the latest move in OpenAI’s push to win over a skeptical entertainment industry.

    In his over 15 years at Instagram, Facebook, and Meta, Porch was instrumental in bringing high-profile figures to the platforms. He facilitated the exclusive Instagram launch of Beyoncé’s self-titled album in 2013, coordinated Instagram’s portrait studios at Vanity Fair’s Oscar Party and the Met Gala, convinced Pope Francis to join the social media platform in 2016, and led an initiative in 2025 to lure TikTok creators over to Instagram Reels with “Breakthrough Bonus” payments.

    OpenAI is hoping to reap similar benefits from Porch’s deep relationships with both talent and management in the worlds of music, film, fashion, art, sports, and the creator ecosystem.

    While Porch and the company offered sparse details on the still-evolving role, which will begin in March, the most likely applications of his talent include arranging deals to license entertainers’ likenesses to appear in OpenAI’s video generation model Sora, building out the future of interactive AI platforms, and promoting AI tools for artistic development in industries like music, fashion, and film.

    In an interview with Vanity Fair this week, Porch explained, “I’m going to be the person that’s talking to creative communities around the world to figure out how we build the best products to serve them.”

    AI companies have so far received a frosty reception in Hollywood over fears that the technology will replace jobs, erode creativity, and devalue intellectual property. In 2023, dual writers’ and actors’ strikes paralyzed the industry, held up largely by complex negotiations over the usage of artificial intelligence. Both unions won a number of protections, including guarantees of compensation should actors’ images be used to create digital doubles and guardrails on studios’ ability to replace human labor with AI. These contracts are set to expire this summer, however.

    In December, OpenAI made a major breakthrough with a $1 billion agreement with Disney. The three-year licensing deal will allow Sora to produce content featuring “animated, masked, and creature” characters from the worlds of Disney, Marvel, Pixar, and Star Wars.

    Licensing the likenesses of real people will be a far taller order. In recent months, big-name stars like Matthew McConaughey, Michael Caine, and Gwyneth Paltrow have licensed their voices to be recreated by AI companies ElevenLabs and Speechify for audio content, signaling an openness from talent and agencies to dip a toe into the world of AI, provided the right compensation models, data privacy agreements, and level of creative and reputational control are in place.

Julia Black

  • U.S. court bars OpenAI from using ‘Cameo’ | TechCrunch


    A federal district court in Northern California ruled in favor of Cameo, a platform that allows users to get personalized video messages from celebrities, and ordered OpenAI to stop using “Cameo” in its products and features.

    OpenAI had been using the "Cameo" name for a feature of its AI-powered video generation app Sora 2 that let users insert digital likenesses of themselves into AI-generated videos. In a ruling filed Saturday, the court said the name was similar enough to Cameo's trademark to cause consumer confusion, and rejected OpenAI's argument that "Cameo" was merely descriptive, finding that "it suggests rather than describes the feature."

    In November, the court granted Cameo a temporary restraining order barring OpenAI from using the word; the AI company subsequently renamed the feature "Characters."

    "We have spent nearly a decade building a brand that stands for talent-friendly interactions and genuine connection, and we like to say that 'every Cameo is a commercial for the next one,'" Cameo CEO Steven Galanis said in a statement.

    “This ruling is a critical victory not just for our company, but for the integrity of our marketplace and the thousands of creators who trust the Cameo name. We will continue to vigorously defend our intellectual property against any platform that attempts to trade on the goodwill and recognition we have worked so hard to establish,” he noted.

    “We disagree with the complaint’s assertion that anyone can claim exclusive ownership over the word ‘cameo,’ and we look forward to continuing to make our case,” an OpenAI spokesperson told Reuters in response to the ruling.

    OpenAI has been involved in several intellectual property cases in recent months. Earlier this month, the company ditched "IO" branding around its upcoming hardware products, according to court documents obtained by WIRED. In November, digital library app OverDrive sued OpenAI over its use of "Sora" for its video generation app. The company is also in legal disputes with artists, creatives, and media groups across multiple jurisdictions over copyright violations.


Ivan Mehta

  • India has 100M weekly active ChatGPT users, Sam Altman says | TechCrunch


    India has 100 million weekly active ChatGPT users, making the country one of OpenAI’s largest markets globally, CEO Sam Altman said ahead of a government-hosted AI summit.

    On Sunday, Altman outlined ChatGPT’s growing adoption in India in an article published in the Indian English daily Times of India, as OpenAI prepares to formally participate in the five-day India AI Impact Summit in New Delhi, beginning Monday. Altman is attending the event alongside senior executives from several of the world’s leading AI companies.

    The growth comes as OpenAI, like other leading AI firms, looks to India’s young population and its more than a billion internet users to fuel global expansion. The ChatGPT maker opened a New Delhi office in August 2025 after months of groundwork in the country, and has adjusted its approach for India’s price-sensitive market, including rolling out a sub-$5 ChatGPT Go tier that was later made free for a year for Indian users.

    In the article, Altman said India is ChatGPT’s second-largest user base after the United States, highlighting the South Asian nation’s growing weight in OpenAI’s global strategy. The disclosure comes as ChatGPT’s overall usage has surged worldwide, with the platform reaching 800 million weekly active users as of October 2025 and reported to be approaching 900 million.

    Altman also highlighted the role of students in driving adoption, saying India has the largest number of student users of ChatGPT globally.

    Indian students have become a key growth segment for leading AI companies more broadly, as rivals race to embed their tools in classrooms and learning workflows. Google has similarly targeted the market, offering Indian students a free one-year subscription to its AI Pro plan in September 2025. Separately, India accounts for the highest global usage of Gemini for learning, Chris Phillips, Google’s vice president and general manager for education, said last month.

    "With its focus on access, practical AI literacy, and the infrastructure that supports widespread adoption, India is well positioned to broaden who benefits from the technology and to help shape how democratic AI is adopted at scale," Altman wrote.


    ChatGPT’s rapid growth also highlights a broader challenge for AI companies in India: translating widespread adoption into sustained economic impact. Indian government initiatives such as the IndiaAI Mission — a national program aimed at expanding computing capacity, supporting startups and accelerating AI adoption in public services — seek to address those gaps. However, the country’s price-sensitive market and infrastructure constraints have made monetization and large-scale deployment more complex than in developed economies.

    “Given India’s size, it also risks forfeiting a vital opportunity to advance democratic AI in emerging markets around the world,” Altman wrote, warning that uneven access and adoption could concentrate AI’s economic gains in too few hands.

    Altman also signaled that OpenAI plans to deepen its engagement with the Indian government, writing that the company would soon announce new partnerships aimed at expanding access to AI across the country. He did not provide details, but said the focus would be on widening reach and enabling more people to put AI tools to practical use.

    The India AI Impact Summit is expected to draw a wide cross-section of global technology and political leaders, including Anthropic CEO Dario Amodei, Sundar Pichai of Google, and senior Indian business figures such as Mukesh Ambani and Nandan Nilekani. Political leaders including Emmanuel Macron, Sheikh Khaled bin Mohamed bin Zayed Al Nahyan, and Luiz Inácio Lula da Silva are also expected to attend, spotlighting India’s ambition to position itself as a central player in global AI debates.

    For global AI firms, including OpenAI, the summit underscores how India’s vast user base is translating into growing influence over how the technology evolves.

    OpenAI did not respond to a request for comment.

Jagmeet Singh

  • Fei-Fei Li and Andrej Karpathy Back a New A.I. Use Case: Simulating Human Behavior


    A.I. pioneer Fei-Fei Li is lending her support to Simile’s effort to simulate human behavior at scale. John Nacion/Variety via Getty Images

    Every three months, public companies brace for analyst questions during quarterly earnings calls. But what if firms could predict these queries in advance and rehearse their responses? That’s one of the capabilities touted by Simile, a new A.I. startup spun out of Stanford and backed by acclaimed researcher Fei-Fei Li and OpenAI co-founder Andrej Karpathy.

    Simile emerged from stealth yesterday (Feb. 12) with $100 million in funding from a round led by Index Ventures. Alongside Li and Karpathy, the startup—which hasn’t disclosed its valuation—also counts investors including Quora co-founder Adam D’Angelo and Scott Belsky, a partner at A24 Films.

    Li and Karpathy both have close ties to Simile's founding team, which includes Stanford researchers Joon Park, Percy Liang and Michael Bernstein. Li is the co-director of Stanford's Human-Centered A.I. Institute and advised Karpathy during his Ph.D. studies at the university. She is widely known for foundational work such as ImageNet, a large-scale image database that helped drive major breakthroughs in computer vision. Karpathy and Bernstein also contributed to that project.

    Simile’s mission of using A.I. to reflect and model societal behavior taps into an underexplored research area, according to Karpathy, who previously worked at OpenAI and Tesla before launching his own education-focused A.I. startup. While large language models typically present a single, cohesive personality, Karpathy argues they are actually trained on data drawn from vast numbers of people. “Why not lean into that statistical power: Why simulate one ‘person’ when you could try to simulate a population?” he wrote in a post on X.

    That idea underpins Simile’s broader goal. The Palo Alto-based startup aims to simulate the real-world effects of major decisions, from public policy to product launches, across virtual populations that mirror human behavior. The team has already tested this concept on a smaller scale through projects like Smallville, a 2023 Stanford experiment in which 25 autonomous A.I. agents interacted in a virtual environment.

    Now, Simile is scaling the approach for business use. After spending the past seven months developing its model, the company is already working with clients on applications ranging from product development to litigation forecasting. CVS Health Corporation, for example, uses Simile to create simulated focus groups, while Gallup uses the platform to build digital polling panels. For earnings calls, Simile can predict about 80 percent of the questions that analysts ultimately ask, said Park, the startup's CEO, during a recent appearance on TBPN.

    At present, Simile’s models are based on data from hundreds of thousands of people who have signed up for its studies. Over time, the company hopes to expand that to simulations representing the world’s entire population of roughly 8 billion people.

    Simile joins a growing wave of A.I. companies focused on using simulation to model real-world scenarios. Much of the existing research in this space has centered on physical systems, such as robotics and autonomous vehicles, through “world model” platforms developed by firms like Google and Nvidia.

    One of the most prominent figures in world models is Li herself. In 2024, she took a leave of absence from Stanford to launch World Labs, a startup that builds 3D digital environments from image and text prompts. The company has raised $230 million to date and is valued at more than $1 billion.

Alexandra Tremayne-Pengelly