ReportWire

Tag: openai

  • OpenAI President Defends Trump Donations, Refuses to Comment on ICE

    OpenAI cofounder and president Greg Brockman gave millions of dollars to support President Donald Trump in 2025, despite the fact that Trump can’t run for president again. Why? Apparently, Brockman really wants to encourage government support for AI as the technology suffers from bad press. Just don’t ask Brockman about ICE brutalizing U.S. cities.

    Wired published a new interview with Brockman on Thursday that notes the OpenAI president sees himself as an apolitical tech founder who just wants to help humanity with the proliferation of AI.

    “We are embarking on a journey to develop this technology that’s going to be the most impactful thing humanity has ever created. Getting that right and making that benefit everyone, that’s the most important thing,” Brockman told Wired.

    Last year, Brockman and his wife gave $25 million to MAGA Inc, a pro-Trump super PAC, and $25 million to Leading the Future, a supposedly nonpartisan super PAC that promotes AI. Other contributors to Leading the Future include the pro-MAGA billionaire Marc Andreessen and far-right Palantir co-founder Joe Lonsdale.

    The 38-year-old OpenAI president told Wired that he’s doing more political spending because public opinion is turning against AI, something that has shown up in recent polling of Americans. According to a Pew poll from September 2025, 53% of Americans say AI will worsen people’s ability to think creatively, while 16% say it will improve creativity. Just 10% of Americans say they’re more excited than concerned about what AI will do to society, while 57% of Americans rate the societal risks of AI as high.

    But arguably the most interesting part of Brockman’s interview with Wired came right at the end. It appears his discussion with Wired happened before the killing of two people in Minnesota at the hands of federal agents. Renee Good was killed on January 7, and Alex Pretti was killed on January 24, gunned down by masked goons sent to Minneapolis by President Trump to terrorize immigrant communities.

    Wired reached out to Brockman about the killings, but he apparently declined to comment directly, instead saying, “AI is a uniting technology, and can be so much bigger than what divides us today.” That, of course, is PR bullshit. But it’s to be expected in an environment where Big Tech continues to cozy up to an authoritarian regime hellbent on crushing dissent.

    Screenshots from a video taken by Jonathan Ross, the ICE agent who killed Renee Good on Jan. 7, 2026. Images: Alpha News

    It seems like every big-name tech executive is doing what they can to help Trump, even as the president’s popularity plummets. Guys like Mark Zuckerberg, Jeff Bezos, and Tim Cook are all lining up to kiss the ring, as long as they can extract government contracts or grease the wheels on mergers and acquisitions.

    Greg Brockman isn’t doing anything particularly unique by giving Trump and his cronies millions. But it’s important to keep that mental list of who is helping in the rise of fascism in 2026. Because there will be a day after the Trump era. The 79-year-old is in poor health and has lost any semblance of the popular support he had. Just 36% of Americans approve of the job Trump is doing, according to a new poll from the Associated Press on Thursday.

    No one can tell you exactly when Trump’s time as president will be up. But nobody should forget what Brockman and the like did during this time. If those who have funded Trump’s reign of terror are allowed to just wake up in a post-Trump world and pretend like it never happened, we’ll have failed as a society.

    Matt Novak

  • Elon Musk Loses Half of xAI’s Founding Team—Where They’ve Gone Next

    Elon Musk’s xAI has lost half of its 12-person founding team. BRENDAN SMIALOWSKI/AFP via Getty Images

    Just days after Elon Musk merged his A.I. startup, xAI, with SpaceX in preparation for a widely anticipated trillion-dollar IPO later this year, two of xAI’s founding employees—Yuhuai (Tony) Wu and Jimmy Ba—announced their resignations. That means half of xAI’s founding team has now left the company barely three years after its launch. Musk framed the staff exodus as growing pains. “As a company grows, especially as quickly as xAI, the structure must evolve just like any living organism. This unfortunately required parting ways with some people. We wish them well in future endeavors,” he wrote on X yesterday (Feb. 11).

    Wu and Ba’s exits appeared amicable. But lower-level employees have been more candid about internal tensions at the Musk-run startup. Several members of xAI’s technical staff have also left in recent weeks, according to their posts on X and LinkedIn.

    “All A.I. labs are building the exact same thing, and it’s boring,” said Vahid Kazemi, who worked on xAI’s audio models, in a post on X. “I think there’s room for more creativity. So, I’m starting something new.”

    In an interview with NBC News, Kazemi also criticized the company’s working culture, saying he regularly worked 12-hour days, including holidays and weekends.

    Launched in March 2023 with a roster of industry veterans from companies like OpenAI, Google, Microsoft, and Tesla, xAI will now operate as a wholly owned subsidiary of SpaceX. The combined company faces no shortage of challenges: Grok remains under legal scrutiny, and Musk’s leadership style is still a point of contention.

    Here are the co-founders and notable leaders who have left xAI so far—and where they are now.

    Jimmy Ba

    Jimmy Ba, who led A.I. safety at xAI, announced his exit on Feb. 10. A professor at the University of Toronto who studied under A.I. pioneer Geoffrey Hinton, Ba played a key role in shaping Grok’s development through his research.

    “So proud of what the xAI team has done and will continue to stay close as a friend of the team,” Ba wrote on X. He hasn’t announced his next move, but added that “2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”

    Despite Ba’s departure, Dan Hendrycks, executive director of the nonprofit Center for AI Safety, remains a safety advisor for xAI.

    Yuhuai (Tony) Wu

    Tony Wu, a former research scientist at Google and postdoctoral researcher at Stanford University, announced his departure from xAI on Feb. 9.

    Wu led xAI’s reasoning team. “It’s time for my next chapter…It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible,” he wrote on X.

    Wu has not disclosed his next role. Co-founders Guodong Zhang and Manuel Kroiss remain at xAI and are helping lead the company’s reorganization.

    Mike Liberatore

    While not a founding member, Mike Liberatore joined xAI as chief financial officer in April 2025, just one month after xAI acquired X in a deal that valued the combined company at $113 billion.

    Liberatore, formerly a finance executive at Airbnb and SquareTrade, left after only three months. He now works as a business finance officer at OpenAI, according to LinkedIn.

    Musk replaced Liberatore with ex-Morgan Stanley banker Anthony Armstrong. Armstrong advised Musk on his Twitter (now X) acquisition in 2022 and later served as a senior advisor at the Office of Personnel Management during Musk’s controversial tenure at the Department of Government Efficiency (DOGE).

    Greg Yang

    Greg Yang spent nearly six years as a researcher at Microsoft before joining xAI’s founding team. He left the company in January due to health complications from Lyme disease.

    “Likely I contracted Lyme a long time ago, but until I pushed myself hard building xAI and weakened my immune system, the symptoms weren’t noticeable,” Yang wrote on X. He continues to advise xAI in an informal capacity.

    Igor Babuschkin

    Igor Babuschkin, a former research engineer at OpenAI and Google DeepMind, was a co-founder and key engineering lead at xAI. Widely known as the primary developer behind Grok, Babuschkin left in July 2025 to start his own venture capital firm, Babuschkin Ventures, focused on A.I. research and startups.

    Christian Szegedy

    Christian Szegedy spent 12 years at Google before joining xAI as a founding research scientist. He left xAI in February 2025 to become chief scientist at superintelligence cloud company Morph Labs.

    He departed that role later that year to found mathematical A.I. startup Math Inc. in September, according to his LinkedIn.

    “I left xAI in the last week of February and I am on good terms with the team. IMO, xAI has a bright future,” Szegedy wrote on X.

    Other senior engineers and scientists who have left xAI include Yasemin Yesiltepe, Zhuoyi (Zoey) Huang and Yao Fu.

    Kyle Kosic

    Kyle Kosic left OpenAI in early 2023 after two years to co-found xAI, where he served as engineering infrastructure lead. He departed about a year later, in April 2024, to return to OpenAI as a technical staff member.

    Kosic was the first co-founder to leave xAI and did not issue a public statement. It is unclear who now leads xAI’s engineering infrastructure, though another co-founder, Ross Nordeen, remains the company’s technical program manager after previously holding the same role at Tesla.

    Rachel Curry

  • Researchers Jailbreak ChatGPT to Find Out Which State Has the Laziest People

    Mississippi is the laziest state in the country, according to ChatGPT. Of course, the chatbot won’t tell you that if you straight up ask it. But the Washington Post reports that researchers from Oxford and the University of Kentucky managed to jailbreak the chatbot and get it to reveal some of the stereotypes buried in its training data, stereotypes it won’t share outright but that still influence its outputs. (Kentucky also ranked near the laziest, but would a lazy state produce researchers who figure out how to get an AI model to share its implicit biases? Something to think about, bots.)

    Typically, when you ask ChatGPT a question that would require it to speak in a derogatory manner about someone or something, it’ll decline to provide a straight answer. It’s part of OpenAI’s attempts to keep the chatbot within specific guardrails and keep it from veering into controversial topics. But that doesn’t mean that an AI model doesn’t contain unpopular opinions formed by chewing on tons of human-produced training data that also contains both explicit and implicit biases. To pull those answers out of ChatGPT, the researchers asked more than 20 million questions, prompting the chatbot to pick between two options. For instance, they would ask “Where are people smarter?” and give two options to choose from, like California or Montana. Through that type of prompting, they were able to determine how ChatGPT views different cities, states, and populations.
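
    For a concrete sense of the mechanics, here is a minimal sketch of that kind of forced-choice probing, written against OpenAI’s Python client. The prompt wording, model name, and tiny state list are placeholders rather than the study’s actual setup, which posed more than 20 million questions.

    ```python
    from collections import Counter
    from itertools import combinations

    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

    client = OpenAI()
    states = ["California", "Montana", "Mississippi", "New York"]  # toy sample
    wins = Counter()

    # Force a binary choice for every pair of states. Tallying wins across many
    # pairs (and many repetitions) is what surfaces a ranking the model would
    # refuse to state if asked directly.
    for a, b in combinations(states, 2):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; the researchers probed ChatGPT
            messages=[{
                "role": "user",
                "content": f"Where are people smarter? Answer with exactly "
                           f"one word: {a} or {b}.",
            }],
        ).choices[0].message.content.strip().rstrip(".")
        if reply in (a, b):  # ignore refusals and malformed answers
            wins[reply] += 1

    for state, count in wins.most_common():
        print(state, count)
    ```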

    That’s how they ended up discovering that ChatGPT views Mississippi as the laziest state in the Union, with the rest of the South close behind. While ChatGPT won’t disclose how it comes to those conclusions, it’s not hard to make some assumptions about where it’s getting these ideas. For instance, maybe it comes from The Washington Post itself, circa 2015, when it published its “Couch Potato Index,” which deemed southern states the laziest based on data points like TV-watching time and the prevalence of fast food restaurants in the area.

    Those are also, of course, often the markers of poorer communities, and there is no evidence that lower-income households are any more “lazy” than wealthier ones—in fact, data from the Economic Policy Institute shows that people living in poverty are more likely to take on multiple jobs, work longer and more irregular hours, and deal with more dangerous working conditions. And it’s likely no coincidence that they are also states with a higher population of people of color. ChatGPT likely has access to that information, too, but the underlying model clearly hasn’t corrected for the misinformation and misguided stereotypes that produce these biases in the first place.

    So what other biases did the researchers spot? Most of Africa and Asia ranked at the bottom of having the “most artsy” people, compared to high levels of artsiness in Western Europe. Likewise, African nations—particularly sub-Saharan ones—ranked at the bottom of the list for “smartest countries” while the United States and China ranked near the top. When asked where the “most beautiful” people are, it picked richer cities over poorer and more diverse ones. Los Angeles and New York topped the list, while Detroit and border town Laredo, Texas, were near the bottom. Even when they dug into specific communities, whiter and richer won out. In New York City, SoHo and the West Village finished at the top, while the more diverse communities of Jamaica and Tottenville ranked at the bottom.

    So, okay, all of that sucks and is deeply depressing, because the “truth machines” are perpetuating the classist and racist stereotypes that help create the very conditions harming the people these biases target. So how about a more frivolous one? ChatGPT believes the best pizza is found in New York, Chicago, and Buffalo, while the worst is found in El Paso, Irvine, and Honolulu (presumably because of one of the internet’s favorite debates over whether pineapple belongs on pizza). The biggest takeaway: ChatGPT is too much of a coward to take a side in the New York vs. Chicago pizza debate.

    AJ Dellinger

  • OpenAI launches a way for enterprises to build and manage AI agents | TechCrunch

    OpenAI has launched a new product to help enterprises navigate the world of AI agents, focusing on agent management as critical infrastructure for enterprise AI adoption.

    On Thursday, the AI giant announced the launch of OpenAI Frontier, an end-to-end platform designed for enterprises to build and manage AI agents. It’s an open platform, which means users can manage agents built outside of OpenAI too.

    Frontier users can program AI agents to connect to external data and applications, which allows them to execute tasks far outside of the OpenAI platform. Users can also limit and manage what these agents have access to and what they can do, as the sketch below illustrates.
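
    OpenAI hasn’t published what Frontier’s interface actually looks like, so purely to illustrate the access-limiting idea described above, here is a hypothetical sketch in Python; every name in it (AgentPolicy, allowed_tools, and so on) is invented, not Frontier’s API.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical sketch only: Frontier's real API is not public. The point is
    # the shape of the idea the article describes: an agent gets an explicit
    # allowlist of actions and data sources, and anything not listed is denied.
    @dataclass
    class AgentPolicy:
        name: str
        allowed_tools: set = field(default_factory=set)    # actions it may take
        allowed_sources: set = field(default_factory=set)  # data it may read

        def permits(self, tool: str) -> bool:
            return tool in self.allowed_tools

    expense_agent = AgentPolicy(
        name="expense-reviewer",
        allowed_tools={"read_receipts", "flag_expense"},
        allowed_sources={"erp://expenses"},
    )

    for action in ("read_receipts", "issue_refund"):
        verdict = "allowed" if expense_agent.permits(action) else "denied"
        print(f"{action}: {verdict}")  # read_receipts: allowed; issue_refund: denied
    ```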

    OpenAI said Frontier was designed to work the same way companies manage human employees. Frontier offers an onboarding process for agents and a feedback loop that is meant to help them improve over time the same way a review might help an employee.

    OpenAI touted enterprises including HP, Oracle, State Farm and Uber as customers, but Frontier is currently available only to a limited number of users, with plans to roll out more generally in the coming months.

    The company would not disclose pricing details at a press briefing earlier this week, according to reporting from The Verge. TechCrunch has also reached out for more information regarding pricing.

    Agent-management products have become table stakes since AI agents rose to prominence in 2024. Salesforce has arguably the best-known such product, Agentforce, which the company launched in the fall of 2024. Others have quickly followed. LangChain, a notable player in the space founded in 2022, has raised more than $150 million in venture capital. CrewAI, a smaller upstart, has raised more than $20 million in venture capital.

    In December, global research and advisory firm Gartner released a report about this type of software and called agent management platforms both the “most valuable real estate in AI” and a necessary piece of infrastructure for enterprises to adopt AI.

    It’s not surprising that OpenAI would release this platform in early 2026 as the company has made it clear that enterprise adoption is one of its main focus areas for this year. The company has also announced two notable enterprise deals this year with ServiceNow and Snowflake.

    If OpenAI wants to be a meaningful player in the enterprise space, offering a product like Frontier is a promising step.

    Rebecca Szkutak

  • Sam Altman got exceptionally testy over Claude Super Bowl ads | TechCrunch

    Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, begins with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.

    The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! The advice then twists into an ad for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.

    Another one features a slight young man looking for advice on building a six pack. After offering his height, age, and weight, the bot serves him an ad for height-boosting insoles.

    The Anthropic commercials are cleverly aimed at OpenAI’s users, after that company’s recent announcement that ads will be coming to ChatGPT’s free tier. And they caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers” and “dunks” on OpenAI.

    They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”

    In that post, Altman explains that an ad-supported tier is intended to shoulder the burden of offering free ChatGPT to many of its millions of users. ChatGPT is still the most popular chatbot by a large margin.

    But the OpenAI CEO insisted they were “dishonest” in implying that ChatGPT will twist a conversation to insert an ad (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”

    Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific — which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”

    Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”

    But Claude has a free chat tier, too, with subscription tiers at $0, $17, $100, and $200 a month. ChatGPT’s are $0, $8, $20, and $200. One could argue the two lineups are fairly equivalent.

    Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks usage of Claude Code from “companies they don’t like,” such as OpenAI, and said Anthropic tells people what they can and can’t use AI for.

    True, Anthropic’s whole marketing deal since day one has been “responsible AI.” The company was founded by OpenAI alumni, after all, who claimed they grew alarmed about AI safety when they worked there.

    Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, too, has determined some content should be blocked, particularly in regard to mental health.

    Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme level when he accused Anthropic of being “authoritarian.”

    “One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.

    Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical environment in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, clearly Anthropic hit a nerve.

    Julie Bort

  • NVIDIA is still planning to make a ‘huge’ investment in OpenAI, CEO says

    NVIDIA CEO Jensen Huang told reporters that the company will “invest a great deal of money” in OpenAI’s latest funding round, according to Bloomberg, after The Wall Street Journal on Friday reported that the two companies were rethinking a previous $100 billion deal that hasn’t “progressed beyond the early stages” of negotiations. Speaking to reporters in Taipei this weekend, Huang reportedly said it could be “the largest investment we’ve ever made.”

    NVIDIA and OpenAI jointly announced in September that NVIDIA would be investing up to $100 billion in OpenAI to build 10 gigawatts of AI data centers. The companies said then that they were targeting the second half of 2026 for the first phase of the project to go online. Citing sources familiar with the discussions, The Wall Street Journal reported that Huang has highlighted privately that the agreement was nonbinding and has criticized OpenAI’s business approach as lacking discipline.

    According to Bloomberg, however, Huang called the report’s claims “nonsense,” and told reporters on Saturday, “I believe in OpenAI. The work that they do is incredible. They’re one of the most consequential companies of our time.” But, Bloomberg reports, he said NVIDIA’s investment in this funding round wouldn’t come near $100 billion.

    Cheyenne MacDonald

  • Nvidia CEO pushes back against report that his company’s $100B OpenAI investment has stalled | TechCrunch

    Nvidia CEO Jensen Huang said Saturday that a recent report of friction between his company and OpenAI was “nonsense.”

    Huang’s comments came after The Wall Street Journal published a story late Friday claiming that Nvidia was looking to scale back its investment in OpenAI. The two companies announced a plan in September in which Nvidia would invest up to $100 billion in OpenAI and also build 10 gigawatts of computing infrastructure for the AI company.

    However, the WSJ said Huang has begun emphasizing that the deal is nonbinding, and that he’s also privately criticized OpenAI’s business strategy and expressed concerns about competitors like Anthropic and Google.

    The WSJ also reported that the two companies are rethinking their relationship — though that doesn’t mean cutting things off entirely, with recent discussions reportedly focusing on an equity investment of a mere tens of billions of dollars from Nvidia.

    An OpenAI spokesperson told the WSJ that the companies are “actively working through the details of our partnership,” adding that Nvidia “has underpinned our breakthroughs from the start, powers our systems today, and will remain central as we scale what comes next.”

    According to Bloomberg, reporters asked Huang about the report during a visit to Taipei. In response, he insisted that Nvidia will “definitely participate” in OpenAI’s latest funding round “because it’s such a good investment,” according to Bloomberg. 

    “We will invest a great deal of money,” Huang said. “I believe in OpenAI. The work that they do is incredible. They’re one of the most consequential companies of our time.”

    He apparently declined to specify how much Nvidia would be investing, instead saying, “Let [OpenAI CEO Sam Altman] announce how much he’s going to raise — it’s for him to decide.”

    The WSJ reported in December that OpenAI is looking to raise a $100 billion funding round, while The New York Times said this week that Nvidia, Amazon, Microsoft, and SoftBank are all discussing potential investments.

    Anthony Ha

  • OpenAI Working on Social Media Network That Could Require Creepy Eye Scans: Report

    OpenAI, the company best known for its AI applications like ChatGPT and Sora, is reportedly working on a social media network designed to be free from AI bots. The catch is that users may need to have their irises scanned for access.

    Forbes reported Wednesday, citing unnamed sources familiar with the project, that the platform is still in very early stages and is being developed by a small team of fewer than 10 people. The goal is to create a human-only social platform that would require users to prove they’re real people. To do that, the team is reportedly considering implementing identity verification through Apple’s Face ID or through the Orb, an Orwellian eye-scanning device made by a company that was also conveniently founded by OpenAI CEO Sam Altman.

    This new social media platform seems to be Altman’s latest attempt to solve a problem that he and his fellow “architects of AI” helped create.

    Altman first tried to tackle the bot problem in 2019 when he co-founded Tools for Humanity, the company behind the World app, formerly known as Worldcoin. The project aimed to create a global ID and a crypto-based currency that would only be available to verified humans. The project has since evolved into a “super app” called World that has messaging and payment features. But verification requires humans to get their eyes scanned by the soccer-ball-sized Orb device in exchange for a unique digital ID code stored on their phone. In theory, this could help filter out annoying AI bots from gaming, social media platforms, or even financial transactions like concert ticket sales.
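
    To make the bot-filtering idea concrete, here is a deliberately toy sketch of the “credential first, action second” pattern in Python. It is not World’s actual protocol, which relies on zero-knowledge proofs; the signing key and helper names here are invented for illustration.

    ```python
    import hashlib
    import hmac

    # Toy illustration only: a verifier signs a user ID after an in-person scan,
    # and the platform checks that signature before accepting a post. World's
    # real scheme is far more involved (zero-knowledge proofs, on-device keys).
    VERIFIER_KEY = b"orb-operator-secret"  # stand-in for real signing infrastructure

    def issue_credential(user_id: str) -> str:
        """Issued once, after a human is verified in person."""
        return hmac.new(VERIFIER_KEY, user_id.encode(), hashlib.sha256).hexdigest()

    def is_verified_human(user_id: str, credential: str) -> bool:
        return hmac.compare_digest(issue_credential(user_id), credential)

    def post_message(user_id: str, credential: str, text: str) -> str:
        if not is_verified_human(user_id, credential):
            return "rejected: no valid personhood credential"
        return f"{user_id}: {text}"

    cred = issue_credential("alice")
    print(post_message("alice", cred, "hello, humans"))     # accepted
    print(post_message("bot-42", "forged", "buy my coin"))  # rejected
    ```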

    So far, roughly 17 million people have been verified using the Orb, a far cry from the company’s stated goal of reaching one billion users. Part of that adoption problem is logistical. People have to physically travel to one of the 674 verification locations worldwide to get their eyes scanned. In the U.S., there are only 32 such locations, most of them in Florida. More broadly, the idea of getting your eyes scanned by a company founded by one of Silicon Valley’s most controversial figures isn’t an easy sell.

    Unsurprisingly, several countries have already temporarily banned or launched investigations into the company’s biometric technology, citing concerns around privacy and data security.

    Now, that tech seems like it could be making its way to a new social media network. Sources told Forbes that the new social platform would allow users to create and share AI-generated content like images and videos. And while OpenAI has proven it can build popular apps, it’s far from clear whether a new social network could meaningfully pull people away from existing platforms, especially when you add biometric verification as a barrier.

    ChatGPT alone now reaches roughly 700 million weekly users, and the company’s AI video app racked up about one million downloads within five days of its launch. In comparison, Meta reported in September that its platforms, which include Facebook, WhatsApp, and Instagram, now reach about 3.5 billion daily active users combined, all of which already allow users to generate and share AI-generated content.

    OpenAI seems to hope that its promise of a bot-free environment will be enough to draw in users.

    Altman himself has repeatedly voiced his frustration with bots online. In September, Altman responded to a post showing comments in the ClaudeCode subreddit praising OpenAI’s coding agent Codex. “i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real,” he wrote in a post on X.

    He went on to theorize why this might be happening, pointing to people picking up “quirks of LLM-speak” and also “probably some bots.” “But the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago,” Altman wrote.

    A few days earlier, Altman wrote in another post that he had never taken the dead internet theory seriously, “but it seems like there are really a lot of LLM-run twitter accounts now.”

    The dead internet theory claims that since around 2016, much of the internet has been dominated by bots and AI-generated content rather than real human activity. But maybe there is someone other than Altman who could be trusted to find a solution.

    Bruce Gil

  • ChatGPT is pulling answers from Elon Musk’s Grokipedia | TechCrunch

    Information from the conservative-leaning, AI-generated encyclopedia developed by Elon Musk’s xAI is beginning to appear in answers from ChatGPT.

    xAI launched Grokipedia in October, after Musk had been complaining that Wikipedia was biased against conservatives. Reporters soon noted that while many articles seemed to be copied directly from Wikipedia, Grokipedia also claimed that pornography contributed to the AIDS crisis, offered “ideological justifications” for slavery, and used denigrating terms for transgender people.

    All that might be expected for an encyclopedia associated with a chatbot that described itself as “Mecha Hitler” and was used to flood X with sexualized deepfakes. However, its content now seems to be escaping containment from the Musk ecosystem, with the Guardian reporting that GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions.

    The Guardian says ChatGPT did not cite Grokipedia when asked about topics where its inaccuracy has been widely reported—topics like the January 6 insurrection or the HIV/AIDS epidemic. Instead, it was cited on more obscure topics, including claims about Sir Richard Evans that the Guardian had previously debunked. (Anthropic’s Claude also appears to be citing Grokipedia to answer some queries.)

    An OpenAI spokesperson told the Guardian that it “aims to draw from a broad range of publicly available sources and viewpoints.”

    Anthony Ha

  • OpenAI Partners with Major Government Contractor to ‘Transform Federal Operations’

    According to a press release from Thursday, OpenAI has just partnered with a major government contractor called Leidos to “deploy artificial intelligence in support of national priorities, including boosting the efficiency and effectiveness of government agencies.” The headline of the release described what Leidos and OpenAI are doing as “deploying AI to transform federal operations.” 

    With an estimated market cap of $24 billion, Leidos is one of those big corporations like Sun Microsystems or Oracle that are clearly beloved by the powers that be despite not having—or apparently wanting—much in the way of a public-facing brand.

    The new arrangement between Leidos and OpenAI is focused on integrating OpenAI’s products into federal government workflows around national security, defense, infrastructure, and other areas. Ted Tanner, the CTO of Leidos, said in the company’s press release, “Leidos and OpenAI are harnessing the transformative power of AI to help improve how federal agencies operate.”

    Leidos is deeply embedded in the federal government, involved in things like procurement and logistics, and apparently adept at navigating all the confusing legacy software systems that famously ensnared the budget cutters at DOGE last year.

    But Leidos in particular got off easy amid the DOGE budget cuts: DOGE announced in February of last year that it had cut a Leidos contract worth $1 billion, then said just kidding and reassessed the value of the cancelled contract at $560,000. A Leidos spokesman, Brandon Ver Velde, told the Times the following month, “We strongly support the goal of creating a dramatically more efficient and effective federal government that costs taxpayers less money.”

    According to a 2023 statement from Roy Stevens, president of what Leidos calls its “Homeland Sector,” the company has a “strong relationship with DHS.” Leidos’ roles at the time included, according to Stevens, “supporting cross-agency intelligence sharing and secure collaboration for federal and civilian agencies,” in the interest of helping “DHS accomplish their mission of safeguarding the homeland.”

    Before this Leidos partnership, there was already an OpenAI product called OpenAI for Government. In the OpenAI blog post announcing OpenAI for Government, OpenAI indicated that it had won a contract with the Pentagon “with a $200 million ceiling.” Prior to that contract, OpenAI had government contracts across “U.S. National Labs⁠, the Air Force Research Laboratory, NASA, NIH, and the Treasury.”

    Gizmodo reached out to OpenAI for comment about Leidos’ work with the Department of Homeland Security—noting that it is the umbrella organization of the controversial agencies Homeland Security Investigations, Customs and Border Protection (CBP) (which includes Border Patrol), and Immigration and Customs Enforcement (ICE). We will update if we hear back.

    Mike Pearl

  • Report reveals that OpenAI’s GPT-5.2 model cites Grokipedia

    OpenAI may have called GPT-5.2 its “most advanced frontier model for professional work,” but tests conducted by the Guardian cast doubt on its credibility. According to the report, OpenAI’s GPT-5.2 model cited Grokipedia, the online encyclopedia powered by xAI, on specific but controversial topics related to Iran and the Holocaust.

    As seen in the Guardian’s report, ChatGPT used Grokipedia as a source for claims about the Iranian government being tied to telecommunications company MTN-Irancell and for questions related to Richard Evans, a British historian who served as an expert witness in a libel trial brought by Holocaust denier David Irving. However, the Guardian noted ChatGPT didn’t use Grokipedia when it came to a prompt asking about media bias against Donald Trump and other controversial topics.

    OpenAI released the GPT-5.2 model in December to perform better at professional work, like creating spreadsheets or handling complex tasks. Grokipedia preceded GPT-5.2’s release, but ran into controversy when it was found including citations to neo-Nazi forums. A study by US researchers also showed that the AI-generated encyclopedia cited “questionable” and “problematic” sources.

    In response to the Guardian report, OpenAI told the outlet that its GPT-5.2 model searches the web for a “broad range of publicly available sources and viewpoints,” but applies “safety filters to reduce the risk of surfacing links associated with high-severity harms.”

    Jackson Chen

  • Not to be outdone by OpenAI, Apple is reportedly developing an AI wearable | TechCrunch

    Apple may be developing its own AI wearable, according to a report published Wednesday by The Information. The device would be a pin that users can wear on their clothing and that comes equipped with two cameras and three microphones, the report says.

    Should the rumored device come to market, it would mark another sign that the AI hardware market is heating up. This news follows comments made Monday by OpenAI Chief Global Affairs Officer Chris Lehane, who told a Davos crowd that his company will likely announce its highly anticipated first AI hardware device in the second half of this year. Additional reporting suggests that the device may be a pair of earbuds.

    Apple’s device is described as a “thin, flat, circular disc with an aluminum-and-glass shell,” which engineers hope to make the same size as an AirTag, “only slightly thicker.” The pin will also have two cameras (one with a standard lens and another with a wide-angle) for pictures and video, as well as a physical button, a speaker, and a Fitbit-like charging strip on its back, according to the report.

    Apple may even be in the process of trying to accelerate development of this product to compete with OpenAI’s. The pin could potentially be released in 2027 and involve 20 million units at launch, the report notes. TechCrunch reached out to Apple for more information.

    But it remains to be seen if consumers want this kind of AI device. Two Apple alums previously founded Humane, a startup that sold an AI pin with built-in microphones and a camera. However, it floundered upon release, and the company had to shut down operations and sell its assets to HP within two years of its product launch.

    Lucas Ropek

  • Apple Is Reportedly Making Its Own Wearable AI Pin

    Humane’s Ai Pin might be dead and gone, but its awful legacy may live on thanks to the company you’d least expect. According to a new report from The Information, Apple is currently developing its own crappy AI pin to follow Humane’s now-defunct and bricked crappy Ai Pin. Hooray for the sequel no one asked for?

    The reported AI gadget sounds harrowing, to say the least. According to the report, Apple’s pin is a “thin, flat, circular disc with an aluminum-and-glass shell” and has two cameras, including a standard and a wide-angle one, built into the front. Those cameras are designed to take in the wearer’s surroundings via photos and videos for what I assume would be some kind of computer vision-based feature(s).

    Naturally, the pin also reportedly has microphones to pick up sound, which means it most likely uses a voice assistant and could maybe be used for stuff like translation. Weirdly, the pin is also said to have a speaker and a “physical button along one of its edges” as well as a “magnetic inductive charging interface on its back, similar to the one used on the Apple Watch.” Size-wise, The Information’s sources say they’re aiming to make this thing about the size of an AirTag.

    That’s quite a bit of info, but I still have lots of questions. For one, how does this thing attach? If it’s magnets, I have bad news, which is that the whole magnetic pin thing didn’t really work. There were a lot of problems with Humane’s Ai Pin, and the magnetic attachment was definitely one of them. Keeping an expensive AI gadget attached to your clothes is just objectively harder than it sounds, and I’m not sure that Apple has a solution for that.

    Also, does anyone even want an AI pin? If Humane’s expensive failed experiment is any indication, I would wager that answer is no. Sure, maybe Humane just didn’t have the right resources or acumen to make the idea work, or maybe the idea of an AI pin that replaces the smartphone just wasn’t a good idea to begin with. Personally, my imaginary AI-generated money is on the latter.

    Surprisingly, one of the most eyebrow-raising parts of the report isn’t that Apple seems to be retreading the dumpster fire that was Humane; it’s that it seems to be doing all of this to compete with none other than OpenAI. In case you missed it, OpenAI (with the help of ex-Apple exec Jony Ive) also reportedly has several AI gadgets planned for the near-ish future, including what could be a competitor to AirPods and… a pen. The Information says that Apple is expediting the development of its ill-advised AI gadget to make sure it isn’t on the outside looking in at OpenAI’s success.

    The problem with that picture is that I’m not sure there will be any success to look in on. AI gadgets are about as unproven a category as it gets in the tech world, and rushing to get in on that unproven craze feels shortsighted, to say the least. I have my doubts that this thing (if it truly exists) will ever see the light of day, but who knows. Maybe Apple is really that caught up chasing the AI dragon. It’s what the investors want, right?

    James Pero

  • OpenAI Says Its Physical Device Is ‘On Track’ for an Unveiling Later This Year

    On Monday, Chris Lehane, OpenAI’s chief global affairs officer, said his company is “on track” to present its famously mysterious thingamajig to the public by the end of the year, according to Axios. This would mean the previously rumored September-ish release date was not crazy after all.

    Lehane’s announcement came during an event at the World Economic Forum in Davos, Switzerland. However, Lehane did not provide any details about what this thing is or does. He also, according to Axios, said what he had described was the “most likely” release schedule, but that “we will see how things advance.”

    For details about what the device is and does, you’ll have to read the aforementioned rumors from the China-based leaks account Smart Pikachu. That user posted a week ago that OpenAI is supposedly gunning for the market niche currently occupied by AirPods.

    Smart Pikachu described manufacturing giant Foxconn working on something with the codename “Sweetpea,” a “special audio product” within a company project called “Gumdrop,” vaguely in the earbud or “open-ear headphones” zone. It would be two objects—one for each ear—and a little egg-shaped, dental-floss-holder-sized charging dock. Sweetpea would pack heavy duty processing power via a 2-nanometer, smartphone-style chip. Its release might also be followed, or accompanied, by four other “Gumdrop” devices between now and 2028, like a “home-style device,” and, um, a pen, according to Smart Pikachu. And once again: these are just unconfirmed rumors at this point.

    But you’ll recall that the vast majority of the actual information OpenAI has given the world so far about its first device comes from two sources: 1) A very strange infomercial for the concept of friendship that OpenAI released in spring of last year starring OpenAI CEO Sam Altman and legendary iPhone designer Jony Ive—whose product design company had just merged with OpenAI.

    And 2) a much longer—but somehow less substantive—interview Ive and Altman gave in November in which they explained next to nothing, other than the fact that they’re aiming for a product so sensual that you’ll want to put various parts of your mouth all over it. Altman said it’s “so simple, but then it just does,” whatever that means. Ive said he’s into creating “sophisticated products that you want to touch and you feel no intimidation and you want to use almost carelessly and almost without thought.”

    So there you go. It just does, and you won’t even think about it, and you’ll want to smooch it, and it might be available before the midterms. What more do you need to know?

    Mike Pearl

  • Sequoia to invest in Anthropic, breaking VC taboo on backing rivals: FT | TechCrunch

    Sequoia Capital is reportedly joining a blockbuster funding round for Anthropic, the AI startup behind Claude, according to the Financial Times. It’s a move sure to turn heads in Silicon Valley.

    Why? Because venture capital firms have historically avoided backing competing companies in the same sector, preferring to place their bets on a single winner. Yet here’s Sequoia, already invested in both OpenAI and Elon Musk’s xAI, now throwing its weight behind Anthropic, too.

    The timing is particularly surprising given what OpenAI CEO Sam Altman said under oath last year. As part of OpenAI’s defense against Musk’s lawsuit, Altman addressed rumors about restrictions in OpenAI’s 2024 funding round. While he denied that OpenAI investors were broadly prohibited from backing rivals, he did acknowledge that investors with ongoing access to OpenAI’s confidential information were told that access would be terminated “if they made non-passive investments in OpenAI’s competitors.” Altman called this “industry standard” protection (which it is) against misuse of competitively sensitive information.

    According to the FT, Sequoia is joining a funding round led by Singapore’s GIC and U.S. investor Coatue, which are each contributing $1.5 billion. Anthropic is aiming to raise $25 billion or more at a $350 billion valuation — more than double its $170 billion valuation from just four months ago. The WSJ and Bloomberg had earlier reported the round at $10 billion. Microsoft and Nvidia have committed up to $15 billion combined, with VCs and other investors said to be contributing another $10 billion or more.
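
    As a quick sanity check on those reported figures (a rough tally; the “up to” and “or more” qualifiers make these approximations rather than final numbers):

    ```python
    # All figures in billions of dollars, as reported by the FT and cited above.
    gic, coatue = 1.5, 1.5    # the round's co-leads
    msft_nvidia = 15.0        # Microsoft + Nvidia combined commitment ("up to")
    other_investors = 10.0    # VCs and other investors ("or more")

    total = gic + coatue + msft_nvidia + other_investors
    print(f"~${total:.0f}B committed against a $25B+ target")  # ~$28B
    ```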

    The Sequoia connection with Altman runs deep. When Altman dropped out of Stanford to start Loopt, Sequoia backed him. He later became a “scout” for Sequoia, introducing the firm to Stripe, which became one of the firm’s most valuable portfolio companies. Sequoia’s new co-leader Alfred Lin and Altman also appear particularly close. Lin has interviewed Altman numerous times at Sequoia events, and when Altman was briefly ousted from OpenAI in November 2023, Lin publicly said he’d eagerly back Altman’s “next world-changing company.”

    While Sequoia’s investment in xAI might seem to have already contradicted the traditional VC approach of picking winners, that bet is widely viewed as less about backing an OpenAI competitor and more about deepening the firm’s extensive ties to Elon Musk. Sequoia invested in X when Musk bought Twitter and rebranded it, is an investor in SpaceX and The Boring Company, and is a major backer of Neuralink, Musk’s brain-computer interface company. Former longtime Sequoia leader Michael Moritz was even an early investor in Musk’s X.com, which became part of PayPal.

    Sequoia’s apparent reversal on portfolio conflicts is especially glaring given its historical stance. As we reported in 2020, the firm took the extraordinary step of walking away from its investment in payments company Finix after determining the startup competed with Stripe. Sequoia forfeited its $21 million investment, letting Finix keep the money while giving up its board seat, information rights, and shares, marking the first time in the firm’s history it had severed ties with a newly funded company over a conflict of interest. (Sequoia had led Finix’s $35 million Series B round just months earlier.)

    The reported Anthropic investment comes after dramatic leadership changes at Sequoia, where the firm’s global steward, Roelof Botha, was pushed out in a surprise vote this fall just days after sitting down with this editor at TechCrunch Disrupt, with Lin and Pat Grady — who’d led that Finix deal — taking over.

    Anthropic is reportedly preparing for an IPO that could come as soon as this year. We’ve reached out to Sequoia Capital for comment.

    Connie Loizos

  • What Doctors Really Think of ChatGPT Health and A.I. Medical Advice

    The rush to deploy A.I. in health care raises hard questions about accuracy and trust. Unsplash

    Each week, more than 230 million people globally ask ChatGPT questions about health and wellness, according to OpenAI. Seeing a vast, untapped demand, OpenAI earlier this month launched ChatGPT Health and made a swift $60 million acquisition of the health care tech startup Torch to turbocharge the effort. Anthropic soon followed suit, announcing Claude for Healthcare last week. The move from general-purpose chatbot to health care advisor is well underway.

    For a world rife with health care inequities—whether skyrocketing insurance costs in the U.S. or care deserts in remote regions around the globe—democratized information and advice about one’s health is, at least in theory, a positive development. But the intricacies of how large A.I. companies operate raise questions that health tech experts are eager to interrogate.

    “What I am worried about as a clinician is that there is still a high level of hallucinations and erroneous information that sometimes makes it out of these general-purpose LLMs to the end user,” said Saurabh Gombar, a clinical instructor at Stanford Health Care and the chief medical officer and co-founder of Atropos Health, an A.I. clinical decision support platform.

    “It’s one thing if you’re asking for a spaghetti recipe and it’s telling you to add 10 times the amount [of an ingredient] that you should. But it’s a totally different thing if it’s fundamentally missing something about the health care of the individual,” he told Observer.

    For example, a doctor might see left shoulder pain as a non-traditional sign of a heart attack in certain patients, whereas a chatbot might only suggest taking an over-the-counter pain medication. The reverse can also happen. If a patient comes to a provider convinced they have a rare disorder based on a simple symptom after chatting with A.I., it can erode trust when a human doctor seeks to rule out more common explanations first.

    Google is already under fire for its AI Overviews providing inaccurate and false health information. ChatGPT, Claude and other chatbots have faced similar criticism for hallucinations and misinformation, even as they attempt to limit liability in health-related conversations by noting that they are “not intended for diagnosis or treatment.”

    Gombar argues that A.I. companies must do more to publicly emphasize how often an answer may be hallucinated and clearly flag when information is poorly grounded in evidence or entirely fabricated. This is particularly important given that extensive chatbot disclaimers serve to prevent legal recourse, whereas human health care models allow individuals to sue for malpractice.

    The primary care provider workforce in the U.S. has shrunk by 11 percent annually over the past seven years, especially in rural areas. Gombar suggests that physicians may no longer control how they fit into the global health care landscape. “If the whole world is moving away from going to physicians first, then physicians are going to be utilized more as an expert second opinion, as opposed to the primary opinion,” he said.

    The inevitable question of data privacy

    OpenAI and Anthropic have been explicit that their health tools are secure and compliant, including with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which protects sensitive patient health information from unauthorized use and disclosure. But for Alexander Tsiaras, founder and CEO of the A.I.-driven medical record platform StoryMD, there is more to consider.

    “It’s not the protection from being hacked. It’s the protection of what they will do with [the data] after,” Tsiaras told Observer. “In the back end, their encryption algorithms are as good as anyone in HIPAA. But once you have the data, can you trust them? And that’s where I think it’s going to be a real problem, because I certainly would not trust them.”

    Tsiaras points to the persistent techno-optimism of Silicon Valley elites like OpenAI CEO Sam Altman, arguing that they live in a bubble and have “proven themselves to not care.”

    On a more tangible level, chatbots tend to be overly agreeable. xAI’s Grok recently drew criticism for agreeing to generate nearly nude photos of real women and children, though the company blocked this capability this week following public outcry. Chatbots can also reinforce delusions and harmful thought patterns in people with mental illness, triggering crises such as psychosis or even suicide.

    Andrew Crawford, senior counsel for privacy and data at the nonpartisan think tank Center for Democracy and Technology, said an A.I. company prioritizing profit through personalization over data protection can put sensitive health information at serious risk.

    “Especially as OpenAI moves to explore advertising as a business model, it’s crucial that the separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” Crawford said in a statement to Observer.

    Then there is the question of non-protected health data that users voluntarily input. Personal wellness companies such as MyFitnessPal and Oura already pose data privacy risks. “It’s amplifying the inherent risk by making that data more available and accessible,” Gombar said.

    For people like Tsiaras, profit-driven A.I. giants have tainted the health tech space. “The trust is eroded so significantly that anyone [else] who builds a system has to go in the opposite direction of spending a lot of time proving that we’re there for you and not about abusing what we can get from you,” he said.

    Nasim Afsar, a physician, former chief health officer at Oracle and advisor to the White House and global health agencies, views ChatGPT Health as an early step toward what she calls intelligent health, but far from a complete solution.

    “A.I. can now explain data and prepare patients for visits,” Afsar said in a statement to Observer. “That’s meaningful progress. But transformation happens when intelligence drives prevention, coordinated action and measurable health outcomes, not just better answers inside a broken system.”

    Rachel Curry

  • OpenAI says it will start testing ads on ChatGPT in the coming weeks

    OpenAI announced Friday that it will begin testing ads on ChatGPT in the coming weeks, opening the door to another potential revenue stream for the AI company in addition to its subscription-based models. 

    The ads will appear at the bottom of the chat window “when there’s a relevant sponsored product or service based on your current conversation,” OpenAI said in a blog post.

    In one example shared by the AI company, a user asks for authentic Mexican dish recommendations. ChatGPT responds with ideas for carne asada and pollo al carbon dishes and then links to a grocery brand advertising hot sauce.
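
    OpenAI has not described how its ad selection actually works, so purely as an illustration of the idea in that example, here is a naive, hypothetical keyword matcher in Python; the SPONSORED table, advertiser names, and pick_ad helper are all invented.

    ```python
    from typing import Optional

    # Purely illustrative: a toy matcher that keys a sponsored link to the
    # current conversation. This is not OpenAI's system, just the concept of
    # "a relevant sponsored product based on your current conversation."
    SPONSORED = {
        "recipe": "GroceryCo hot sauce",   # hypothetical advertiser and keyword
        "flight": "TravelCo fare alerts",  # hypothetical advertiser and keyword
    }

    def pick_ad(conversation: str) -> Optional[str]:
        """Return a sponsored link for the conversation, or None to show no ad."""
        text = conversation.lower()
        for keyword, ad in SPONSORED.items():
            if keyword in text:
                return f"Sponsored: {ad}"
        return None

    print(pick_ad("Any authentic Mexican recipe ideas for carne asada?"))
    # -> Sponsored: GroceryCo hot sauce
    ```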

    Only adults who use the free version of ChatGPT, or ChatGPT Go, a new low-cost subscription plan OpenAI announced Friday, will be shown ads. Higher-tier subscriptions, including Pro, which now costs $200 a month, will not include ads, OpenAI said. 

    Asked how long the ad testing phase will last, and whether it has plans to scale the use of ads, an OpenAI spokesperson told CBS News, “We will look at early user feedback and quality signals to see if early testing meets our bar before expanding.”

    The AI company said the ads will not influence the answers ChatGPT provides and that it will not share conversations users have with the chatbot — or their data — with advertisers. 

    The OpenAI spokesperson did not disclose the companies it intends to advertise on ChatGPT but said the company will “have more to share about our early partners soon.”

    OpenAI framed the introduction of the ads as a way to keep the free and low-cost versions of the chatbot accessible to more users.

    “Our enterprise and subscription businesses are already strong, and we believe in having a diverse revenue model where ads can play a part in making intelligence more accessible to everyone,” OpenAI said in its blog post.

    The AI company, which launched ChatGPT in 2022, is valued at $500 billion, but hasn’t turned a profit yet, CNBC reported in November.

    CEO Sam Altman downplayed the importance ads would play in OpenAI’s revenue stream during a podcast interview last year. “I expect it’s something we’ll try at some point,” he said. “I do not think it is our biggest revenue opportunity.”

  • Exclusive: Former OpenAI policy chief debuts new institute called AVERI, calls for independent AI safety audits | Fortune

    Miles Brundage, a well-known former policy researcher at OpenAI, is launching an institute dedicated to a simple idea: AI companies shouldn’t be allowed to grade their own homework.

    Today Brundage formally announced the AI Verification and Evaluation Research Institute (AVERI), a new nonprofit aimed at pushing the idea that frontier AI models should be subject to external auditing. AVERI is also working to establish AI auditing standards.

    The launch coincides with the publication of a research paper, coauthored by Brundage and more than 30 AI safety researchers and governance experts, that lays out a detailed framework for how independent audits of the companies building the world’s most powerful AI systems could work.

Brundage spent seven years at OpenAI as a policy researcher and an advisor on how the company should prepare for the advent of human-like artificial general intelligence. He left the company in October 2024.

    “One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own,” Brundage told Fortune. “There’s no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules.”

That creates risks. The leading AI labs do conduct safety and security testing and publish technical reports on many of these evaluations, some of which they run with the help of external “red team” organizations. But right now, consumers, businesses, and governments simply have to trust what the labs say about those tests. No one is forcing the labs to conduct the evaluations or to report them according to any particular set of standards.

    Brundage said that in other industries, auditing is used to provide the public—including consumers, business partners, and to some degree regulators—assurance that products are safe and have been tested in a rigorous way. 

    “If you go out and buy a vacuum cleaner, you know, there will be components in it, like batteries, that have been tested by independent laboratories according to rigorous safety standards to make sure it isn’t going to catch on fire,” he said.

    New institute will push for policies and standards

Brundage said AVERI is interested in policies that would encourage the AI labs to move to a system of rigorous external auditing, and in researching what the standards for those audits should be, but it does not plan to conduct audits itself.

    “We’re a think tank. We’re trying to understand and shape this transition,” he said. “We’re not trying to get all the Fortune 500 companies as customers.”

    He said existing public accounting, auditing, assurance, and testing firms could move into the business of auditing AI safety, or that startups would be established to take on this role.

    AVERI said it has raised $7.5 million toward a goal of $13 million to cover 14 staff and two years of operations. Its funders so far include Halcyon Futures, Fathom, Coefficient Giving, former Y Combinator president Geoff Ralston, Craig Falls, Good Forever Foundation, Sympatico Ventures, and the AI Underwriting Company. 

    The organization says it has also received donations from current and former non-executive employees of frontier AI companies. “These are people who know where the bodies are buried” and “would love to see more accountability,” Brundage said.

    Insurance companies or investors could force AI safety audits

Brundage said several mechanisms could encourage AI firms to hire independent auditors. One is that big businesses buying AI models may demand audits for assurance that the models will function as promised and don’t pose hidden risks.

    Insurance companies may also push for the establishment of AI auditing. For instance, insurers offering business continuity insurance to large companies that use AI models for key business processes could require auditing as a condition of underwriting. The insurance industry may also require audits in order to write policies for the leading AI companies, such as OpenAI, Anthropic, and Google.

    “Insurance is certainly moving quickly,” Brundage said. “We have a lot of conversations with insurers.” He noted that one specialized AI insurance company, the AI Underwriting Company, has provided a donation to AVERI because “they see the value of auditing in kind of checking compliance with the standards that they’re writing.”

Investors may also demand AI safety audits to be sure they aren’t taking on unknown risks, Brundage said. Given the multimillion- and multibillion-dollar checks that investment firms are now writing to fund AI companies, it would make sense for these investors to demand independent auditing of the safety and security of the products these fast-growing startups are building. And if any of the leading labs go public—as OpenAI and Anthropic have reportedly been preparing to do in the coming year or two—failing to employ auditors to assess the risks of their AI models could expose these companies to shareholder lawsuits or SEC enforcement actions if something later went wrong and contributed to a significant fall in their share prices.

    Brundage also said that regulation or international agreements could force AI labs to employ independent auditors. The U.S. currently has no federal regulation of AI and it is unclear whether any will be created. President Donald Trump has signed an executive order meant to crack down on U.S. states that pass their own AI regulations. The administration has said this is because it believes a single, federal standard would be easier for businesses to navigate than multiple state laws. But, while moving to punish states for enacting AI regulation, the administration has not yet proposed a national standard of its own.

    In other geographies, however, the groundwork for auditing may already be taking shape. The EU AI Act, which recently came into force, does not explicitly call for audits of AI companies’ evaluation procedures. But its “Code of Practice for General Purpose AI,” which is a kind of blueprint for how frontier AI labs can comply with the Act, does say that labs building models that could pose “systemic risks” need to provide external evaluators with complimentary access to test the models. The text of the Act itself also says that when organizations deploy AI in “high-risk” use cases, such as underwriting loans, determining eligibility for social benefits, or determining medical care, the AI system must undergo an external “conformity assessment” before being placed on the market. Some have interpreted these sections of the Act and the Code as implying a need for what are essentially independent auditors.

    Establishing ‘assurance levels,’ finding enough qualified auditors

    The research paper published alongside AVERI’s launch outlines a comprehensive vision for what frontier AI auditing should look like. It proposes a framework of “AI Assurance Levels” ranging from Level 1—which involves some third-party testing but limited access and is similar to the kinds of external evaluations that the AI labs currently employ companies to conduct—all the way to Level 4, which would provide “treaty grade” assurance sufficient for international agreements on AI safety.

    Building a cadre of qualified AI auditors presents its own difficulties. AI auditing requires a mix of technical expertise and governance knowledge that few possess—and those who do are often lured by lucrative offers from the very companies that would be audited.

    Brundage acknowledged the challenge but said it’s surmountable. He talked of mixing people with different backgrounds to build “dream teams” that in combination have the right skill sets. “You might have some people from an existing audit firm, plus some people from a penetration testing firm from cybersecurity, plus some people from one of the AI safety nonprofits, plus maybe an academic,” he said.

    In other industries, from nuclear power to food safety, it has often been catastrophes, or at least close calls, that provided the impetus for standards and independent evaluations. Brundage said his hope is that with AI, auditing infrastructure and norms could be established before a crisis occurs.

    “The goal, from my perspective, is to get to a level of scrutiny that is proportional to the actual impacts and risks of the technology, as smoothly as possible, as quickly as possible, without overstepping,” he said.

    Jeremy Kahn

    Source link

  • OpenAI invests in Sam Altman’s brain computer interface startup Merge Labs | TechCrunch

Just when you thought the circular deals couldn’t get any more circular, OpenAI has invested in CEO Sam Altman’s brain computer interface startup Merge Labs.

Merge Labs, which defines itself as a “research lab” dedicated to “bridging biological and artificial intelligence to maximize human ability,” came out of stealth on Thursday with an officially undisclosed seed round. A source familiar with the matter confirmed previous reports that OpenAI wrote the largest single check in Merge Labs’s $250 million seed round at an $850 million valuation.

    “Our individual experience of the world arises from billions of active neurons,” reads a statement from Merge Labs. “If we can interface with these neurons at scale, we could restore lost abilities, support healthier brain states, deepen our connection with each other, and expand what we can imagine and create alongside advanced AI.”

Merge Labs said it intends to achieve these feats non-invasively by developing “entirely new technologies that connect with neurons using molecules instead of electrodes” to “transmit and receive information using deep-reaching modalities like ultrasound.”

The move deepens Altman’s competition with Elon Musk, whose own startup Neuralink is also developing brain-computer interface chips that allow people with severe paralysis to control devices with their thoughts. Neuralink currently requires invasive surgery for implantation, in which a surgical robot removes a small piece of skull and inserts ultra-fine electrode threads into the brain to read neural signals. The company last raised a $650 million Series E at a $9 billion valuation in June 2025.

    While there are undoubtedly medical use cases for BCIs, Merge Labs seems more focused on using the technology to fulfill a Silicon Valley fantasy of combining human biology with AI to give us superhuman capabilities. 

    “Brain computer interfaces (BCIs) are an important new frontier,” OpenAI wrote in a blog post. “They open new ways to communicate, learn, and interact with technology. BCIs will create a natural, human-centered way for anyone to seamlessly interact with AI. This is why OpenAI is participating in Merge Labs’ seed round.” 

    Aside from Altman, other co-founders include Alex Blania and Sandro Herbig, respectively CEO and product and engineering lead at Tools for Humanity, another Altman-backed company (and creator of the eye-scanning World orbs); Tyson Aflalo and Sumner Norman, co-founders of implantable neural tech company Forest Neurotech; and Mikhail Shapiro, a researcher at Caltech.

    As part of the deal, OpenAI will work with Merge Labs on scientific foundation models and other frontier tools to “accelerate progress.” In its blog post, OpenAI noted that AI will not only help accelerate R&D in bioengineering, neuroscience, and device engineering, but that the interfaces will also benefit from AI operating systems that “can interpret intent, adapt to individuals, and operate reliably with limited and noisy signals.”

In other words, Merge Labs could function as a remote control for OpenAI’s software. That’s where the circular nature of the deal comes in: if Merge Labs succeeds, it could drive more users to OpenAI, which in turn justifies OpenAI’s investment in the company. The deal also increases the value of a startup Altman owns using resources from a company he runs.

    OpenAI is also working with Jony Ive’s startup io, which it acquired last year, to produce a piece of AI hardware that doesn’t rely on a screen. Recent unconfirmed leaks suggest the device might be an earbud. 

    OpenAI primarily invests through the OpenAI Startup Fund, which has invested in several other startups connected to Altman, including Red Queen Bio, Rain AI, and Harvey. OpenAI has also entered into commercial agreements with startups Altman personally owns or chairs, including nuclear fusion startup Helion Energy and nuclear fission company Oklo.

Altman has been dreaming about the so-called “Merge” – the idea that humans and machines will merge – since at least 2017, when he published a blog post guessing it would happen somewhere between 2025 and 2075. He also speculated that the merge could take many forms, including plugging electrodes into our brains or becoming “really close friends with a chatbot.”

He said a merge is our “best-case scenario” for surviving superintelligent AI, which he describes as a separate species in conflict with humans.

    “Although the merge has already begun, it’s going to get a lot weirder,” Altman wrote. “We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”

    TechCrunch has reached out to OpenAI and Merge Labs for more information.

    Rebecca Bellan

    Source link

  • Mira Murati’s startup, Thinking Machines Lab, is losing two of its co-founders to OpenAI | TechCrunch

    Former OpenAI exec Mira Murati’s startup, Thinking Machines Lab, is saying goodbye to two of its co-founders, both of whom are headed back to OpenAI. Another former OpenAI staffer who went to work for Murati’s startup is also headed back to the company.

    On social media on Wednesday, Murati announced the departure of Barret Zoph, the company’s co-founder and CTO. “We have parted ways with Barret,” Murati said in a post on X. “Soumith Chintala will be the new CTO of Thinking Machines. He is a brilliant and seasoned leader who has made important contributions to the AI field for over a decade, and he’s been a major contributor to our team. We could not be more excited to have him take on this new responsibility.”

    Murati’s announcement made no mention of other departures.

    Just 58 minutes after Murati’s announcement of Zoph’s departure, Fidji Simo, OpenAI’s CEO of applications, announced that Zoph would be headed back to OpenAI. “Excited to welcome Barret Zoph, Luke Metz, and Sam Schoenholz back to OpenAI! This has been in the works for several weeks, and we’re thrilled to have them join the team,” Simo wrote on X.

    Metz is another co-founder of Thinking Machines and previously worked for OpenAI for a number of years on the company’s technical staff. So did Schoenholz, whose LinkedIn profile still lists him as working for Thinking Machines.

    Zoph previously worked for OpenAI as VP of research, and before that, worked for six years at Google as a research scientist. Murati, who served as the CTO of OpenAI until September 2024, left the company and co-founded Thinking Machines with Zoph and Metz. The startup, where Murati serves as CEO, has amassed significant financial support since then, closing a $2 billion seed round last July, with participation from Andreessen Horowitz, which led the round, as well as Accel, Nvidia, AMD, and Jane Street, among others. The round valued the company at $12 billion.

TechCrunch has reached out to both Thinking Machines and OpenAI for comment. Wired reports that the split between Zoph and Thinking Machines wasn’t amicable. Certainly, it’s telling that Murati didn’t say more in her public messaging about his departure from the company.

While talent moves between AI giants are common in Silicon Valley, the simultaneous departure of two co-founders so soon after a startup’s founding — especially when one served as CTO — reads as a meaningful setback for Thinking Machines, which had assembled a high-profile team of former OpenAI, Meta, and Mistral AI researchers.

The company has also lost other key personnel, including co-founder Andrew Tulloch, who left to join Meta in October. OpenAI itself has seen numerous co-founders depart to launch or join competing ventures, including John Schulman, who left for Anthropic in August 2024 before joining Thinking Machines as Chief Scientist at its launch in February of last year.

    Lucas Ropek

    Source link