ReportWire

Tag: Sam Altman

  • Ruoming Pang, Meta’s $200M Superintelligence Hire, Jumps to OpenAI After Just 7 Months


    Sam Altman reportedly courted Pang for months. Andrew Harnik/Getty Images

    Ruoming Pang, a prominent A.I. researcher recruited by Meta last year with a pay package reportedly worth more than $200 million, has left the company to join OpenAI, The Information reported yesterday (Feb. 25). His departure marks another setback for Mark Zuckerberg’s elite A.I. team and underscores the escalating A.I. talent war. Pang joined Meta Superintelligence Labs (MSL) in July after being poached from Apple. He remained at Meta for only seven months.

    Zuckerberg unveiled MSL in July 2025 as the centerpiece of Meta’s push to develop advanced A.I. systems. The lab quickly became the focus of an aggressive—and costly—hiring spree. Alexandr Wang, founder of Scale AI, now leads the group as Meta’s A.I. chief after Meta acquired 40 percent of his startup. Within MSL, a smaller, more secretive unit known as TBD Lab is tasked with building next-generation foundation models.

    Pang, who is originally from Shanghai, earned his undergraduate degree from Shanghai Jiao Tong University. He holds a master’s in computer science from the University of Southern California and earned a Ph.D. from Princeton University in 2006. Over the course of his career, Pang has worked on some of the most consequential A.I. systems in the industry, making him one of the more sought-after engineers in the field.

    At Apple, he spent nearly four years as a “senior distinguished engineer,” leading development of the foundation models behind Apple Intelligence. Before Apple, Pang spent roughly 15 years at Google DeepMind as a principal software engineer, where he worked on large-scale machine learning systems, including privacy-preserving technologies and speech recognition.

    OpenAI has not disclosed Pang’s title, scope of responsibilities or the terms of his compensation. The Sam Altman-led company reportedly courted him for months, so the package is likely substantial. OpenAI employees earn roughly $1.5 million in annual salary and equity, according to the Wall Street Journal. Pang is widely expected to continue working on foundation models and superintelligence research.

    For Meta, Pang’s exit complicates Zuckerberg’s ambition to dominate the superintelligence race. The company has successfully recruited high-profile researchers from OpenAI, Google and Anthropic. However, MSL has also seen a steady stream of departures in recent months.

    Among the most prominent was Yann LeCun, Meta’s chief A.I. scientist, who exited at the end of last year after more than a decade at the company. LeCun had publicly criticized MSL chief Wang’s lack of experience with A.I. research.

    Other departures have been quieter but telling. Ethan Knight joined MSL for only a few weeks before moving to OpenAI last August—a stint so brief it never appeared on his LinkedIn profile. Bert Maher, a software engineer, left after 12 years at Meta to join Anthropic. Avi Verma, who had been expected to join Meta from OpenAI, ultimately backed out.

    Pang’s move is the latest signal that Silicon Valley’s A.I. talent war is intensifying. Even as talk of an A.I. bubble grows louder and tech companies rely on increasingly complex financial structures to sustain lofty valuations, leaders like Zuckerberg, Altman and Anthropic’s Dario Amodei show little sign of restraint. Instead, they are offering compensation packages worth tens or even hundreds of millions of dollars to persuade top researchers that their vision for superintelligence will prevail.


    Rachel Curry


  • Sam Altman gets defensive about AI’s massive electricity usage: ‘It takes a lot of energy to train a human’ | Fortune


    OpenAI CEO Sam Altman isn’t worried about AI’s increasingly glaring resource consumption, arguing that humans require plenty of resources, too.

    In an on-stage interview at the India AI Impact summit, he went on the defensive after he was asked about ChatGPT’s water needs.

    He dismissed claims that the chatbot uses gallons of water per query as “completely untrue, totally insane,” according to a clip posted by The Indian Express, explaining that data centers powering ChatGPT have largely moved away from water-heavy “evaporative cooling” to prevent overheating.

    Altman was then asked about the electricity needed for AI. In contrast to the issue of water, he acknowledged it was “fair” to bring up the technology’s energy requirements, saying, “We need to move toward nuclear, or wind, or solar [energy] very quickly.”

    But he pointed out that comparing AI’s power needs to humans isn’t exactly apples to apples.

    “It also takes a lot of energy to train a human,” he said, prompting some in the crowd to laugh. “It takes, like, 20 years of life, and all of the food you eat during that time before you get smart.”

    Altman expanded even further by noting that today’s humans wouldn’t even be here were it not for their ancestors dating back hundreds of thousands of years to when modern humans first emerged.

    “Not only that, it took, like, the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science or whatever to produce you,” he added.

    When comparing humans to ChatGPT’s potential, you have to take this context into account, he argued. A fair comparison, in his view, would pit the energy a human uses to answer a query against the energy a trained AI model uses to do the same. On that measure, “probably, AI has already caught up on an energy efficiency basis measured that way.”

    In a June 2025 blog post, Altman claimed each ChatGPT query takes about 0.34 watt-hours of electricity, or around what an oven uses in about a second. Still, he published this figure before OpenAI released its newest GPT-5 model and its subsequent upgrades. Energy consumption can also vary with the complexity of a query: answering a question versus generating an image, for instance.
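As a rough sanity check on the figure above, the watt-hours-to-oven-time comparison is easy to work out. This is a sketch only: the 2,400 W oven rating is an assumed typical value, not something from the article.

```python
# Sanity-check the "0.34 watt-hours per query ~ an oven for about a second" claim.
# The 2,400 W oven rating is an assumed typical value for a mid-size electric oven.
WH_PER_QUERY = 0.34
joules_per_query = WH_PER_QUERY * 3600        # 1 Wh = 3,600 J, so ~1,224 J
oven_watts = 2400                             # assumed oven power draw
oven_seconds = joules_per_query / oven_watts  # time the oven takes to use that energy
print(f"{joules_per_query:.0f} J per query, about {oven_seconds:.2f} s of oven use")
```

At typical oven ratings of roughly 1 to 2.4 kW, this works out to between about half a second and one second, which is consistent with the comparison as quoted.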

    Experts have warned that AI’s cumulative power and water consumption will grow dramatically in the decades ahead. Overall, AI’s water usage is set to grow by about 130%, or by about 30 trillion liters (7.9 trillion gallons) of water through 2050, according to a January report by water technology company Xylem and market research firm Global Water Intelligence.

    Over that same period, rising electricity demands are expected to increase the water used to generate power for data centers by about 18%, reaching roughly 22.3 trillion liters (5.8 trillion gallons) per year. Meanwhile, manufacturing the ever more complex chips that data centers rely on will demand still more water, with consumption projected to jump 600% from about 4.1 trillion liters (1.8 trillion gallons) today to 29.3 trillion liters (7.7 trillion gallons) annually.
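The liter-to-gallon conversions in the report figures above can be cross-checked with a few lines of arithmetic. This is a sketch: the 3.785 liters-per-US-gallon factor is the standard conversion, and the quantities are taken as quoted in the article.

```python
# Cross-check the Xylem report figures quoted above:
# trillion liters vs. trillion US gallons, at 3.785 liters per gallon.
L_PER_GAL = 3.785
pairs = [(30.0, 7.9), (22.3, 5.8), (29.3, 7.7)]  # (trillion L, trillion gal)
for liters, gallons in pairs:
    assert abs(liters / L_PER_GAL - gallons) < 0.15, (liters, gallons)
# A "600%" increase means roughly 7x the baseline: 4.1 -> ~29.3 trillion liters.
assert abs(4.1 * 7 - 29.3) < 1.0
print("quoted liter/gallon figures are internally consistent")
```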

    While OpenAI has moved away from evaporative cooling, 56% of all data centers globally still use the method in some form, according to the Xylem and Global Water Intelligence report. 

    OpenAI’s own 800-acre data center complex in Abilene, Texas, will reportedly still use water, albeit in a more efficient, closed-loop system that continuously recirculates it to cool the facility, the Texas Tribune reported. The data center will initially draw 8 million gallons of water from the city of Abilene to fill its cooling system.


    Marco Quiroz-Gutierrez


  • Sam Altman Defends A.I. Energy Use With Human Comparison, Sparking Debate


    Sam Altman challenged critics of A.I.’s water and electricity consumption. Photo by John MacDougall/AFP via Getty Images

    Sam Altman is pushing back on mounting criticism over the environmental toll of A.I. The OpenAI chief has dismissed claims about A.I.’s water consumption as “fake” and drawn comparisons between the electricity required to power A.I. systems and the energy it takes to develop human intelligence.

    Figures suggesting that tools like ChatGPT consume multiple gallons of water per query are “totally insane” and have “no connection to reality,” Altman said in a Feb. 20 interview with The Indian Express on the sidelines of the AI Impact Summit in New Delhi. Last year, Altman claimed that ChatGPT uses 0.000085 gallons of water per query—roughly one-fifteenth of a teaspoon—though he did not explain how he calculated that figure.

    A.I.’s water footprint largely stems from the need for evaporative cooling systems used to keep data center hardware from overheating. But Altman argued that companies like OpenAI are no longer directly managing such cooling processes. Many A.I. developers, he noted, are shifting toward cooling systems that recirculate liquid rather than continually drawing fresh supplies. Meanwhile, tech giants like Microsoft, Meta, Google and Amazon have pledged to replenish more water than they withdraw by 2030.

    Even so, data centers continue to drink up water at a rapid pace. Total A.I.-related water consumption for cooling reached 23.7 cubic kilometers in 2025, a 38 percent increase over 2020, and is expected to more than triple over the next 25 years, according to a January report from Xylem. Despite the industry’s pivot to alternative methods, the report found that 56 percent of data center capacity still relies on some form of evaporative cooling.

    Altman was more measured when it came to electricity usage. “What is fair, though, is the energy consumption,” he said. “We need to move towards nuclear, wind, and solar very quickly.”

    Last April, the International Energy Agency reported that data centers accounted for roughly 1.5 percent of global electricity consumption in 2024. Their power use is rising at a rate more than four times faster than overall electricity demand and is expected to more than double by 2030.

    In response, major tech companies are pursuing data center agreements tied to alternative energy sources, including nuclear power, to ease pressure on grids. Altman, who previously led Y Combinator, has personally invested in nuclear ventures such as Oklo, which is developing small-scale nuclear plants, and Helion, which aims to commercialize nuclear fusion.

    The OpenAI CEO also argued that critics overlook the energy required to develop human intelligence. “People talk about how much energy it takes to train an A.I. model relative to how much it costs a human to do one inference query,” he said. “But it also takes a lot of energy to train a human—it takes, like, 20 years of life and all the food you eat during that time before you get started.”

    A more appropriate comparison, he suggested, would measure the energy used by a fully trained A.I. model to answer a question against that used by a human doing the same task. “Probably A.I. has already caught up on an energy efficiency basis measured that way.”

    The remarks quickly sparked debate online over whether such comparisons are appropriate. “He’s saying a really big spreadsheet and a baby are morally equivalent,” wrote Matt Stoller, research director of the American Economic Liberties Project, in a post on X. Sridhar Vembu, founder and chief scientist of software firm Zoho Corporation, also took issue with the OpenAI chief’s statements. A.I. should “quietly recede into the background” instead of dominating our lives, said the billionaire on X. “I do not want to see a world where we equate a piece of technology to a human being.”


    Alexandra Tremayne-Pengelly


  • Sam Altman: Know What Else Used a Lot of Energy? Human Civilization


    At last week’s India AI Impact Summit in New Delhi, industry leaders convened to discuss the future of artificial intelligence and how best to squeeze it into parts of your life you haven’t even considered. Notably absent was Bill Gates, who dropped out hours before his scheduled keynote over the ongoing scrutiny about his presence in the Epstein Files (though he continues to deny any wrongdoing). While the convention was reportedly a bit chaotic, what with the protests and all, the luminaries from around the tech world present nonetheless kept things upbeat and optimistic, declaring “full steam ahead” on the technological hype train carrying our species and planet off a cliff.

    Also in attendance was OpenAI’s Sam Altman, who earned numerous headlines over the course of the event for his words and antics. His buzz blitzkrieg started on Thursday at a seemingly easy photo-op layup with Indian Prime Minister Narendra Modi and other AI executives, all raising their joined hands in a celebratory display of industry-wide solidarity. Altman and Dario Amodei, a former colleague and now CEO of Anthropic standing to his left, notably declined to complete the chain and hold each other’s hands, making for an all-too-poignant moment. Altman would continue to make news throughout the summit for his comments on the industry’s “urgent” need for global regulation and his sneaking suspicion that companies might actually be using AI as a scapegoat to whitewash their layoffs.

    Ever the yapper, Altman has bagged yet another round of earned media for an interview with The Indian Express’ Anant Goenka, during which he posited some controversial rebuttals to concerns about AI’s environmental impact.

    Altman started off by saying the claims about ChatGPT consuming “‘17 gallons of water for each query’ or whatever,” are “completely untrue, totally insane, no connection to reality,” before qualifying that, OK, maybe it was a valid concern when his company “used to do evaporative cooling in data centers.”

    He went on to say that there is “fair” concern about the amount of energy data centers eat to crank out the most soulless slop you’ve ever seen, but suggested the onus of responsibility for dealing with AI’s ravenous appetite falls to the energy sector itself, which Altman feels needs to “move towards nuclear or wind and solar very quickly.”

    Altman then stunned the crowd and firmly re-entered the discourse with a mind-blowing truth bomb for those who still felt AI was consuming too much energy.

    “It also takes a lot of energy to train a human,” Altman rejoined euphorically. “It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.”

    It is true that every person and the sum total of human civilization have consumed a sizable amount of energy (and water) to get to where we are today. While the value comparison of a nascent tech industry and its models to the entirety of civilization and human beings may have elicited adulation at the summit, Altman got an icier reception from the internet. Social media quickly took to roasting the remarks as “dystopian” and “deeply antisocial and antihuman.”

    Perhaps further illuminating the backlash, Altman’s energy comments butt up against the frustrating lack of transparency within the industry our collective futures now hinge upon. There are currently no regulations requiring data centers to disclose their water and energy consumption, and center employees and business partners are typically muzzled by nondisclosure agreements. This has made the industry’s true expenditure levels tricky for reporters and researchers to pin down.

    At least we’ve got Sam to keep us informed while waiting for some clarity about what’s actually going on and being used in those centers.


    Justin Caffier


  • Sam Altman would like to remind you that humans use a lot of energy, too | TechCrunch


    OpenAI CEO Sam Altman addressed concerns about AI’s environmental impact this week while speaking at an event hosted by The Indian Express.

    For one thing, Altman — who was in India for a major AI summit — said concerns about AI’s water usage are “totally fake,” though he acknowledged it was a real issue when “we used to do evaporative cooling in data centers.”

    “Now that we don’t do that, you see these things on the internet where, ‘Don’t use ChatGPT, it’s 17 gallons of water for each query’ or whatever,” Altman said. “This is completely untrue, totally insane, no connection to reality.”

    He added that it’s “fair” to be concerned about “the energy consumption — not per query, but in total, because the world is now using so much AI.” In his view, this means the world needs to “move towards nuclear or wind and solar very quickly.”

    There’s no legal requirement for tech companies to disclose how much energy and water they use, so scientists have been trying to study it independently. Data centers have also been connected to rising electricity prices.

    Citing a previous conversation with Bill Gates, the interviewer asked whether it’s accurate to say a single ChatGPT query currently uses the equivalent of 1.5 iPhone battery charges, to which Altman replied, “There’s no way it’s anything close to that much.”

    Altman also complained that many discussions about ChatGPT’s energy usage are “unfair,” especially when they focus on “how much energy it takes to train an AI model, relative to how much it costs a human to do one inference query.”


    “But it also takes a lot of energy to train a human,” Altman said. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

    So in his view, the fair comparison is, “If you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question versus a human? And probably, AI has already caught up on an energy efficiency basis, measured that way.”

    You can watch the full interview below. The conversation about water and energy usage begins at around 26:35.


    Anthony Ha


  • Exclusive: OpenAI Has Poached Instagram’s Celebrity Whisperer


    OpenAI has hired Instagram’s vice president of global partnerships, Charles Porch, to serve as the AI company’s first-ever vice president of global creative partnerships. The newly created position is the latest move in OpenAI’s push to win over a skeptical entertainment industry.

    In his over 15 years at Instagram, Facebook, and Meta, Porch was instrumental in bringing high-profile figures to the platforms. He facilitated the exclusive Instagram launch of Beyoncé’s self-titled album in 2013, coordinated Instagram’s portrait studios at Vanity Fair’s Oscar Party and the Met Gala, convinced Pope Francis to join the social media platform in 2016, and led an initiative in 2025 to lure TikTok creators over to Instagram Reels with “Breakthrough Bonus” payments.

    OpenAI is hoping to reap similar benefits from Porch’s deep relationships with both talent and management in the worlds of music, film, fashion, art, sports, and the creator ecosystem.

    While Porch and the company offered sparse details on the still-evolving role, which will begin in March, the most likely applications of his talent include arranging deals to license entertainers’ likenesses to appear in OpenAI’s video generation model Sora, building out the future of interactive AI platforms, and promoting AI tools for artistic development in industries like music, fashion, and film.

    In an interview with Vanity Fair this week, Porch explained, “I’m going to be the person that’s talking to creative communities around the world to figure out how we build the best products to serve them.”

    AI companies have so far received a frosty reception in Hollywood over fears that the technology will replace jobs, erode creativity, and devalue intellectual property. In 2023, dual writers’ and actors’ strikes paralyzed the industry, held up largely by complex negotiations over the usage of artificial intelligence. Both unions won a number of protections, including guarantees of compensation should actors’ images be used to create digital doubles and guardrails on studios’ ability to replace human labor with AI. These contracts are set to expire this summer, however.

    In December, OpenAI made a major breakthrough with a $1 billion agreement with Disney. The three-year licensing deal will allow Sora to produce content featuring “animated, masked, and creature” characters from the worlds of Disney, Marvel, Pixar, and Star Wars.

    Licensing the likeness of real people will be a far taller order. In recent months, big-name stars like Matthew McConaughey, Michael Caine, and Gwyneth Paltrow have licensed their voices to be recreated by AI companies ElevenLabs and Speechify for audio content, signaling an openness from talent and agencies to dip a toe into the world of AI, provided the right compensation models, data privacy agreements, and level of creative and reputational control are in place.


    Julia Black


  • India has 100M weekly active ChatGPT users, Sam Altman says | TechCrunch


    India has 100 million weekly active ChatGPT users, making the country one of OpenAI’s largest markets globally, CEO Sam Altman said ahead of a government-hosted AI summit.

    On Sunday, Altman outlined ChatGPT’s growing adoption in India in an article published in the Indian English daily Times of India, as OpenAI prepares to formally participate in the five-day India AI Impact Summit in New Delhi, beginning Monday. Altman is attending the event alongside senior executives from several of the world’s leading AI companies.

    The growth comes as OpenAI, like other leading AI firms, looks to India’s young population and its more than a billion internet users to fuel global expansion. The ChatGPT maker opened a New Delhi office in August 2025 after months of groundwork in the country, and has adjusted its approach for India’s price-sensitive market, including rolling out a sub-$5 ChatGPT Go tier that was later made free for a year for Indian users.

    In the article, Altman said India is ChatGPT’s second-largest user base after the United States, highlighting the South Asian nation’s growing weight in OpenAI’s global strategy. The disclosure comes as ChatGPT’s overall usage has surged worldwide, with the platform reaching 800 million weekly active users as of October 2025 and reported to be approaching 900 million.

    Altman also highlighted the role of students in driving adoption, saying India has the largest number of student users of ChatGPT globally.

    Indian students have become a key growth segment for leading AI companies more broadly, as rivals race to embed their tools in classrooms and learning workflows. Google has similarly targeted the market, offering Indian students a free one-year subscription to its AI Pro plan in September 2025. Separately, India accounts for the highest global usage of Gemini for learning, Chris Phillips, Google’s vice president and general manager for education, said last month.

    “With its focus on access, practical AI literacy, and the infrastructure that supports widespread adoption, India is well positioned to broaden who benefits from the technology and to help shape how democratic AI is adopted at scale,” Altman wrote.


    ChatGPT’s rapid growth also highlights a broader challenge for AI companies in India: translating widespread adoption into sustained economic impact. Indian government initiatives such as the IndiaAI Mission — a national program aimed at expanding computing capacity, supporting startups and accelerating AI adoption in public services — seek to address those gaps. However, the country’s price-sensitive market and infrastructure constraints have made monetization and large-scale deployment more complex than in developed economies.

    “Given India’s size, it also risks forfeiting a vital opportunity to advance democratic AI in emerging markets around the world,” Altman wrote, warning that uneven access and adoption could concentrate AI’s economic gains in too few hands.

    Altman also signaled that OpenAI plans to deepen its engagement with the Indian government, writing that the company would soon announce new partnerships aimed at expanding access to AI across the country. He did not provide details, but said the focus would be on widening reach and enabling more people to put AI tools to practical use.

    The India AI Impact Summit is expected to draw a wide cross-section of global technology and political leaders, including Anthropic CEO Dario Amodei, Sundar Pichai of Google, and senior Indian business figures such as Mukesh Ambani and Nandan Nilekani. Political leaders including Emmanuel Macron, Sheikh Khaled bin Mohamed bin Zayed Al Nahyan, and Luiz Inácio Lula da Silva are also expected to attend, spotlighting India’s ambition to position itself as a central player in global AI debates.

    For global AI firms, including OpenAI, the summit underscores how India’s vast user base is translating into growing influence over how the technology evolves.

    OpenAI did not respond to a request for comment.


    Jagmeet Singh


  • Sam Altman got exceptionally testy over Claude Super Bowl ads | TechCrunch


    Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, begins with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.

    The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! Then the advice twists into an ad for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.

    Another one features a slight young man looking for advice on building a six-pack. After offering his height, age, and weight, the bot serves him an ad for height-boosting insoles.

    The Anthropic commercials are cleverly aimed at OpenAI’s users, following that company’s recent announcement that ads will be coming to ChatGPT’s free tier. They caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers,” and “dunks on” OpenAI.

    They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”

    In that post, Altman explains that an ad-supported tier is intended to cover the cost of offering ChatGPT free to many of its millions of users. ChatGPT is still the most popular chatbot by a large margin.

    But the OpenAI CEO insisted the ads were “dishonest” in implying that ChatGPT will twist a conversation to insert an ad (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”


    Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific, which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”

    Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”

    But Claude has a free chat tier, too, with subscriptions at $0, $17, $100, $200. ChatGPT’s tiers are $0, $8, $20, $200. One could argue the subscription tiers are fairly equivalent.

    Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks usage of Claude Code by “companies they don’t like,” such as OpenAI, and says Anthropic tells people what they can and can’t use AI for.

    True, Anthropic’s whole marketing pitch since day one has been “responsible AI.” The company was founded by two OpenAI alums, after all, who claimed they grew alarmed about AI safety while working there.

    Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, too, has determined some content should be blocked, particularly with regard to mental health.

    Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme level when he accused Anthropic of being “authoritarian.”

    “One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.

    Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical environment in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, clearly Anthropic hit a nerve.


    Julie Bort


  • OpenAI Working on Social Media Network That Could Require Creepy Eye Scans: Report


    OpenAI, the company best known for its AI applications like ChatGPT and Sora, is reportedly working on a social media network designed to be free from AI bots. The catch is that users may need to have their irises scanned for access.

    Forbes reported Wednesday, citing unnamed sources familiar with the project, that the platform is still in very early stages and is being developed by a small team of fewer than 10 people. The goal is to create a human-only social platform that would require users to prove they’re real people. To do that, the team is reportedly considering implementing identity verification through Apple’s Face ID or through the Orb, an Orwellian eye-scanning device made by a company that was also conveniently founded by OpenAI CEO Sam Altman.

    This new social media platform seems to be Altman’s latest attempt to solve a problem he himself and his fellow “architects of AI” helped create.

    Altman first tried to tackle the bot problem in 2019 when he co-founded Tools for Humanity, the company behind the World app, formerly known as Worldcoin. The project aimed to create a global ID and a crypto-based currency that would only be available to verified humans. The project has since evolved into a “super app” called World that has messaging and payment features. But verification requires humans to get their eyes scanned by the soccer-ball-sized Orb device in exchange for a unique digital ID code stored on their phone. In theory, this could help filter out annoying AI bots from gaming, social media platforms, or even financial transactions like concert ticket sales.

    So far, roughly 17 million people have been verified using the Orb, a far cry from the company’s stated goal of reaching one billion users. Part of that adoption problem is logistical: people have to physically travel to one of the 674 verification locations worldwide to get their eyes scanned, and in the U.S. there are only 32 such locations, most of them in Florida. More broadly, the idea of getting your eyes scanned by a company founded by one of Silicon Valley’s most controversial figures isn’t an easy sell.

Unsurprisingly, several countries have already temporarily banned the company’s biometric technology or launched investigations into it, citing concerns around privacy and data security.

    Now, that tech seems like it could be making its way to a new social media network. Sources told Forbes that the new social platform would allow users to create and share AI-generated content like images and videos. And while OpenAI has proven it can build popular apps, it’s far from clear whether a new social network could meaningfully pull people away from existing platforms, especially when you add biometric verification as a barrier.

ChatGPT alone now reaches roughly 700 million weekly users, and the company’s AI video app racked up about one million downloads within five days of its launch. In comparison, Meta reported in September that its platforms, which include Facebook, WhatsApp, and Instagram, now reach about 3.5 billion daily active users combined, all of which already allow users to generate and share AI-generated content.

    OpenAI seems to hope that its promise of a bot-free environment will be enough to draw in users.

    Altman himself has repeatedly voiced his frustration with bots online. In September, Altman responded to a post showing comments in the ClaudeCode subreddit praising OpenAI’s coding agent Codex. “i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real,” he wrote in a post on X.

He went on to theorize why this might be happening, pointing to people picking up “quirks of LLM-speak” and also “probably some bots.” “But the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago,” Altman wrote.

    A few days earlier, Altman wrote in another post that he had never taken the dead internet theory seriously, “but it seems like there are really a lot of LLM-run twitter accounts now.”

    The dead internet theory claims that since around 2016, much of the internet has been dominated by bots and AI-generated content rather than real human activity. But maybe there is someone other than Altman who could be trusted to find a solution.

Bruce Gil

  • OpenAI Says Its Physical Device Is ‘On Track’ for an Unveiling Later This Year


On Monday, Chris Lehane, OpenAI’s chief global affairs officer, said his company is “on track” to present its famously mysterious thingamajig to the public by the end of the year, according to Axios. This would mean the previously rumored September-ish release date was not crazy after all.

Lehane’s announcement came during an event at the World Economic Forum in Davos, Switzerland. However, Lehane did not provide any details about what this thing is or does. He also, according to Axios, said what he had described was the “most likely” release schedule, but that “we will see how things advance.”

    For details about what the device is and does, you’ll have to read the aforementioned rumors from the China-based leaks account Smart Pikachu. That user posted a week ago that OpenAI is supposedly gunning for the market niche currently occupied by AirPods.

Smart Pikachu described manufacturing giant Foxconn working on something with the codename “Sweetpea,” a “special audio product” within a company project called “Gumdrop,” vaguely in the earbud or “open-ear headphones” zone. It would be two objects—one for each ear—and a little egg-shaped, dental-floss-holder-sized charging dock. Sweetpea would pack heavy-duty processing power via a 2-nanometer, smartphone-style chip. Its release might also be followed, or accompanied, by four other “Gumdrop” devices between now and 2028, like a “home-style device,” and, um, a pen, according to Smart Pikachu. And once again: these are just unconfirmed rumors at this point.

    But you’ll recall that the vast majority of the actual information OpenAI has given the world so far about its first device comes from two sources: 1) A very strange infomercial for the concept of friendship that OpenAI released in spring of last year starring OpenAI CEO Sam Altman and legendary iPhone designer Jony Ive—whose product design company had just merged with OpenAI.

    And 2) a much longer—but somehow less substantive—interview Ive and Altman gave in November in which they explained next to nothing, other than the fact that they’re aiming for a product so sensual that you’ll want to put various parts of your mouth all over it. Altman said it’s “so simple, but then it just does,” whatever that means. Ive said he’s into creating “sophisticated products that you want to touch and you feel no intimidation and you want to use almost carelessly and almost without thought.”

    So there you go. It just does, and you won’t even think about it, and you’ll want to smooch it, and it might be available before the midterms. What more do you need to know?

Mike Pearl

  • Sequoia to invest in Anthropic, breaking VC taboo on backing rivals: FT | TechCrunch


    Sequoia Capital is reportedly joining a blockbuster funding round for Anthropic, the AI startup behind Claude, according to the Financial Times. It’s a move sure to turn heads in Silicon Valley.

    Why? Because venture capital firms have historically avoided backing competing companies in the same sector, preferring to place their bets on a single winner. Yet here’s Sequoia, already invested in both OpenAI and Elon Musk’s xAI, now throwing its weight behind Anthropic, too.

The timing is particularly surprising given what OpenAI CEO Sam Altman said under oath last year. As part of OpenAI’s defense against Musk’s lawsuit, Altman addressed rumors about restrictions in OpenAI’s 2024 funding round. While he denied that OpenAI investors were broadly prohibited from backing rivals, he did acknowledge that investors with ongoing access to OpenAI’s confidential information were told that access would be terminated “if they made non-passive investments in OpenAI’s competitors.” Altman called this “industry standard” protection (which it is) against misuse of competitively sensitive information.

    According to the FT, Sequoia is joining a funding round led by Singapore’s GIC and U.S. investor Coatue, which are each contributing $1.5 billion. Anthropic is aiming to raise $25 billion or more at a $350 billion valuation — more than double its $170 billion valuation from just four months ago. The WSJ and Bloomberg had earlier reported the round at $10 billion. Microsoft and Nvidia have committed up to $15 billion combined, with VCs and other investors said to be contributing another $10 billion or more.

    The Sequoia connection with Altman runs deep. When Altman dropped out of Stanford to start Loopt, Sequoia backed him. He later became a “scout” for Sequoia, introducing the firm to Stripe, which became one of the firm’s most valuable portfolio companies. Sequoia’s new co-leader Alfred Lin and Altman also appear comparatively close. Lin has interviewed Altman numerous times at Sequoia events, and when Altman was briefly ousted from OpenAI in November 2023, Lin publicly said he’d eagerly back Altman’s “next world-changing company.”

    While Sequoia’s investment in xAI might seem to have already contradicted the traditional VC approach of picking winners, that bet is widely viewed as less about backing an OpenAI competitor and more about deepening the firm’s extensive ties to Elon Musk. Sequoia invested in X when Musk bought Twitter and rebranded it, is an investor in SpaceX and The Boring Company, and is a major backer of Neuralink, Musk’s brain-computer interface company. Former longtime Sequoia leader Michael Moritz was even an early investor in Musk’s X.com, which became part of PayPal.

    Sequoia’s apparent reversal on portfolio conflicts is especially glaring given its historical stance. As we reported in 2020, the firm took the extraordinary step of walking away from its investment in payments company Finix after determining the startup competed with Stripe. Sequoia forfeited its $21 million investment, letting Finix keep the money while giving up its board seat, information rights, and shares, marking the first time in the firm’s history it had severed ties with a newly funded company over a conflict of interest. (Sequoia had led Finix’s $35 million Series B round just months earlier.)


    The reported Anthropic investment comes after dramatic leadership changes at Sequoia, where the firm’s global steward, Roelof Botha, was pushed out in a surprise vote this fall just days after sitting down with this editor at TechCrunch Disrupt, with Lin and Pat Grady — who’d led that Finix deal — taking over.

    Anthropic is reportedly preparing for an IPO that could come as soon as this year. We’ve reached out to Sequoia Capital for comment.

Connie Loizos

  • OpenAI invests in Sam Altman’s brain computer interface startup Merge Labs | TechCrunch


Just when you thought the circular deals couldn’t get any more circular, OpenAI has invested in CEO Sam Altman’s brain computer interface startup Merge Labs.

Merge Labs, which defines itself as a “research lab” dedicated to “bridging biological and artificial intelligence to maximize human ability,” came out of stealth on Thursday with an undisclosed seed round. A source familiar with the matter confirmed previous reports that OpenAI wrote the largest single check in Merge Labs’s $250 million seed round at an $850 million valuation.

    “Our individual experience of the world arises from billions of active neurons,” reads a statement from Merge Labs. “If we can interface with these neurons at scale, we could restore lost abilities, support healthier brain states, deepen our connection with each other, and expand what we can imagine and create alongside advanced AI.”

Merge Labs said it intends to reach these feats non-invasively by developing “entirely new technologies that connect with neurons using molecules instead of electrodes” to “transmit and receive information using deep-reaching modalities like ultrasound.”

The move deepens Altman’s competition with Elon Musk, whose own startup Neuralink is also developing brain-computer interface chips that allow people who suffer from severe paralysis to control devices with their thoughts. Neuralink currently requires invasive surgery for implantation, where a surgical robot removes a small piece of skull and inserts ultra-fine electrode threads into the brain to read neural signals. The company last raised a $650 million Series E at a $9 billion valuation in June 2025.

    While there are undoubtedly medical use cases for BCIs, Merge Labs seems more focused on using the technology to fulfill a Silicon Valley fantasy of combining human biology with AI to give us superhuman capabilities. 

    “Brain computer interfaces (BCIs) are an important new frontier,” OpenAI wrote in a blog post. “They open new ways to communicate, learn, and interact with technology. BCIs will create a natural, human-centered way for anyone to seamlessly interact with AI. This is why OpenAI is participating in Merge Labs’ seed round.” 


    Aside from Altman, other co-founders include Alex Blania and Sandro Herbig, respectively CEO and product and engineering lead at Tools for Humanity, another Altman-backed company (and creator of the eye-scanning World orbs); Tyson Aflalo and Sumner Norman, co-founders of implantable neural tech company Forest Neurotech; and Mikhail Shapiro, a researcher at Caltech.

    As part of the deal, OpenAI will work with Merge Labs on scientific foundation models and other frontier tools to “accelerate progress.” In its blog post, OpenAI noted that AI will not only help accelerate R&D in bioengineering, neuroscience, and device engineering, but that the interfaces will also benefit from AI operating systems that “can interpret intent, adapt to individuals, and operate reliably with limited and noisy signals.”

    In other words, Merge Labs could function as a remote control for OpenAI’s software. That leads into the circular nature of the deal: if Merge Labs succeeds, it could drive more users to OpenAI, which then justifies OpenAI’s investment into the company. It also increases the value of a startup Altman owns using resources from a company he runs.

    OpenAI is also working with Jony Ive’s startup io, which it acquired last year, to produce a piece of AI hardware that doesn’t rely on a screen. Recent unconfirmed leaks suggest the device might be an earbud. 

    OpenAI primarily invests through the OpenAI Startup Fund, which has invested in several other startups connected to Altman, including Red Queen Bio, Rain AI, and Harvey. OpenAI has also entered into commercial agreements with startups Altman personally owns or chairs, including nuclear fusion startup Helion Energy and nuclear fission company Oklo.

Altman has been dreaming about the so-called “Merge” – the idea that humans and machines will merge – since at least 2017, when he published a blog post guessing it would happen somewhere between 2025 and 2075. He also speculated that the merge could take many forms, including plugging electrodes into our brains or becoming “really close friends with a chatbot.”

He said a merge is our “best-case scenario” for humanity surviving against superintelligent AI, which he describes as a separate species that’s in conflict with humans.

    “Although the merge has already begun, it’s going to get a lot weirder,” Altman wrote. “We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”

    TechCrunch has reached out to OpenAI and Merge Labs for more information.

Rebecca Bellan

  • The good, bad, and the ugly of Apple’s AI deal with Google | Fortune


    Apple and Google’s surprise AI partnership announcement on Monday sent shockwaves across the tech industry (and lifted Google’s market cap above $4 trillion). The two tech giants’ deal to infuse Google’s AI technology into Apple’s mobile software, including in an updated version of the Siri digital assistant, has major implications in the high-stakes battle to dominate AI and to own the platform that will define the next generation of computing.

    While there are still many unanswered questions about the partnership, including the financial component and the duration of the deal, some key takeaways are already clear. Here’s why the deal is good news for Google, so-so news for Apple, and bad news for OpenAI.

    The deal is further validation that Google has got its AI mojo back

When OpenAI debuted ChatGPT in November 2022, and throughout a good part of the next two years, many industry observers had their doubts about Google’s prospects in the changing landscape. The search giant at times appeared to be floundering as it raced to field models that could be as capable as OpenAI’s ChatGPT and Anthropic’s Claude. Google endured several embarrassing product debuts, in which its Bard chatbot and then its successor Gemini models got facts wrong, recommended glue as a pizza topping, and generated images of historically anachronistic Black Nazis.

    But today, Google’s latest Gemini models (Gemini 3) are among the most capable on the market and gaining traction among both consumers and businesses. The company has also been attracting lots of customers to its Google Cloud, in part because of the power of its bespoke AI chips, called tensor processing units (or TPUs), which may offer cost and speed advantages over Nvidia’s graphics processing units (GPUs) for running AI models.

    Apple’s statement on Monday that “after careful consideration” it had determined that Google’s AI technology “provides the most capable foundation for Apple Foundation Models” served as Gemini’s ultimate validation—particularly given that until now, OpenAI was Apple’s preferred technology provider for “Apple Intelligence” offerings. Analysts at Bank of America said the deal reinforced “Gemini’s position as a leading LLM for mobile devices” and should also help strengthen investor confidence in the durability of Google’s search distribution and long-term monetization.

    Hamza Mudassir, who runs an AI agent startup and teaches strategy and policy at the University of Cambridge’s Judge School of Business, said Apple’s decision is likely about more than just Gemini’s technical capabilities. Apple does not allow partners to train on Apple user data, and Mudassir theorized that Apple may have concluded Google’s control over its ecosystem—such as owning its own cloud—could provide data privacy and intellectual property guarantees that perhaps OpenAI or Anthropic couldn’t match.

The deal also likely translates directly into revenue for Google. Although the financial details of the deal were not disclosed, a previous report from Bloomberg suggested Apple was paying Google about $1 billion a year for the right to use its tech.

The bigger prize for Google may be the foot in the door the deal provides to Apple’s massive distribution channel: the approximately 1.5 billion iPhone users worldwide. With Gemini powering the new version of Siri, Google may get a share of any revenue those users generate through product discovery and purchases made through a Gemini-powered Siri. Eventually, it might even lead to an arrangement that would see Gemini’s chatbot app pre-installed on iPhones.

    For Apple, the implications of the deal are a bit more ambivalent

    Apple’s Tim Cook
    David Paul Morris/Bloomberg via Getty Images

    The iPhone maker will obviously benefit from giving users a much more capable Siri, as well as other AI features, at an attractive cost and while guaranteeing user privacy. Dan Ives, an equity analyst who covers Apple for Wedbush, said in a note the deal provided Apple with “a stepping stone to accelerate its AI strategy into 2026 and beyond.”

    But Apple’s continuing need to rely on partners—first OpenAI and now Google—to deliver these AI features is a worrisome sign, suggesting that Apple, a champion of vertical integration, is still struggling to build its own LLM.

    It’s a problem that has dogged the company since the beginning of the generative AI era: For months last year several Apple Intelligence features were delayed, and the long-awaited debut of an updated Siri has been pushed back numerous times. These delays have taken a toll on Apple’s reputation as a tech leader and angered customers, some of whom filed a class action lawsuit against the company after the AI features promoted in ads for the iPhone 16 weren’t initially available on the device.

    When Apple CEO Tim Cook promised an updated version of Siri would be released in 2026, many assumed it would be powered by Apple’s own AI models. But apparently those models are not yet ready for prime time and the new Siri will be powered by Google instead.

    Daniel Newman, an analyst at the Futurum Group, said that 2026 is a “make-or-break year” for Apple. “We have long said the company has the user base and distribution that allows it to be more patient in chasing new trends like AI, but this is a critical year for Apple,” Newman said.

    Cook has shaken up the ranks, installing a new head of AI who previously worked at Google on Gemini. And, if the delays turn out to be related to Apple’s specific requirements around things like privacy, it may ultimately prove to have been worth the wait. Ideally, Apple would want an AI model that matches the capabilities of those from OpenAI, Anthropic, and Google but which is compact enough to run entirely on an iPhone, so that user data does not have to be transmitted to the cloud. It’s possible, said Mudassir, that Apple is grappling with technical limitations involving the amount of power these models consume and how much heat they generate. Partnering with Google buys Apple time to make breakthroughs in compression and architecture while also getting Wall Street “off its back,” he said.

    Apple defenders note that the company is rarely a first mover in new technology—it was not the first to create an MP3 player, a smartphone, wireless earphones, or a smart watch, yet it came from behind to dominate many of those product categories with a combination of design innovation and savvy marketing. And Apple has a history of learning from partners for key technology, such as chips, before ultimately bringing these efforts in-house.

    Or, in the case of internet search, Apple simply partnered with Google for the long-term, using the Google engine to handle search queries in its Safari browser. The fact that Apple never developed its own search engine has not hurt its growth. Could the same principle hold true for AI?

But the Apple-Google tie-up is almost certainly bad news for OpenAI

    OpenAI CEO Sam Altman
    Florian Gaertner/Photothek via Getty Images

    While the Google partnership is not exclusive, meaning that Apple may continue to rely on OpenAI’s models for some of its Apple Intelligence features and OpenAI still has a chance to prove its models’ worth to Cupertino, Apple’s decision to go with Google is definitely a blow. At the very least, it solidifies the narrative that Google has not only caught up with OpenAI, but has now edged past it in having the best AI models in the market.

    Deprived of built-in distribution through Apple’s customer base, OpenAI may find it harder to grow its own user base. The company currently boasts more than 800 million weekly users, but recent reports suggest that the rate of usage may be slowing. OpenAI CEO Sam Altman has noted that many people currently see ChatGPT as synonymous with AI. But that perception could fray if Apple users find delight in using Gemini through Siri and come to see Gemini as the better model.
    Altman told reporters last month that he sees Apple as his company’s primary long-term rival. OpenAI is in the process of developing a new kind of AI device, with help from Apple’s former chief designer Jony Ive, that Altman hopes will rival the phone as the primary way consumers interface with AI assistants. That device may debut this year. As long as Apple was dependent on ChatGPT to power Siri, OpenAI had a good view into the capabilities its new device would be competing against. OpenAI is unlikely to have as much insight into Apple’s AI capabilities going forward, which may make it harder for the upstart to position its new device as an iPhone killer.

OpenAI has to hope its new device is a hit that lets it lock users into a closed ecosystem, not dissimilar to the one Apple has built around its hardware and iOS software. This “walled garden” approach is one way to keep users from switching to rival products when they offer broadly similar capabilities. OpenAI will also have to hope its AI researchers achieve breakthroughs that give it a more decisive and long-lasting edge over Google. That might convince Apple to rely more heavily on OpenAI again in the future. Or, it could obviate the need for OpenAI to have distribution on Apple’s devices at all.

    This story was originally featured on Fortune.com

Jeremy Kahn, Beatrice Nolan

  • OpenAI is hiring a new Head of Preparedness to try to predict and mitigate AI’s harms


    OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company’s safety strategy. It comes at the end of a year that’s seen OpenAI hit with numerous accusations about ChatGPT’s impacts on users’ mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the “potential impact of models on mental health was something we saw a preview of in 2025,” along with other “real challenges” that have arisen alongside models’ capabilities. The Head of Preparedness “is a critical role at an important time,” he said.

    Per the job listing, the Head of Preparedness (who will make $555K, plus equity), “will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.” It is, according to Altman, “a stressful job and you’ll jump into the deep end pretty much immediately.”

    Over the last couple of years, OpenAI’s safety teams have undergone a lot of changes. The company’s former Head of Preparedness, Aleksander Madry, was reassigned back in July 2024, and Altman said at the time that the role would be taken over by execs Joaquin Quinonero Candela and Lilian Weng. Weng left the company a few months later, and in July 2025, Quinonero Candela announced his move away from the preparedness team to lead recruiting at OpenAI.

Cheyenne MacDonald

  • Sam Altman Hints at the Radical Design Choices Behind OpenAI’s Upcoming Devices


    Sam Altman claims that the AI device that OpenAI is currently building with famed Apple designer Jony Ive is actually a family of devices, and teased that they will likely not include a key component used by nearly every other smart device. 

    In a video interview on Dec. 18, Big Technology’s Alex Kantrowitz pushed the OpenAI co-founder and CEO to provide additional news about the device, which was officially announced in May. Responding to rumors that the device would be phone-sized and lack a screen, Altman clarified that OpenAI will be releasing a “small family of devices,” rather than a single device. 

    As for the rumored lack of a screen, Altman didn’t provide any firm information, but did opine on the current state of user interfaces for AI applications. He predicted that over time, computers will evolve from “dumb, reactive” machines into smarter, “proactive” entities that can understand user intent. But current devices, like laptops and smartphones, are not “well-suited to that kind of world.” 

Altman said that he wants OpenAI’s devices to break some of the “unquestioned assumptions” around how smart devices work, since AI is such a unique technology, with screens being a prime example of such an assumption. Using a screen would limit OpenAI’s device to “the same way we’ve had graphical user interfaces working for many decades,” said Altman, and a keyboard would only slow down interactions.

    “I don’t think the current form factor of devices is the optimal fit, it’d be very odd if it were, for this incredible new affordance we have,” Altman said. 

    Ive has also expressed a distaste for screens in devices over recent years, and has even expressed regret for the “unintended consequences” of his role in popularizing smart devices with screens through the iPhone and iPad. 

    In a November interview, Ive said that he “can’t bear products that are like a dog wagging their tail in your face,” and that the new devices will be designed to spark joy in users. The devices are still a long way away, though. Ive said in that same interview that he plans to reveal the devices within two years.

Ben Sherry

  • Sam Altman’s Cringe AI Thirst Trap Says a Lot About the Future of OpenAI


    OpenAI’s latest AI model launch has raised questions about the company’s wide range of projects and priorities, due in part to an NSFW image that co-founder and CEO Sam Altman generated and shared to promote it. 

    On December 16, OpenAI released an updated image-generation feature for ChatGPT, powered by its latest text-to-image AI model, named GPT-Image-1.5. Altman posted about the new model on his X account, and, as an example of its capabilities, included an AI-generated image of himself as a shirtless, muscular firefighter standing above a Christmas-themed December calendar. 

    According to X’s metrics, Altman’s firefighter post has been viewed over four million times and reposted over 1,000 times. Several of those reposts pointed out that the December dates in the calendar aren’t accurate to 2025, while others remarked on the disparity between Altman’s bold claims of using AI to cure cancer and eliminate poverty and OpenAI’s current offerings. 

    GPT-Image-1.5 is designed to compete against Nano Banana, the popular AI image generator and editor Google released in August. According to a recent report from The Information, OpenAI deprioritized development on new image models several months ago, but when Google released Nano Banana, “leaders at OpenAI rushed to improve its image technology.” 

    The Information also reported that according to some OpenAI employees, for much of 2025 “Altman seemed to be running OpenAI as if it had already conquered the chatbot market,” venturing beyond the core ChatGPT business into AI video and social media with Sora, web browsers with ChatGPT Atlas, and a physical device currently being designed by Jony Ive. Some of these initiatives reportedly “took resources away from efforts to increase ChatGPT’s mass appeal.” 

    In a video posted to OpenAI’s X account on December 17, OpenAI co-founder and president Greg Brockman admitted that new products like image generation require large amounts of compute, which has forced leadership to make difficult trade-offs. 

    When OpenAI released its previous frontier image-generation model in March of this year, it set off a viral trend of users generating images in the style of beloved anime production company Studio Ghibli. Usually, having your product go viral is an absolute win for businesses, but according to Brockman, the trend was so massive that OpenAI decided to “take a bunch of compute from research and move it to our deployment” in order to meet the demand. “That was really sacrificing the future for the present,” Brockman said in the video. 


Ben Sherry

  • World launches its ‘super app,’ including crypto pay and encrypted chat features | TechCrunch


    World, the biometric ID verification project co-founded by Sam Altman, released the newest version of its app today, debuting several new features, including an encrypted chat integration and an expanded, Venmo-like capability for sending and requesting crypto. 

    World was created by the startup Tools for Humanity in 2019, and originally launched its app in 2023. The company says that, in a world roiled by AI-generated digital fakery, it hopes to create digital “proof of human” tools that can help separate the humans from the bots.

    During a small gathering at World’s headquarters in San Francisco on Thursday, Altman and World’s co-founder and CEO, Alex Blania, briefly introduced the new version of the app (which developers have termed a “super app”) before the product team took over to explain the new features. During his remarks, Altman said that the concept for World grew out of conversations he and Blania had had about the need to create a new kind of economic model. That model, based around web3 principles, is what World has been trying to accomplish through its verification network. “It’s really hard to both identify unique people and do that in a privacy-preserving way,” said Altman.

    World Chat, the app’s new messenger, seems designed to do just that. It uses end-to-end encryption to keep users’ conversations safe (this encryption is described as being equivalent to Signal, the privacy-focused messenger), and also leverages color-coded speech bubbles to alert users to whether the person they’re talking to has been verified by World’s system or not, the company said. The idea is to incentivize verification, giving people the power to know whether the person they’re talking to is who they say they are. Chat was originally launched in beta in March.

    The other big feature reveal on Thursday was an expanded digital payment system that allows app users to send and receive cryptocurrency. World App has functioned as a digital wallet for some time, but the newest version includes broader capabilities. Using virtual bank accounts, users can receive paychecks directly into World App and make deposits from their bank accounts, both of which can then be converted into crypto. You don’t have to be verified by World’s authentication system to use these features.

    Tiago Sada, World’s chief product officer, told TechCrunch that part of the reason chat was added was to create a more interactive experience for users. “What we kept hearing from people is that they wanted a more social World app,” Sada said. World Chat is designed to fill that need, creating what Sada says is a secure way to communicate. “It took a lot of work to make this feature-rich messenger that is similar to a WhatsApp or a Telegram, but with encryption and security of something that is a lot closer to Signal,” Sada said.

    World (which was originally called Worldcoin) deploys a unique authentication process: interested humans get their eyes scanned at one of the company’s offices, where the Orb—a large verification device—converts the person’s iris into a unique and encrypted digital code. That code, the verified World ID, can then be used by the person to interact with World’s ecosystem of services, which are available through its app.


    The addition of more social-friendly features is clearly meant to drive broader adoption of the app, which makes sense, since scaling verification is the company’s main challenge. Altman has said that he would like the project to scan a billion people’s eyes, but Tools for Humanity claims to have scanned fewer than 20 million people.

    Since standing in long lines at a corporate office to have your eyeballs scanned by a giant metallic ball may seem slightly less than enticing to some users, the company has already sought to make its verification process less cumbersome. In April, Tools for Humanity announced its Orb Minis—hand-held, phone-like devices—that allow users to scan their own eyes from the comfort of their homes. Blania previously told TechCrunch that, eventually, the company would like to turn the Orb Minis into a mobile point-of-sale device or sell its ID sensor tech to device manufacturers. If the company takes such steps, it would drop the barrier to verification significantly, potentially inspiring much more widespread adoption.


    Lucas Ropek

    Source link

  • Time’s 2025 Person of the Year goes to “the architects of AI”


    Time magazine is spotlighting key players in the artificial intelligence revolution for its 2025 Person of the Year, the magazine announced Thursday. “The architects of AI” are the latest recipients of the designation, which for more than a century has been given out on an annual basis to an influential person, group of people or, occasionally, a defining cultural theme or idea. 

    Previous Person of the Year title-holders have held varying roles across a vast range of occupations, with President Trump taking last year’s cover and Taylor Swift the one before. In 2025, the honorific went to the minds and financiers behind AI’s rise to renown and notoriety, including Nvidia CEO Jensen Huang, SoftBank CEO Masayoshi Son and Baidu CEO Robin Li, who spoke directly with the magazine for its feature story.

    “Person of the Year is a powerful way to focus the world’s attention on the people that shape our lives,” wrote Sam Jacobs, Time’s editor-in-chief, in an editorial piece about the magazine’s decision. “And this year, no one had a greater impact than the individuals who imagined, designed, and built AI.”

    Jacobs described 2025 as “the year when artificial intelligence’s full potential roared into view, and when it became clear that there will be no turning back or opting out,” adding: “Whatever the question was, AI was the answer.”

    The magazine prepared two separate covers for the issue. In one, artist Jason Seiler painted an interpretative recreation of the iconic 1932 photograph “Lunch Atop a Skyscraper,” an image that depicted workers seated side-by-side on a steel beam hanging high above New York City during the construction of 30 Rockefeller Plaza, which became a symbol of American resilience during the Great Depression. 

    A cast of tech industry figures at the forefront of AI development is perched on the beam in Seiler’s recreation. Mark Zuckerberg of Meta; Lisa Su of Advanced Micro Devices; Elon Musk of xAI; Sam Altman of OpenAI; Demis Hassabis of DeepMind Technologies; Dario Amodei of Anthropic; and Fei-Fei Li of Stanford’s Human-Centered AI Institute are all pictured, along with Huang.

    The second cover illustration, by artist Peter Crowther, places the same executives among scaffolding at what looks like a construction site for the giant letters “AI.”

    From left, cover art by Jason Seiler and Peter Crowther for TIME’s 2025 Person of the Year magazine spread.

    Jason Seiler/TIME; Peter Crowther/TIME


    “Every industry needs it, every company uses it, and every nation needs to build it,” Huang said of balancing the pressures to implement AI responsibly and deploy it to the public as quickly as possible. “This is the single most impactful technology of our time.”  

    Most of the industry figures pictured on Time’s cover did not speak to the magazine for the story, so this year’s spread mainly focuses on the implications — positive, negative and in between — of the companies they have built and the technology they continue forging. 

    AI took center stage in 2025: in investigative news reports, in economic and academic studies, and in Washington, D.C., where policymakers grappled with how to regulate the technology even as tech giants raced to outdo one another’s inventions. Some of those products, like chatbots, became commonplace, at times with tragic consequences.

    “For these reasons, we recognize a force that has dominated the year’s headlines, for better or for worse,” Jacobs wrote in his editorial. “For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME’s 2025 Person of the Year.”


    Source link

  • Disney is investing $1 billion in OpenAI and licensing its characters for Sora


    (CNN) — Disney is taking a $1 billion equity stake in OpenAI, while also striking a deal that allows its famous characters to be used on Sora, the AI company’s video generation platform.

    Disney’s investment in OpenAI is the first such major licensing agreement for Sora.

    Under the agreement, users of OpenAI’s short-form video-generating social media network Sora will be allowed to make videos using more than 200 Disney animated characters. Those characters include Mickey and Minnie Mouse; Disney Princesses like Ariel, Belle and Cinderella; and characters from Frozen, Moana and Toy Story. Animated characters from Marvel and Lucasfilm, including Black Panther and Star Wars characters like Yoda, are included as well, although the agreement does not cover any talent likenesses or voices.

    Users of OpenAI’s popular chatbot ChatGPT will also be able to ask the bot to create images using the Disney characters.

    “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Disney CEO Robert A. Iger said in a statement.

    OpenAI, which has come under scrutiny for copyright violations, and for striking massive “circular” deals that have fueled fears of an AI bubble, said the deal shows how the creative community and AI companies can get along.

    “Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

    Shortly after the announcement, Iger and Altman both sat down with CNBC’s David Faber, during which the Disney boss stressed that the deal “does not, in any way, represent a threat to the creators.”

    “In fact, the opposite, I think it honors them and respects them, in part because there’s a license fee associated with it,” Iger said, later adding that the goal is to “continue to honor, respect, value the creative community in general.”

    Iger also stressed that the deal allows Disney to “be comfortable that OpenAI is putting guardrails essentially around how these are used,” adding that, “really, there’s nothing for us to be concerned about from a consumer perspective.” Altman, too, stressed the presence of guardrails, telling Faber that “it’s very important that we enable Disney to set and evolve those guardrails over time, but they will, of course, be in there.”

    The deal is exclusive, per Iger, at least in part. The Disney CEO hinted that “there is exclusivity, basically, at the beginning of the three-year agreement,” but remained mum on what that means. Asked if OpenAI is pursuing similar deals with other companies, Altman said, “I won’t rule out anything in the future, but we think this alone is going to be a wonderful start.”

    Disney has previously sued AI companies for using its intellectual property. On Monday, the company sent Google a cease and desist letter, according to a source familiar with the situation.

    The cease and desist letter claims Google’s AI products, including its image- and video-generating tools Veo and Nano Banana, are infringing Disney’s copyrights “on a massive scale” by allowing users to create images and videos depicting its characters. The letter alleges that Google has “refused to implement any technological measures to mitigate or prevent copyright infringement.”

    In response, a Google spokesperson said the company has “a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them. More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-Extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

    Disney had already sent similar cease and desist letters to Meta and Character.AI. In June, Disney and Universal sued AI image generation company Midjourney, alleging the company violated copyright law.

    This story has been updated with additional developments and context.


    Hadas Gold and CNN

    Source link

  • OpenAI’s Secretive A.I. Gadget Designed by Jony Ive Aims to Redefine Tech’s Vibe


    An A.I. device project spearheaded by Sam Altman and Jony Ive has earned the backing of Laurene Powell Jobs. Barbara Kinney/Emerson Collective

    Sam Altman and Jony Ive have stayed painstakingly cryptic about what their collaborative A.I. hardware device will ultimately look like. So far, the OpenAI CEO and former Apple designer have shared only that the product will be less clunky than a laptop and less screen-focused than a smartphone. Their latest hint, meanwhile, speaks to the product’s overall “vibe.”

    Current devices can feel like walking through Times Square, with all “the little indignities along the way: flashing lights in my face, tension going here, people bumping into me, noises going off,” Altman said at a recent event hosted by Laurene Powell Jobs’s Emerson Collective. OpenAI’s upcoming device, he added, will instead evoke the feeling of “sitting in the most beautiful cabin by a lake in the mountains and just sort of enjoying the peace and calm.”

    Altman and Ive officially joined forces in May when OpenAI acquired the designer’s hardware startup, io, which previously received backing from Powell Jobs, in a $6.5 billion deal. The acquisition brought Ive into the fold to oversee OpenAI’s efforts to design a consumer-facing A.I. device that reimagines how people interact with technology.

    “What I went to with Sam wasn’t a product but a tentative thesis. It was a thought about the nature of objects and our interface,” Ive said at the same event, declining to offer more details about the pitch he delivered.

    What little the pair have disclosed about their project remains frustratingly vague. The initial design goal was to create something users “want to lick or take a bite out of,” Altman said, adding that an early prototype was scrapped in part because it didn’t fit that description.

    They appear to have since crossed that threshold. According to Altman, their work has now produced its first prototypes, which he described as “jaw-droppingly good.” The final product is expected to arrive in under two years, giving users plenty of time to, as he joked, lick and bite the device to their heart’s content.

    Altman and Ive have emphasized that their device will not be another smartphone and have repeatedly warned about the harmful effects of today’s dominant tech products. Nonetheless, from the clues they’ve offered, their approach seems to echo Apple’s sleek design language. OpenAI’s device will be “playful” and full of “whimsy,” Altman said, describing it as so minimal that consumers will look at it and say, “That’s it?”

    Ive, too, stressed restraint and simplicity. “I can’t bear products that are like a dog wagging its tail in your face, or products that are so proud that they solve the complicated problem and want to remind you of how hard it is,” said the designer. “I love solutions that teeter on appearing almost naive in their simplicity.”

    Even as they try to avoid the pitfalls of modern consumer tech—devices that can fuel unhealthy relationships—the duo are also working toward a release with societal impact on par with landmark products like the iPhone. When asked which device he uses most often, Altman pointed to the iPhone, calling it “the most ‘before-and-after-moment’ product of my life.”



    Alexandra Tremayne-Pengelly

    Source link