ReportWire

Tag: openai

  • OpenAI’s Sora bans Martin Luther King Jr. deepfakes after his family complained

    New York (CNN) — OpenAI announced that it has “paused” users’ ability to generate videos of Martin Luther King Jr. on its artificial intelligence video tool Sora, following backlash over “disrespectful depictions.”

    “While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used,” the company said in a Thursday statement posted on X. “Authorized representatives or estate owners can request that their likeness not be used in Sora cameos.”

    The change comes a few weeks after the launch of Sora 2, which lets users make realistic-looking AI-generated videos using real and historical people. Critics charge that it’s contributing to an era of misinformation and “AI slop” that is blurring the lines between what’s real and what’s fake.

    The product has also generated online discussion about ethics around the use of this technology. Some creators were using King’s likeness for inappropriate purposes. Users recently recreated the late actor Robin Williams in AI videos, prompting his daughter Zelda to call them “disturbing.”

    OpenAI said it “thanks Dr. Bernice A. King for reaching out on behalf of King, Inc., and John Hope Bryant and the AI Ethics Council for creating space for conversations like this.”

    The King Center didn’t immediately respond to CNN’s request for comment.

    Jordan Valinsky and CNN

  • The Walmart Integration With ChatGPT Has 1 Glaring Problem

Last week, OpenAI rolled out “Instant Checkout,” which allows users to search for, and—more importantly—buy products directly within ChatGPT. The first big-name partner was Shopify, which makes sense: Shopify powers e-commerce for many of the small sellers on the internet, and some bigger names that might surprise you. Naturally, the company would want to make the ability to be discovered in ChatGPT available to those sellers.

    But then, this week, Walmart announced that it would be a part of Instant Checkout. You can now tell ChatGPT you need a new set of towels or an iPhone case, and it will suggest options from Walmart’s catalog and complete the purchase for you.

    That’s a big deal. But there is a catch: For now, Walmart’s ChatGPT integration won’t include fresh groceries, according to The Wall Street Journal.

    That’s a big miss.

    The AI meal-planning dream

One of the things ChatGPT does best is helping people plan meals. You can ask it to create a weeknight dinner menu for a family of four, and it will instantly return recipes, portion sizes, and shopping lists. The next logical step seems obvious: turn those lists into an order.

    In fact, this is one of the clearest examples of how AI can benefit people in their everyday lives. You should be able to move from “what should I make for dinner?” to “yes, deliver the ingredients tonight.” With an integration like this, we’re so close, and no company is better positioned for that future than Walmart.

    Walmart isn’t just a discount store. It’s the largest grocer in the United States, with more than half its sales coming from food. It has cold-chain logistics, neighborhood stores for same-day pickup, and a massive delivery network already in place.

    Fresh food is a unique challenge

    That infrastructure is the hardest part of grocery e-commerce. Amazon, Instacart, and DoorDash all compete in that space, but they depend on partnerships. Walmart owns the entire stack—from warehouse to doorstep.

If ChatGPT is where people go to plan meals, Walmart could become the default place where they turn those plans into purchases. It wouldn’t just sell groceries; it would own the conversion layer between digital intent and physical goods. This is why the absence of fresh food in the ChatGPT integration is puzzling. It feels like such an obvious connection.

There are, of course, practical reasons. I get that groceries are complicated. For fresh food especially, the logistics of getting an order delivered to your home while it’s still, well, fresh, aren’t easy. It’s not entirely surprising that Walmart is taking it slow, at least when it comes to food with a short shelf life. Managing perishable food items requires a lot more coordination than selling HDMI cables or socks.

    But solving that problem could be the killer feature of AI-powered shopping.

    Getting to the future of retail

    Ultimately, every major player in retail knows that AI-driven commerce will depend on who controls the interface, not just who has the stores with all the inventory. If ChatGPT becomes the default place people plan their meals, Walmart has to be the default fulfillment partner. Otherwise, that space will be filled by someone else.

    Shopify is already trying to fill that space. And, for Walmart, which has millions of third-party sellers who offer products in its marketplace, it’s a space it can’t just hand over to a competitor.

    Walmart’s ChatGPT partnership is smart. It shows that the company understands where commerce is heading: away from search bars and toward natural language. It gives Walmart a foothold inside a rapidly growing AI ecosystem that has the potential to change the way people shop.

    But the glaring miss—the absence of fresh groceries—underscores how difficult it will be to fully capture that opportunity. Groceries are Walmart’s greatest advantage and its most complicated challenge. They represent the most frequent purchases, the richest data, and the strongest potential for habit formation.

    If Walmart can figure out how to let ChatGPT plan your meals, generate your list, and deliver everything by dinnertime, it won’t just be keeping up with AI commerce. It will define it.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Jason Aten

  • ChatGPT to allow

    OpenAI says it plans to loosen restrictions on ChatGPT, including allowing the chatbot to sext with verified adults. CBS News senior business and technology correspondent Jo Ling Kent reports.

  • Sam Altman Would Like Us All to Be Grown-Ups About the Sexy Stuff Coming Soon to ChatGPT

    In an X post on Wednesday, OpenAI CEO Sam Altman clarified that when he said ChatGPT might soon manufacture custom erotica, that “was meant to be just one example of [OpenAI] allowing more user freedom for adults.”

A post from Altman the previous day had alerted the world to the fact that ChatGPT will soon include “erotica for verified adults,” and Altman now says that post “blew up on the erotica point” more than he expected. “Erotica” is a vague term with no technical or legal definition. It tends to be used by collectors of old-timey nude photos, or to describe art or literature that includes titillating amounts of sex and nudity while needing to sound like it has more redeeming aesthetic value than pornography.

    So go ahead and picture something sexy coming from ChatGPT, but not too sexy, because that would be porn, and as OpenAI told Mashable last year, “We have no intention to create AI-generated pornography.”

    We asked OpenAI to clarify whether it will generate “erotica” in the form of chats only, or whether there will be erotic images produced within the ChatGPT app by its image model, DALL-E—the one that’s so impressive at generating images that look like anime, and which may or may not soon be capable of generating hentai. We will update if we hear back.

    The erotica remark in the earlier Altman post was about a coming update aimed at removing safeguards, and ostensibly allowing “verified adults” to chat with a broadly less restricted version of OpenAI’s signature product. As we noted at the time, the more permissive version of the chat app soon to be delivered sounds a bit like OpenAI highlighting the seemingly addictive or parasocial attributes of ChatGPT once again, after the GPT-5 update flopped at least in part because its default tone had become less friendly and supportive.

    Many, however, reasonably gleaned the idea that porn—the form of content that gets perhaps 13-20 percent of all search traffic online—is in fact on its way to ChatGPT. One popular post speculated that OpenAI was launching a full-scale invasion of the online porn sphere. That’s not a crazy assumption. OpenAI is expected to have cash outflows of around $115 billion between now and 2029, and Altman has been explicit about his company needing to find ways to bring in revenue, even if—as with the launch of Sora 2—OpenAI gets criticized for poor taste. Sora 2’s tsunami of slop videos is justified, Altman says, because it makes people smile, and can “hopefully make some money given all that compute need.” Well, some analysts have estimated the value of the porn industry at close to $200 billion. A piece of that action would build an awful lot of compute.

    On the internet, wild speculation that OpenAI is getting into porn, or porn-adjacent “erotica,” to drive revenue is inevitable given what the company’s CEO is teasing here. If Altman’s intent is to kick off another version of the 1980s home video revolution in order to bring in the cold hard cash his company so desperately needs, content for horny people who aren’t all that discerning would be a historically grounded, if tacky, way to speed up revenue growth.

    So no, OpenAI hasn’t yet clarified where the sexy stuff will come out of the AI pipes, and whether it will be text, photos, or even video. But Altman even struck a rather Larry Flynt-like, free-speech-warrior tone in his clarifying post, saying that “allowing a lot of freedom for people to use AI in the ways that they want is an important part of [OpenAI’s] mission,” and adding that he and his company “are not the elected moral police of the world.”

    Mike Pearl

  • Even the Inventor of ‘Vibe Coding’ Says Vibe Coding Can’t Cut It

It’s been over a year since OpenAI cofounder Andrej Karpathy exited the company. Since then, he has coined and popularized the term “vibe coding” to describe the practice of farming out coding projects to AI tools. But earlier this week, when he released his own open-source model called nanochat, he admitted that he wrote the whole thing by hand, vibes be damned.

    Nanochat, according to Karpathy, is a “minimal, from scratch, full-stack training/inference pipeline” that is designed to let anyone build a large language model with a ChatGPT-style chatbot interface in a matter of hours and for as little as $100. Karpathy said the project contains about 8,000 lines of “quite clean code,” which he wrote by hand—not necessarily by choice, but because he found AI tools couldn’t do what he needed.

    “It’s basically entirely hand-written (with tab autocomplete),” he wrote. “I tried to use claude/codex agents a few times but they just didn’t work well enough at all and net unhelpful.”

    That’s a much different attitude than what Karpathy has projected in the past, though notably he described vibe coding as something best for “throwaway weekend projects.” In his post that is now often credited with being the origin of “vibe coding” as a popular term, Karpathy said that when using AI coding tools, he chooses to “fully give in to the vibes” and not bother actually looking at the code. “When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away,” he wrote. “I’m building a project or webapp, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

    Of course, nanochat is not a web app, so it makes sense that the strategy didn’t work in this case. But it does highlight the limitations of such an approach, despite lofty promises that it’s the future of programming. Earlier this year, a survey from cloud computing company Fastly found that 95% of surveyed developers said they spend extra time fixing AI-generated code, with some reporting that it takes more time to fix errors than is saved initially by generating the code with AI tools. Research firm METR also recently found that using AI tools actually makes developers slower to complete tasks, and some companies have started hiring human specialists to fix coding messes made by AI tools. The thing to remember about vibe coding is that sometimes the vibes are bad.

    AJ Dellinger

  • Japan asks OpenAI not to infringe on ‘irreplaceable’ manga and anime content

    Japan’s government has asked OpenAI not to infringe on anime and manga content that it called “irreplaceable treasures,” according to a report from ITMedia seen by IGN. The request was made by a key minister in charge of AI and IP in response to numerous videos from OpenAI’s Sora 2 generator that use copyrighted material from Japanese studios.

    “We have requested OpenAI not to engage in any actions that could constitute copyright infringement,” said cabinet minister Minoru Kiuchi at a press conference last week. “Anime and manga are irreplaceable treasures that we can be proud of around the world.”

Launched on October 1, OpenAI’s Sora 2 can generate 1080p videos up to 20 seconds long with sound. The company also released the Sora app, which uses Sora 2 to generate TikTok-style videos of nearly anything. Anime has been a key theme, with many short videos replicating copyrighted materials from franchises like Dragon Ball and Pokémon.

Despite the request, Japan has been one of the more progressive nations when it comes to artificial intelligence. The nation’s AI Promotion Act aims to boost the use of AI as an economic growth driver, while also outlining guidelines around copyright infringement. However, the topic of enforcement is still fuzzy, so the government is trying to get a better grip on it. “Japan bears a responsibility to take the lead on making rules [around AI and copyright], precisely because we are a country… [that creates] anime, games, and music,” said parliament member Akihisa Shiozaki on his blog.

Last month, OpenAI said it had contacted studios to give them the option of opting out of Sora 2 training on their materials, Reuters reported. The new process requires movie studios and other content owners to explicitly ask OpenAI to exclude their copyrighted material from videos generated by Sora. It’s not known which, if any, Japanese studios the company has contacted.

    Steve Dent

  • Newsom Vetoes Bill to Restrict AI Chatbots for Minors

    The governor said the proposed AI restrictions were too broad, even as parents and advocates urged stronger safeguards for minors online

    On Monday, California Governor Gavin Newsom vetoed a bill meant to restrict the usage of AI chatbots for anyone under 18. 

The bill was proposed by Assemblymember Rebecca Bauer-Kahan (D) as the Leading Ethical AI Development for Kids (LEAD) Act. It would have restricted any companion chatbot platform, including those from OpenAI and Meta, from being used by a minor if there were obvious potential for harm or sexual conversations.

    “While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” Newsom said.

    Newsom faced intense pressure on the LEAD Act, including a personal letter from parents who said their son took his own life after ChatGPT became his “suicide coach.” On the opposing side, the tech industry argued that the bill was too broad and would stifle innovation by taking away useful tools for children, such as AI tutoring systems and programs that could detect early signs of dyslexia.

Common Sense Media, a non-profit organization that reviews and rates media for families and sponsored the LEAD Act, decried the veto. James Steyer, Common Sense Media’s founder and CEO, said in a statement, “It is genuinely sad that the big tech companies fought this legislation, which actually is in the best interest of their industry long-term.”

Newsom signed a narrower measure, authored by Sen. Steve Padilla (D), that will require chatbots to establish protocols to “detect, remove, and respond to instances of suicidal ideation by users.”

Chatbot operators will now have to implement protocols to ensure their systems do not deliver self-harm or suicide content to users, as well as put in place “reasonable measures” to prevent chatbots from encouraging minors to engage in sexually explicit conduct.

    Anastasia Van Batenburg

  • Sam Altman: Lord Forgive Me, It’s Time to Go Back to the Old ChatGPT

    Earlier this year, OpenAI scaled back some of ChatGPT’s “personality” as part of a broader effort to improve user safety following the death of a teenager who took his own life after discussing it with the chatbot. But apparently, that’s all in the past. Sam Altman announced on Twitter that the company is going back to the old ChatGPT, now with porn mode.

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman said, referring to the company’s age-gating that pushed users into a more age-appropriate experience. Around the same time, users started complaining about ChatGPT getting “lobotomized,” providing worse outputs and less personality. “We realize this made it less useful/enjoyable to many users who had no mental health problems,” Altman continued, “but given the seriousness of the issue we wanted to get this right.” That change followed the filing of a wrongful death lawsuit from the parents of a 16-year-old who asked ChatGPT, among other things, for advice on how to tie a noose before taking his own life.

    But don’t worry, that’s all fixed now! Despite admitting earlier this year that safeguards can “degrade” over the course of longer conversations, Altman confidently claimed, “We have been able to mitigate the serious mental health issues.” Because of that, the company believes it can “safely relax the restrictions in most cases.” In the coming weeks, according to Altman, ChatGPT will be allowed to have more of a personality, like the company’s previous 4o model. When the company upgraded its model to GPT-5 earlier this year, users began grieving the loss of their AI companion and lamenting the chatbot’s more sterile responses. You know, just regular healthy behaviors.

    “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing),” Altman said, apparently ignoring the company’s own previous reporting that warned people could develop an “emotional reliance” when interacting with its 4o model. MIT researchers have warned that users who “perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive.” Now that’s apparently a feature and not a bug. Very cool.

    Taking it a step further, Altman said the company would further embrace its “treat adult users like adults” principle by introducing “erotica for verified adults.” Earlier this year, Altman mocked Elon Musk’s xAI for releasing an AI girlfriend mode. Turns out he’s come around on the waifu way.

    AJ Dellinger

  • Walmart partners with OpenAI so shoppers can buy things directly in ChatGPT

    Walmart is partnering with OpenAI to give shoppers a new feature that lets them complete purchases using ChatGPT, as the retailer invests in artificial intelligence to improve operations. 

    Using ChatGPT’s new “Instant Checkout” feature, shoppers in conversation with the AI-powered bot will be able to browse Walmart’s offerings and complete purchases from within the app.

    ChatGPT first announced “Instant Checkout” last month. The shopping feature lets users query ChatGPT for things like “best mattress under $1,000” or “gift for an avid reader,” and buy suggested products from within the chat, without having to navigate outside the app.

    With the Walmart partnership, the AI-driven shopping experience “allows customers and Sam’s Club members to plan meals, restock essentials, or discover new products simply by chatting — Walmart will take care of the rest,” the retail giant said Tuesday.

Walmart touts the move as a push beyond traditional e-commerce search tools that retrieve products based solely on consumers’ requests. “AI will learn and predict customers’ needs, turning shopping from a reactive experience into a proactive one — what Walmart calls agentic commerce,” the company said Tuesday.

    Walmart CEO Doug McMillon said the consumer-facing enhancement is long overdue. 

    “For many years now, eCommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change … We are running toward that more enjoyable and convenient future with Sparky and through partnerships including this important step with OpenAI,” he said in a statement Tuesday. 

    Sparky is Walmart’s generative AI-powered shopping assistant, designed to deliver more conversational and personalized shopping assistance. 

    Sam Altman, cofounder and CEO of OpenAI, the creator of ChatGPT, touted the partnership with Walmart as one that makes “everyday purchases a little simpler.”

E-commerce giant Amazon is also making a foray into the world of so-called agentic AI, in which AI systems take actions on a user’s behalf. Through its “Buy for Me” feature in the Amazon Shopping App, shoppers can buy goods from vendors selling products that aren’t available on Amazon.com without leaving the Amazon ecosystem.

    “If a customer decides to proceed with a Buy for Me purchase, they tap on the Buy for Me button on the product detail page to request Amazon make the purchase from the brand retailer’s website on their behalf,” Amazon explains on its corporate website. “Customers are taken to an Amazon checkout page where they confirm order details, including preferred delivery address, applicable taxes and shipping fees, and payment method.”

  • OpenAI partners with Walmart to let users buy products in ChatGPT, furthering chatbot shopping push

    NEW YORK (AP) — OpenAI is partnering with Walmart to let shoppers make purchases directly within ChatGPT, furthering the artificial intelligence company’s push to turn its chatbot into a virtual merchant as it seeks to boost revenue.

In a Tuesday announcement, Walmart said the new offering will give customers the option to “simply chat and buy.” That means the retailer’s products would be available through instant checkout in ChatGPT — allowing users to buy anything from meal ingredients and household items to other goods they might be discussing with the chatbot.

    “For many years now, eCommerce shopping experiences have consisted of a search bar and a long list of item responses,” Walmart CEO Doug McMillon said in a prepared statement. “That is about to change.”

    Sam Altman, cofounder and CEO of OpenAI, added that the partnership would “make everyday purchases a little simpler.”

    Associated Press

  • Sam Altman Just Made Some Spicy Policy Changes for Adult ChatGPT Users

    OpenAI co-founder and CEO Sam Altman said that the company is planning to “safely relax” restrictions on what kinds of conversations ChatGPT can engage in, and by the end of the year will even allow adult users to have sexually explicit conversations with the AI system. 

    In a post on X on Tuesday, Altman wrote that “we made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.” 

    These restrictions were instituted after parents of children who committed suicide began to accuse ChatGPT of contributing to their children’s mental health crises or even helping to plan suicides. The parents of Adam Raine, a 16-year-old who committed suicide, have even sued OpenAI in an effort to compel the company to change its safety policies. 

    In a September blog post titled “Teen safety, freedom, and privacy,” Altman wrote that OpenAI would restrict teenage ChatGPT users from engaging in any discussions about suicide or self-harm. An earlier post, released in August, stated that OpenAI would strengthen its safeguards and content-blocking classifiers to prevent conversations that shouldn’t be allowed (such as helping someone to self-harm). If a user expresses suicidal intent, OpenAI said, ChatGPT should direct people to the suicide hotline, which is 988.

    In his post on teen safety and freedom, Altman wrote that OpenAI has a policy to “treat our adult users like adults.” For example, he wrote, “the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it.” 

    On Tuesday, Altman wrote that OpenAI has developed new tools that enable the company to “mitigate the serious mental health issues,” and will begin relaxing ChatGPT’s content restrictions. He wrote that in the next few weeks, OpenAI will release a new version of ChatGPT “that allows people to have a personality that behaves more like what people liked about 4o,” referring to its AI model, GPT-4o. 

    After releasing GPT-5 in August, OpenAI removed GPT-4o from its lineup of available models on ChatGPT. This led to an outcry from ChatGPT users who developed a fondness for 4o’s personality. Eventually, OpenAI added 4o back to the lineup for paid subscribers. “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend,” Altman wrote, “ChatGPT should do it.” 

    Altman added that in December, OpenAI will begin rolling out advanced “age-gating” systems, which will predict the age of a user based on how they use ChatGPT. Users who are found to be adults will be able to go further with ChatGPT than was previously allowed, including “erotica for verified adults.” 

    Ben Sherry

  • Ex-Apple CEO John Sculley Says This Company Is Apple’s First ‘Real Competitor’ in Years

    Former Apple CEO John Sculley, who famously led the company from 1983 to 1993, believes that ChatGPT creator OpenAI is Apple’s “first real competitor in many, many decades.” 

    Speaking on a panel at Zeta Live, an annual event hosted by Zeta Global, a marketing technology firm that he cofounded in 2007, Sculley said that he has “huge admiration for the way Tim Cook has run Apple,” but that “AI has not been a particular strength of theirs.”

Sculley explained that in the 80s, Steve Jobs saw personal computing as a medium for empowering office workers with tools that provide rapid access to knowledge. Now, he says, AI agents that can autonomously carry out workflows are handling “more and more of the heavy lifting” that knowledge workers have for decades done in tandem with personal computers.

Where Apple’s personal computing revolution imbued workers with intelligence, Sculley said, agentic AI is the intelligence, capable of doing the work previously entrusted to well-trained humans. “It completely changes the way we do business models,” he added.

    Going forward, Sculley anticipates more companies moving to subscription-based business models, because instead of selling tools that enhance worker productivity, companies will be selling access to solutions that operate on their own, with very little human interaction. In this “agentic era,” he said, a model in which customers pay for a solution for as long as they need it makes more economic sense. 

    This waning dependence on individual apps could be a challenge for Apple, which has built a massive app-based ecosystem. The company recently said that the App Store “facilitated $406 billion in developer billings and sales in 2024.” 

    Acknowledging rumors that Cook is planning to retire soon, Sculley said that whoever becomes the next Apple CEO will need to position the company for an era in which “we don’t need a lot of apps, it could all be done with smart agents working across workflow automation.” 

    It appears OpenAI is already positioning itself for such an era. At the company’s DevDay conference on October 6, OpenAI CEO Sam Altman announced the introduction of apps within ChatGPT, with early adopters including Figma, Booking.com, and Canva. As Sculley said, this new feature enables consumers to offload some of the “heavy lifting” of using apps to AI. 

    Another OpenAI-shaped challenge for Apple? Going up against their beloved former head designer, Jony Ive. In May OpenAI announced that it had acquired Ive’s hardware company, io, for $6.4 billion in order to collaborate on an Ive-designed, OpenAI-powered physical device. “If there’s anyone who’s probably going to be able to bring that dimension to the LLM,” said Sculley, “it’s probably going to be Jony Ive working with Sam Altman.” 

    OpenAI has been on a dealmaking tear recently. This morning, the company announced a deal with chipmaker Broadcom to develop and deploy 10 gigawatts of OpenAI-designed AI accelerator hardware. That deal builds on other recent agreements with fellow chipmakers like Nvidia and AMD that cumulatively have secured over 30 gigawatts of compute capacity just in 2025. 

    Ben Sherry

  • Is the AI Conveyor Belt of Capital About to Stop?

The American economy is little more than a big bet on AI. Morgan Stanley investor Ruchir Sharma recently noted that money poured into AI investments now accounts for about 40% of the United States’ GDP growth in 2025, and AI companies are responsible for 80% of growth in American stocks. So how bad is it that the most recent major deals among AI giants, agreements that have driven up stock prices dramatically, look like a snake eating its own tail?

    In recent months, Nvidia announced that it would invest $100 billion into OpenAI, OpenAI announced that it would pay $300 billion to Oracle for computing power, and Oracle announced it would buy $40 billion worth of chips from Nvidia. It doesn’t take a flow chart to get the feeling that these firms are just moving money around between each other. But surely that’s not happening…right?

    It’s a little harder to get assurances of that than you might think. 

    Is it all round-tripping?

    Many of these agreements are, on their face, mutually beneficial. If everything is on the level, while these deals might be circular, they should be moving everything forward. Rishi Jaluria, an analyst at RBC Capital Markets, told Gizmodo that deals like these could result in a “less capacity-constrained world,” which would allow for faster development of models that could produce higher returns on investment.

    “The better models we have, the more we can realize a lot of these AI use cases that are on hold just because the technology isn’t powerful enough yet to handle it,” he said. “If that happens, and that can generate real [return on investment] for customers … that results in real cost savings, potentially new revenue generation opportunities, and that creates net benefits from a GDP perspective.”

    So as long as we keep having AI breakthroughs and these companies figure out how to monetize their products, everything should be fine. On the off chance that doesn’t happen, though? 

    “If that doesn’t happen, if there is no real enterprise AI adoption, then it’s all round-tripping,” Jaluria said.

    Round-tripping, generally speaking, refers to the unethical and typically illegal practice of making trades or transactions to artificially prop up a particular asset or company, making it look like it’s more valuable and in demand than it actually is. In this case, it would be tech companies that are trying to make it appear like they are more valuable than they actually are by announcing big deals with each other that move the stock price. 
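    A toy ledger makes the mechanics concrete. Below is a minimal sketch (entirely hypothetical firms and amounts, not a model of any real transaction between the companies named here) showing how circular payments can inflate each participant's booked revenue while leaving everyone's cash exactly where it started:

```python
# Toy model of round-tripping: three hypothetical firms pass the same
# cash in a circle, and each one books the inflow as revenue.

def run_circle(firms, payments):
    """Apply (payer, payee, amount) transfers; tally net cash and gross revenue."""
    cash = {f: 0 for f in firms}
    revenue = {f: 0 for f in firms}
    for payer, payee, amount in payments:
        cash[payer] -= amount
        cash[payee] += amount
        revenue[payee] += amount  # recipient records the transfer as revenue
    return cash, revenue

firms = ["A", "B", "C"]
# The same $100 travels A -> B -> C -> A.
circle = [("A", "B", 100), ("B", "C", 100), ("C", "A", 100)]
cash, revenue = run_circle(firms, circle)

print(cash)     # every firm ends flat
print(revenue)  # yet each books $100 of "revenue"
```

    Net cash is unchanged, yet every firm can point to fresh revenue growth, which is the sleight of hand round-tripping relies on.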

    So what might suggest whether this money is actually accomplishing anything other than serving as hot air in a rapidly inflating bubble? Jaluria said he’s watching for faster developments of models, advancements in performance, and overall AI adoption. “If this leads to a step function change in the way enterprise is adopting and utilizing AI, that creates a benefit,” he said.

    Whether that is happening currently or not is kind of in the eye of the beholder. OpenAI has certainly shown advancements in its technology. The release of its Sora 2 video generation model has unleashed a fresh hell upon the world, used to generate significant amounts of copyright violations and misinformation. But the latest version of the company’s flagship model, GPT-5, underwhelmed and failed to live up to expectations when it was released in August. 

    Adoption rates of the technology are also a bit of a Rorschach test. The company boasts that 10% of the world is using ChatGPT, and nearly 80% of the business world says that it’s looking into how to utilize the technology. But the early adopters aren’t finding much utility. According to a survey from the Massachusetts Institute of Technology, 95% of companies that have tried to integrate generative AI tools into their operations have produced zero return on investment.

    Where these investments are generating a return is in the stock market. Which, frankly, does not quell concerns about these firms simply boosting one another’s bottom line.

    Take Oracle, for example. Last month, the cloud provider had a rough quarter by all traditional indicators. It missed on both its revenue and earnings projections, and its net income was flat year-over-year. And yet, the stock price soared. The reason: the company’s plump list of remaining performance obligations—contracted revenue that has not yet been delivered or recognized. There, the company showed massive growth, a 359% increase from the year prior, with a projected $455 billion coming in.

    That money is not real yet. Nor is the growth the company has promised, claiming that its Oracle Cloud Infrastructure revenue would grow from under $20 billion to nearly $150 billion before the start of the 2030s. But all of it was sufficient for investors to drive up Oracle’s share price enough to slingshot CEO Larry Ellison into the top spot on the world’s richest person list, briefly leapfrogging Elon Musk. 

    Still from a promotional video of Sam Altman generated by OpenAI’s Sora 2. © OpenAI

    OpenAI is either the nexus point or the void at the center

    Most of this promised revenue will come from OpenAI, which made a commitment to purchase $300 billion worth of computing power from the company over five years. The clock on that contract doesn’t start until 2027, but assuming it actually happens, it would be one of the largest cloud computing deals in history.

    It’s also one of the most unlikely, just based on where the companies involved currently stand. In order to provide the compute it has promised to OpenAI, Oracle will reportedly need to generate 4.5 gigawatts of power capacity, more than two Hoover Dams’ worth of power. On the other side of the deal, OpenAI will have to pay about $60 billion per year to foot the bill for the agreement. It currently generates about $10 billion in revenue, which, statistically speaking, is less than $60 billion.
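    The mismatch is easy to verify with quick arithmetic (a back-of-the-envelope sketch using the figures reported above, and assuming the $300 billion obligation is spread evenly across the five years; the real payment schedule hasn't been disclosed):

```python
# Oracle deal, as reported: $300B over five years vs. ~$10B current revenue.
total_commitment = 300e9   # USD, reported total commitment
years = 5                  # reported contract length
annual_obligation = total_commitment / years
current_revenue = 10e9     # USD, OpenAI's reported annual revenue

print(f"${annual_obligation / 1e9:.0f}B due per year")                 # $60B due per year
print(f"{annual_obligation / current_revenue:.0f}x current revenue")   # 6x current revenue
```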

    You can see a similar circular shape to OpenAI’s recent deal with Nvidia rival AMD, too. The exact details of the agreement weren’t reported, but chipmaker AMD expects to generate tens of billions of dollars over the next half-decade as it sells its AI chips to OpenAI. As part of the agreement, OpenAI gets a swath of shares in AMD, with options to buy up to 10% of the company. Lucky for OpenAI, there’s really no better time to get your hands on some AMD shares than right before it announces a big AI-related deal. The company’s stock price surged by about 35% following the announcement. 

    With those two most recent deals on the books, OpenAI has agreed to more than $1 trillion worth of computing deals so far this year. That’s a lot for any company to spend, but it’s especially a lot for a still-private company projecting just $10 billion in revenue for 2025. Even after its most recent funding round, the company as a whole is valued at about $500 billion.

    Most of those deals have contingencies attached. For instance, Nvidia’s investment in OpenAI isn’t actually $100 billion, but an initial $10 billion for one gigawatt of data center capacity with the potential for $100 billion if 10 gigawatts are ultimately achieved. But the stock prices and valuations certainly seem to treat these deals as if they are set in stone. And OpenAI seems to be operating that way, too. The company claims that it’ll more than 10x its revenue in the next few years, and projects it’ll hit $129 billion annually by 2029.
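    The scale of that projection can be sanity-checked with a compound-growth calculation (assuming, per the figures above, roughly $10 billion in 2025 revenue reaching $129 billion in 2029, i.e. four years of compounding):

```python
# Implied compound annual growth rate (CAGR) behind OpenAI's stated projection.
start_revenue = 10e9    # USD, ~2025 revenue
end_revenue = 129e9     # USD, projected 2029 revenue
years = 4               # compounding periods between 2025 and 2029

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"{cagr:.1%} compound annual growth")  # roughly 90% per year, every year
```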

    Conveyor belts of capital

    That type of potentially inflated revenue figure is the kind of thing that makes some people think of the Dot Com bubble of the early 2000s, where we saw companies like Commerce One receive a $21 billion valuation despite barely having any revenue. But Peter Atwater, Adjunct Professor of Economics at William and Mary and President of consulting firm Financial Insyghts, sees a different reflection in the AI bubble: the housing market collapse. 

    “What we saw at the top of the mortgage market was all of these conveyor belts of capital, money flowing from one party to another party to another party. And what you started to see was that there were multiple points of relationship so that any participant in the system was then dependent on every other conveyor belt in the system working simultaneously to keep the system going,” he told Gizmodo. “In many ways, we’re seeing the same developing web of capital flows across the AI space.”

    This creates some obvious problems. The circular deals that, in theory, are wheels moving the whole thing forward all have to keep turning. If any of them stop, the whole thing stops, because they are all so interconnected that no failure is truly isolated. 

    Atwater said that the types of major, metric-contingent deals that have been dominating headlines in the AI space aren’t all that different from some of what was happening in the mortgage industry back in 2007, where some of the financial commitments required mortgages to meet certain conditions.

    “In the frenzy of a bubble, everyone overcommits. The purpose of overcommitting is to stake a claim in what you believe will be an intensely scarce commodity in the future. So you have buyers overcommit and you have sellers agreeing to overprovide as a result,” he explained. “What we find over and over is that commitments are among the first obligations to be cut off once conditions change, once confidence begins to fall.”

    Right now, there’s a stomach for those commitments. That isn’t guaranteed to be there in the future if all of these promised returns on investment don’t materialize. Atwater said that the market requires credit markets being willing to continue to extend massive sums of money to cover the agreements made, equity markets that value these transactions at “an extraordinary multiple,” and suppliers capable of delivering the promised products. There’s no guarantee that all of those factors will hold. 

    The math is already pretty tricky. As tech commentator Ed Zitron has pointed out, major firms like Microsoft, Meta, Tesla, Amazon, and Google have invested about $560 billion in AI infrastructure over the last two years. They’ve brought in a combined $35 billion in AI-related revenue. OpenAI’s commitments are even bigger, with returns that are arguably even smaller. 

    The company’s development and expansion of its services will rely in no small part on massive data center projects, which will require the same amount of energy to operate as New York City and San Diego combined—energy that currently isn’t even available. And, once again, there is no guarantee that the end product, once all of that energy is spent and data centers are built, will actually generate revenue.

    “Ultimately, if you do not have a consumer for the product, there will be no AI space because these companies can’t continue to do this for nothing. Listening to a lot of the calls in the last couple of weeks, there’s a clear open question as to how these companies are going to make money at this,” Atwater said.

    For the moment, everyone is seeing green, and hope springs eternal. As long as that is the case, no one will ask where the revenue is coming from. “Right now, the AI sector is operating in a forever mindset. They are acting as if they have a very long period of time under which they can figure this out and make money,” Atwater said. “As long as confidence is high, this entire ecosystem can offer fantasy. When confidence falls, they’re going to be expected to deliver real-term performance in a very short time frame.”

    Unfortunately, should that happen, it won’t just be these companies that bear the brunt of the failure. “You have to look at this as a larger ecosystem. To talk about AI today means we have to talk about the credit market. Wall Street and AI are a single beast,” Atwater said, warning that a very small number of firms currently have a major grasp on the whole of the American economy.

    Lots of investors are piling into the AI space, fearful of missing out on a market that seems like it can only go up. But few of them are looking at why those valuations and stock prices keep climbing, showing little curiosity as to what might happen if all of this money is just getting shifted around, artificially inflating the actual value of the companies they are betting on. 

    “‘Why?’,” Atwater said, “is the last question asked in a bull market.”

    [ad_2]

    AJ Dellinger

    Source link

  • OpenAI Will Stop Saving Users’ Deleted Posts

    [ad_1]

    A controversial court order has forced OpenAI to save users’ deleted posts “indefinitely” as part of its ongoing legal battle with the New York Times. However, it appears that’s mostly over—for now.

    OpenAI was sued by the Times in December 2023 for allegedly using the Times’s copyrighted material to train its algorithm. Other news organizations also joined the litigation. As part of that case, the AI company was previously ordered to retain its chat logs “indefinitely”—including deleted ones—so that they could be examined for potential evidence related to the case. Ars Technica previously noted that this court order was quite sweeping, and impacted the privacy of “hundreds of millions of ChatGPT users globally.”

    Indeed, OpenAI notably made a big stink about the order when it was instituted, characterizing it as an attack on users’ privacy. “The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely,” OpenAI COO Brad Lightcap said in June.

    Well, it appears the Great Chat Log Retention Saga has come to a close. Ars now reports that, on Thursday, U.S. Judge Ona Wang approved a joint measure that had been submitted by both OpenAI and the Times, which nixed the preservation order that had previously been in place, allowing the company to actually delete the deleted chat logs. That said, Ars notes that “deleted and temporary chats will still be monitored” for some users, although it’s a little unclear who might be impacted.

    The chat logs that have already been retained will continue to be made accessible to the news organizations involved in the legal case, as part of the effort to uncover examples of chatbot “outputs infringing their articles or attributing misinformation to their publications,” Ars notes.

    While the chat log retention drama may be over, what isn’t over is the battle over copyright law currently embroiling the AI industry. At this point, OpenAI has been sued many, many times on similar grounds. So have other AI firms. The copyright issues surrounding generative AI are still largely unsettled—or, rather, are in the process of being settled via the ongoing legal battles that are currently unfolding.

    [ad_2]

    Lucas Ropek

    Source link

  • It’s not too late for Apple to get AI right | TechCrunch

    [ad_1]

    This week, OpenAI announced that apps can now run directly inside ChatGPT, letting users book travel, create playlists, and edit designs without switching between different apps. Some immediately declared it the app platform of the future — predicting a ChatGPT-powered world where Apple’s App Store becomes obsolete.

    But while OpenAI’s app platform presents an emerging threat, Apple’s vision for an improved Siri — though still seriously delayed — could still play out in its favor.

    After all, Apple already controls the hardware, the operating system, and has roughly 1.5 billion iPhone users globally, compared to ChatGPT’s 800 million weekly active users. If Apple’s bet pays off, it could position the iPhone maker in a way that would not only maintain its app industry dominance but also modernize how we use apps in the AI era.

    Apple’s plan is to kill the app icon without killing the app itself. Its vision for AI-powered computing — introduced at its developer conference last year — would see iPhone users interact with an overhauled version of Siri and a revamped system that changes the way you use apps on your phone. (Imagine less tapping and more talking.)

    Apps are passé, long live apps?

    It’s an idea whose time has come.

    Organizing little tappable icons on your iPhone’s Home Screen to make online information more accessible is a dated metaphor for computing. Meant to resemble a scaled-down version of a computer’s desktop, apps are becoming a less common way for users to interact with many of their preferred online services.

    These days, consumers are just as likely to ask an AI assistant for a recommendation or insight as they are to do a Google search or launch a dedicated, single-purpose app, like Yelp. They’ll talk out loud to their smart speakers or Bluetooth-connected AirPods to play their favorite tunes; they’ll ask a chatbot for business information or a summary of reviews for a new movie or show.

    The AI, a large language model trained on web-scraped data and more, determines what the user wants to know and spits out a response.

    This is arguably easier than scouring through Google’s search results for the right link with the answer. (That’s something Google itself realized over a decade ago, when it started putting answers to user queries right on the search results page.)

    AI is also often easier than finding the right app on your now overcrowded iPhone, launching it, and then interacting with its user interface — which varies from app to app — to perform your task or get an answer to your question.

    However, ChatGPT’s app system, while seemingly improving on this model, remains locked inside the ChatGPT user experience. It requires consumers to engage in a chatbot-style interface to use their apps, which could require user education. To call up an app, you have to name it as the first word of your prompt or otherwise mention the app by name to get a button that prompts you to “use the app for the answer.” Then, you have to type in an accurate query. (If you mess this up, early tests by Bloomberg indicate you could get stuck on a loading screen with no results!)

    We have to wonder: is this the future of apps, or just the future while there’s no other competition? When another solution becomes available — one that’s built into your iPhone, no less — will consumers keep using ChatGPT, or are they still willing to give Siri another try? We don’t know, but we wouldn’t count out Apple yet, even though Siri has quite a bad reputation to salvage at this point.

    Siri may be an embarrassment as it stands today, but Apple’s overall ecosystem has advantages. For starters, consumers already have the apps they want to use on their phone or know how to find them on the App Store, if not. They’ve used many of these apps for years. Muscle memory goes a long way!

    Meanwhile, there are a few roadblocks to getting started with ChatGPT’s app platform.

    You have to install the app in question, of course; then you have to connect the app to ChatGPT by jumping through a warning-filled permission screen. This process requires you to authenticate with the app using your existing username and password, and to enter the two-factor authentication code, if applicable.

    After this one-time setup, things should be easier. For instance, after you generate a Spotify playlist with AI, it can be launched in the Spotify app with a tap.

    However, this experience won’t differ much from Apple’s plans if Apple is able to make things work as promised. Apple says you’ll be able to talk or text Siri to control your apps.

    There are other disadvantages to the OpenAI app model. You can only interact with one app at a time, instead of being able to switch back and forth between apps — something that could be useful when comparing prices or trying to decide between a hotel room and an Airbnb.

    Using apps within ChatGPT also strips away the branding, design, and identity that consumers associate with their favorite apps. (For those who hate how cluttered Spotify’s app has become, perhaps that’s a good thing. Others, however, will disagree.) And, in some cases, using the mobile app version to accomplish your goals may still be easier than using the ChatGPT app version because of the flexibility the former offers.

    Finally, compelling users to switch app platforms could be difficult when there isn’t an obvious advantage to using apps within ChatGPT — except for the fact that it’s neat that you can.

    Can Apple save Siri’s reputation with AI features?

    In its WWDC 2024 demonstration — which Apple swears was not “demoware” — the company showed how the apps would function under this new system and how they could use other AI features like proofreading.

    Most importantly, Apple told developers that they’ll be able to take advantage of some of its AI capabilities without having to do additional work — like a note-taking app using proofreading or rewriting tools. Plus, developers who have already integrated SiriKit into their apps will be able to do more in terms of having users take action in their apps. (SiriKit, a toolkit for making apps interoperable with Siri and Apple’s Shortcuts, is something developers have been using since iOS 10.)

    These developers will see immediate enhancements when the new Siri rolls out.

    Apple said it will focus on categories like Notes, Media, Messaging, Payments, Restaurant Reservations, VoIP Calling, and Workouts, to start.

    Apps in these categories will be able to let their users take actions via Siri. In practice, that means Siri will be able to invoke any item from an app’s menus. For example, you could ask Siri to see your presenter notes in a slide deck, and your productivity app would respond accordingly.

    The apps would also be able to access any text displayed on the page using Apple’s standard text systems. That could make the app interactions feel more natural, without the user having to give specifically worded prompts or commands. For instance, if you had a reminder to wish your grandpa a happy birthday, you could say “FaceTime him” to take that action.

    Apple’s existing Intents framework is also being updated to gain access to Apple Intelligence, covering even more apps in categories like Books, Browsers, Cameras, Document Readers, File Management, Journals, Mail, Photos, Presentations, Spreadsheets, Whiteboards, and Word Processors. Here, Apple is creating new “Intents” that are pre-defined, trained, and tested, and making them available to developers.

    That means you could tell the photo-editing app Darkroom to apply a cinematic filter to an image via Siri. Plus, Siri will be able to suggest an app’s actions, helping iPhone users discover what their apps can do and take those actions.

    Developers have been adopting the App Intents framework, introduced in iOS 16, because it offers other functionality to integrate their app’s actions and content with other platform features, including Spotlight, Siri, the iPhone’s Action button, widgets, controls, and visual search features — not just Apple Intelligence.

    Also, unlike ChatGPT, Apple runs its own operating system on its own hardware and offers the App Store as a discovery mechanism, the app infrastructure, and developer tools, APIs, and frameworks — not just the AI-powered interface that will help you use your apps.

    Though Apple may have to borrow some AI tech from others to do that last bit, it has the data to personalize your app recommendations, and, for the privacy-minded, the controls that let you limit how much information apps themselves can collect. (Where’s the “Do Not Track” option for ChatGPT’s app system, we wonder?)

    OpenAI’s system doesn’t work out of the box with all your apps at launch. It requires developer adoption and relies on the Model Context Protocol (MCP), a newer technology for connecting AI assistants to other systems. That’s why ChatGPT currently works with only a handful of apps, like Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva. MCP adoption is growing, but the lag before it is broadly adopted could give Apple the extra time it needs to catch up.
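    For readers curious about the plumbing: MCP exchanges JSON-RPC 2.0 messages between an assistant and a tool server. A rough Python sketch of the two core request shapes follows (the tool name and arguments are invented for illustration; the MCP specification is the authoritative source for the schema):

```python
import json

# Rough sketch of the JSON-RPC 2.0 message shapes the Model Context
# Protocol (MCP) uses for tool discovery and invocation. The tool name
# and arguments below are hypothetical.

# The assistant asks a connected server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# After the user asks for something a tool can do, the assistant calls it.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_listings",  # hypothetical tool name
        "arguments": {"city": "Seattle", "max_price": 250},
    },
}

# Messages are serialized as JSON over the transport (stdio or HTTP).
wire = json.dumps(call_request)
print(wire)
```

    The server answers tools/list with the tools it exposes, and the assistant then issues tools/call requests on the user's behalf; app integrations like the ones named above ride on some version of this handshake.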

    What’s more, word is that Apple’s AI system is nearly ready. The company is reportedly already testing it internally, allowing users to take actions in apps via Siri voice commands. Bloomberg reported that this smarter version of Siri works out of the box with many apps, including those from major players like Uber, AllTrails, Threads, Temu, Amazon, YouTube, Facebook, and WhatsApp. And it’s still on track to ship next year, Apple confirmed to TechCrunch.

    Apple has an iPhone, OpenAI has Jony Ive

    The iPhone’s status as an app platform will also be difficult to disrupt, even from a company as large and powerful as OpenAI.

    The ChatGPT maker understands this, too, which is why OpenAI is exploring its own device with Apple’s former head of design, Jony Ive. It wants its AI to become more of a part of consumers’ everyday lives and habits, which could require a hardware device.

    But, so far, the company has struggled to think up a better computing paradigm than the smartphone, reports indicate. At the same time, the general public has demonstrated an aversion to always-on AI devices, which bump up against existing social norms and threaten privacy.

    The AI backlash has covered AI device maker Friend’s NYC subway posters, led Taylor Swift fans to attack their idol for dabbling in AI, and threatened the reputation of popular consumer brands and enterprise businesses alike. That leaves the future success of an OpenAI device in question.

    For now, that means OpenAI’s app model is one that essentially boils down to using its app to control other apps.

    If Apple gets its Siri upgrade right, that intermediary may not be necessary.

    [ad_2]

    Sarah Perez

    Source link

  • 4 Startups That Reached $1 Billion Valuations in Less Than 2 Years

    [ad_1]

    Unicorns used to be a rarity in the startup world. Lately, it seems there’s a herd of them running the startup plains. As of July 2025, there were 1,200 startups across the world worth $1 billion or more, according to CB Insights. And some of those valuations are eye-popping. OpenAI is worth half a trillion dollars. SpaceX is right behind with a $400 billion valuation. And China’s ByteDance is estimated to be worth $300 billion globally (with the as-yet-unformalized U.S. spinoff valued at just $14 billion).

    Numbers like that might lead some people to think that reaching the $1 billion valuation mark isn’t as hard as it used to be. Those people couldn’t be more wrong. Of the roughly 10 million businesses that were started in the past two years, just four have seen their valuation climb above $1 billion.

    Global startup funding in the second quarter of this year was down 20 percent from the previous quarter, though 11 percent higher than a year ago. In general, though, founders have been weathering a three-year dry period—and if your startup isn’t an AI play, it can still be a challenge to turn an investor’s head.

    So what did these four startups have that others didn’t? All of them are focused on AI, but only one has a product so far. Here’s a look at some of the youngest unicorns in the startup space.

    Thinking Machines Lab

    Founded in February 2025 by Mira Murati, OpenAI’s former CTO, this San Francisco-based startup, which aims to help people harness the power of AI for their own personal goals, saw a swarm of investors from the get-go. By mid-July, it had raised $2 billion at a valuation of $12 billion. Investors included Andreessen Horowitz, Nvidia, Cisco and AMD.

    Thinking Machines Lab has no revenue yet and no products. It plans to operate as an open-source entity, saying it plans to “frequently publish technical blog posts, papers, and code” so other AI model developers can integrate Thinking Machines’ learning into their own products. Murati, when describing the company’s purpose, has said she wants it to “advance AI by making it broadly useful and understandable through solid foundations, open science, and practical applications.” Recently, reports emerged that Thinking Machines would make custom AI models for companies.

    The Bot Company

    Cruise founder Kyle Vogt co-founded this San Francisco-based startup in May 2024 with Paril Jain and Luke Holoubek, former engineers at Tesla and Cruise. The company aims to create at-home robots that help with household chores and other daily tasks. In March of this year, The Bot Company raised $150 million for the second time, giving it a valuation of $2 billion. Like Thinking Machines Lab, it has no revenue and no products yet, but it reportedly will steer clear of the humanoid design other companies are pursuing.

    Lovable

    Launched in December 2023, this vibe-coding company (vibe coding is a software development approach that uses AI to generate code from natural language prompts) became a unicorn just eight months after it opened its doors. Co-founded by Anton Osika and Fabian Hedin, the Swedish startup is looking to disrupt the startup world by letting aspiring tech entrepreneurs simply describe their ideas rather than having to learn to code themselves. The company, which offers free and paid service plans, raised $200 million in a Series A round in mid-2024, putting its valuation at $1.8 billion. It has already announced that annual recurring revenue has topped $100 million.

    Safe Superintelligence

    OpenAI co-founder Ilya Sutskever left that company after reportedly being involved in the failed push to remove Sam Altman as CEO. His next project is Palo Alto- and Tel Aviv-based Safe Superintelligence, founded in June of 2024. The company, which doesn’t have a product yet, hopes to build AI models that have greater intelligence than humans, but which remain aligned with human interests. By September of that year, Sutskever had raised $1 billion, which gave the company a valuation of $5 billion. It secured another $2 billion in April of this year, sending that valuation to $32 billion, rejecting a reported buyout offer from Meta at roughly the same time (though co-founder and CEO Daniel Gross did jump over to Mark Zuckerberg’s company).    

    [ad_2]

    Chris Morris

    Source link

  • This Brooklyn-Based AI Company Just Raised $2 Billion to Compete With DeepSeek

    [ad_1]

    A Brooklyn startup just raised $2 billion to build a rival to DeepSeek, the Chinese AI company.

    Called Reflection AI, the company is now valued at about $8 billion, up some 15-fold from last March, when it announced $130 million in funding. The company is less than two years old.

    Reflection, which launched in March 2024, originally aimed to build a “superintelligent autonomous coding system” and use that as a jumping-off point. Now, it is working on building an open alternative to the types of closed frontier models that giants like OpenAI are developing. In other words, Reflection wants to be the U.S. answer to China’s DeepSeek.

    “AI is becoming the technology layer that everything else runs on top of,” Reflection noted in a blog post about the funding. “But the frontier is currently concentrated in closed labs. If this continues, a handful of entities will control the capital, compute, and talent required to build AI, creating a runaway dynamic that locks everyone else out.”

    U.S. AI and crypto czar David Sacks praised Reflection on Thursday. “It’s great to see more American open source AI models. A meaningful segment of the global market will prefer the cost, customizability, and control that open source offers. We want the U.S. to win this category too,” he posted on social media platform X.

    Aside from remaining globally competitive, Reflection says there are numerous benefits to frontier open intelligence, including safety, transparency, and accountability. (Frontier in this case refers to the most advanced, large-scale LLMs, like those currently in development behind closed doors at companies like OpenAI.) But it also flags the potential for misuse. High-profile players in the space, like OpenAI’s Sam Altman, have publicly fretted about bad actors weaponizing AI; another concern is that others in the space are not putting adequate safeguards in place—even as Altman pushes to avoid regulation. OpenAI has since announced it is working on its own open model.

    “We believe the answer to AI safety is not ‘security through obscurity’ but rigorous science conducted in the open, where the global research community can contribute to solutions rather than a handful of companies making decisions behind closed doors,” Reflection’s blog says.

    The startup has spent the past year assembling a crack team of experts that have “pioneered breakthroughs including PaLM, Gemini, AlphaGo, AlphaCode, AlphaProof, and contributed to ChatGPT and Character AI, among many others.” Its founders, Misha Laskin and Ioannis Antonoglou, worked on DeepMind’s Gemini and Go-playing AI AlphaGo, respectively.

    The company also noted that it developed a large language model and a “reinforcement learning platform capable of training massive Mixture-of-Experts (MoEs) models at frontier scale.” TechCrunch reported that MoE models are a type of architecture that powers these super advanced, frontier LLMs.

    “We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning,” the blog states.

    Reflection also stated it has come up with a commercial model that will allow the company to sustain itself while developing frontier models. It aims to release its first model early next year, TechCrunch reported.

    For more on the difference between closed AI models and those that are open-weight, check out this explainer.

    Chloe Aiello

  • The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission | TechCrunch

    Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that’s ever claimed to be different.

    I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

    But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

    The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves, OpenAI CEO Sam Altman; characters like Pikachu and Cartman of “South Park”; and dead celebrities like Tupac Shakur.

    Asked what drove OpenAI’s decision to launch this newest version of Sora with these characters, Lehane offered that Sora is a “general purpose technology” like the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

    What he danced around is that OpenAI initially “let” rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it “evolved” toward an opt-in model. That’s not iterating. That’s testing how much you can get away with. (By the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

    Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, that American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.

    Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. “It’s ‘iterative’,” I said, “but it’s also a replacement.”

    Lehane listened and dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, is what I heard.)

    Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened the adoption of AI to the advent of electricity – saying those who accessed it last are still playing catch-up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically challenged places to set up facilities with their attendant and massive appetites for water and electricity.

    Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year plus 33 nuclear facilities. If democracies want democratic AI, he said, they have to compete. “The optimist in me says this will modernize our energy systems,” he’d said, painting a picture of re-industrialized America with transformed power grids.

    It was inspiring, but it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of The Notorious B.I.G. It’s well worth noting that video generation is the most energy-intensive AI out there.

    There’s also a human cost, one made clearer the day before our interview, when Zelda Williams logged onto Instagram to beg strangers to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

    When I asked about how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?”

    Lehane showed vulnerability in some moments, saying he recognizes the “enormous responsibilities that come with” all that OpenAI does.

    Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then news broke that complicated that already complicated picture.

    Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy organization, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to Calvin’s house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.

    Calvin says the move was part of OpenAI’s intimidation tactics around a new piece of AI regulation, California’s SB 53, an AI safety bill. He says the company weaponized its ongoing legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. Calvin added that he fought OpenAI’s opposition to the bill, and that when he saw OpenAI claim that it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane, specifically, the “master of the political dark arts.”

    In Washington, that might be a compliment. At a company like OpenAI whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

    But what matters much more is that even OpenAI’s own people are conflicted about what they are becoming.

    As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released, expressing their misgivings. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

    On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

    It’s worth pausing to think about that. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

    It’s a crystallizing moment, one whose contradictions may only intensify as OpenAI races toward artificial general intelligence. It also has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.

    Connie Loizos

  • A 3-person policy nonprofit that worked on California’s AI safety law is publicly accusing OpenAI of intimidation tactics | Fortune

    Nathan Calvin, the 29-year-old general counsel of Encode—a small AI policy nonprofit with just three full-time employees—published a viral thread on X Friday accusing OpenAI of using intimidation tactics to undermine California’s SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He also alleged that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which it implied was secretly funded by Musk.

    Calvin’s thread quickly drew widespread attention, including from inside OpenAI itself. Joshua Achiam, the company’s head of mission alignment, weighed in on X with his own thread, written in a personal capacity, starting by saying, “At what is possibly a risk to my whole career I will say: this doesn’t seem great.”

    Former OpenAI employees and prominent AI safety researchers also joined the conversation, many expressing concern over the company’s alleged tactics. Helen Toner, the former OpenAI board member who resigned after a failed 2023 effort to oust CEO Sam Altman, wrote that some things the company does are great, but “the dishonesty & intimidation tactics in their policy work are really not.” 

    And at least one other nonprofit founder also weighed in: Tyler Johnston, founder of AI watchdog group the Midas Project, responded to Calvin’s thread with his own, saying: “[I] got a knock at my door in Oklahoma with a demand for every text/email/document that, in the ‘broadest sense permitted,’ relates to OpenAI’s governance and investors.” As with Calvin, he added, he received the personal subpoena, and the Midas Project was also served.

    “Had they just asked if I’m funded by Musk, I would have been happy to give them a simple ‘man I wish’ and call it a day,” he wrote. “Instead, they asked for what was, practically speaking, a list of every journalist, congressional office, partner organization, former employee, and member of the public we’d spoken to about their restructuring.”

    OpenAI referred Fortune to a post by chief strategy officer Jason Kwon on Friday in which Kwon said Encode’s decision to support Musk in the lawsuit, and the organization’s not “fully disclosed” funding, “raises legitimate questions about what is going on.”

    “We wanted to know, and still are curious to know, whether Encode is working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI,” Kwon wrote, noting that subpoenas are a standard method of gathering information in any litigation. “The stated narrative makes it sound like something it wasn’t.” Kwon included an excerpt of the subpoena that he said showed all the requests for documents OpenAI made.

    As reported by the San Francisco Standard in September, Calvin was served with a subpoena from OpenAI in August, delivered by a sheriff’s deputy as he and his wife were sitting down to dinner. Encode, the organization he works for, was also served. The article reported that OpenAI appeared concerned that some of its most vocal critics were being funded by Elon Musk and other billionaire competitors—and was targeting those nonprofit groups despite offering little evidence to support the claim.

    Calvin wrote Friday that Encode—which he emphasized is not funded by Musk—had criticized OpenAI’s restructuring and worked on AI regulations, including SB 53. In the subpoena, OpenAI asked for all of Calvin’s private communications on SB 53.

    “I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them,” he said, referring to the ongoing legal battle between OpenAI and Musk over the company’s original nonprofit mission and governance. Encode had filed an amicus brief in the case supporting some of Musk’s arguments.

    In a conversation with Fortune, Calvin emphasized that what has not been sufficiently covered is how inappropriate OpenAI’s actions were in connection with SB 53, which was signed into law by Gov. Gavin Newsom at the end of September. The law requires certain developers of “frontier” AI models to publish a public frontier AI framework and a transparency report when deploying or substantially modifying a model, report critical safety incidents to the state, and share assessments of catastrophic risks under the state’s oversight.

    Calvin alleges that OpenAI sought to weaken those requirements. In a letter to Governor Newsom’s office while the bill was still under negotiation, which was shared on X in early September by a former AI policy researcher, the company urged California to treat companies as compliant with the state’s rules if they had already signed a safety agreement with a U.S. federal agency or joined international frameworks such as the EU’s AI Code of Practice. Calvin argues that such a provision could have significantly narrowed the law’s reach—potentially exempting OpenAI and other major AI developers from key safety and transparency requirements.

    “I didn’t want to go into a ton of detail about it while SB 53 negotiations were still ongoing and we were trying to get it through,” he said. “I didn’t want it to become a story about Encode and OpenAI fighting, rather than about the merits of the bill, which I think are really important. So I wanted to wait until the bill was signed.”

    He added that another reason he decided to speak out now was a recent LinkedIn post from Chris Lehane, OpenAI’s head of global affairs, describing the company as having “worked to improve” SB 53—a characterization Calvin said felt deeply at odds with his experience over the past few months. 

    Encode was founded by Sneha Revanur, who launched the organization in 2020 when she was 15 years old. “She is not a full-time employee yet because she’s still in college,” said Sunny Gandhi, Encode’s vice president of political affairs. “It’s terrifying to have a half-a-trillion-dollar company come after you,” Gandhi added.

    Encode formally responded to OpenAI’s subpoena, Calvin said, stating that it would not be turning over any documents because the organization is not funded by Elon Musk. “They have not said anything since,” he added. 

    Writing on X, OpenAI’s Achiam publicly urged his company to engage more constructively with its critics. “Elon is certainly out to get us, and the man has got an extensive reach,” he wrote. “But there is so much that is public that we can fight him on. And for something like SB 53, there are so many ways to engage productively.” He added, “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty and a mission to all of humanity, and the bar to pursue that duty is remarkably high.”

    Calvin described the episode as the “most stressful period of my professional life.” He added that he uses and gets value from OpenAI products and that the company conducts and publishes AI safety research that is “worthy of genuine praise.” Many OpenAI employees, he said, care a lot about OpenAI being a force for good in the world. 

    “I want to see that side of OAI, but instead I see them trying to intimidate critics into silence,” he wrote. “Does anyone believe these actions are consistent with OpenAI’s nonprofit mission to ensure that AGI benefits humanity?”

    Sharon Goldman