ReportWire

Tag: openai

  • OpenAI’s TikTok of AI slop hit one million downloads faster than ChatGPT

    Sora, OpenAI’s app and social network for AI-generated videos, has been downloaded over one million times, according to Sora head Bill Peebles. The app reached one million downloads in less than five days, Peebles says, “even faster than ChatGPT did.” That’s despite OpenAI only making the app available in North America, and its decision to require users to have an invite to actually use it.

Like TikTok, Sora offers an endless vertical feed of videos, only Sora’s videos are AI-generated rather than uploaded by users. Creating a 10-second video of your own is as simple as writing a prompt to OpenAI’s Sora 2 model in the app. And through Sora’s Cameo feature, you can even create videos of yourself and anyone else who’s agreed to share their likeness with the service.

The limited guardrails OpenAI has put on Sora have already led to a rash of videos featuring OpenAI’s Sam Altman and content that clearly infringes on copyright. The fact that Sora can so readily create videos of recognizable characters like Pikachu raises questions about what OpenAI’s model was trained on, and has unsurprisingly prompted pushback from the larger entertainment industry.

    In response, the company has updated Sora to give users more control over what videos their likeness can appear in. OpenAI plans to offer similar controls to rights holders, giving them “the ability to specify how their characters can be used (including not at all),” according to Altman. It’s not clear why these controls weren’t available when Sora launched, but both seem like good changes.

Because of Sora’s invite system, it’s difficult to say whether the over one million downloads the app has received translate to as many users. It’s not unusual for someone to download an app and never use it. Whatever the case, OpenAI’s bet on AI-generated videos seems like it might be a winning one, provided the company finds a way to actually make more money than it loses generating videos for Sora.

    Ian Carlos Campbell

    Source link

  • Breaking down copyright concerns over OpenAI’s Sora 2 app

Sora 2 has taken the internet by storm. OpenAI launched the video-making tool last week; it allows users to put themselves or anyone else in scenes, real or imagined. Zoe Schiffer, director of business and industry at Wired Magazine, joins to discuss.

    Source link

  • Even after Stargate, Oracle, Nvidia, and AMD, OpenAI has more big deals coming soon, Sam Altman says | TechCrunch

    At nearly the same moment as Nvidia CEO Jensen Huang was expressing surprise over OpenAI’s multibillion-dollar deal with competitor AMD — shortly after his company agreed to invest up to $100 billion into the AI model maker — Sam Altman was saying that more such deals are in the works.

    Huang appeared on CNBC’s Squawk Box on Wednesday. When asked if he knew about the AMD deal before it was announced, he answered, “Not really.”  

As TechCrunch previously reported, OpenAI’s deal with AMD is unusual. AMD has agreed to grant OpenAI large tranches of AMD stock — up to 10% of the company over a period of years, contingent on factors like increases in stock price. In exchange, OpenAI will use and help develop the chipmaker’s next-generation AI GPUs. This makes OpenAI a shareholder in AMD.

Nvidia’s deal is the reverse. Nvidia has invested in the AI model-making startup, which makes Nvidia a shareholder in OpenAI.

    While OpenAI has been using Nvidia gear for years through cloud providers like Microsoft Azure, Oracle OCI, and CoreWeave, “This is the first time we’re going to sell directly to them,” Huang explained. He added that his company would still continue to supply gear to the cloud makers, too.

    These direct sales, which include AI gear beyond GPUs like systems and networking, are intended to “prepare” OpenAI for the day when it is its own “self-hosted hyperscaler,” Huang said. In other words, when it’s using its own data centers. 

    But Huang admits that OpenAI doesn’t “have the money yet” to pay for all of this gear. He estimated that each gigawatt of AI data center will cost OpenAI “$50 to $60 billion,” to cover everything from the land and power to the servers and equipment.   

TechCrunch event: San Francisco | October 27-29, 2025

    So far, in 2025, OpenAI has commissioned 10 gigawatts’ worth of U.S. facilities through its $500 billion Stargate deal with partners Oracle and SoftBank. (Plus, it penned a $300 billion cloud deal with Oracle.)

    Its partnership with Nvidia was for at least 10 gigawatts of AI data centers. Its partnership with AMD was for 6 gigawatts. Plus its “Stargate UK” partnership involves expanding data centers in the U.K., and it has other European commitments. By some estimates, OpenAI has this year inked $1 trillion worth of such deals.  
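The gigawatt commitments above can be sanity-checked against Huang’s cost estimate with quick arithmetic. A minimal sketch, multiplying the cited gigawatt figures by his $50–$60 billion-per-gigawatt range (the dictionary labels are informal shorthand, not official deal names):

```python
# Back-of-envelope check on OpenAI's 2025 infrastructure commitments,
# using Jensen Huang's estimate of $50-60 billion per gigawatt of
# AI data center. Figures are the ones cited in the article above.

commitments_gw = {
    "Stargate (Oracle/SoftBank)": 10,  # U.S. facilities commissioned in 2025
    "Nvidia partnership": 10,          # "at least 10 gigawatts"
    "AMD partnership": 6,
}

total_gw = sum(commitments_gw.values())
low_b, high_b = total_gw * 50, total_gw * 60  # implied cost range, $B

print(f"Total commissioned: {total_gw} GW")
print(f"Implied build-out cost: ${low_b}B - ${high_b}B")
# 26 GW at $50-60B/GW implies roughly $1.3-1.6 trillion, consistent
# with estimates that OpenAI has inked ~$1 trillion of deals this year.
```

At Huang’s rates, the commitments cited here land in the same ballpark as the trillion-dollar estimates.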

    Similar to the AMD deal, Nvidia’s deal has been criticized for being “circular,” Bloomberg reported. The critics say Nvidia is essentially underwriting OpenAI’s purchases, getting the AI startup’s stock for its efforts. 

    Altman to the world: Expect more

    As Huang was dissecting OpenAI’s infrastructure needs on CNBC, OpenAI CEO Sam Altman’s interview with Andreessen Horowitz’s a16z Podcast dropped.

    During the podcast, a16z co-founder Ben Horowitz told Altman that he’s “very impressed by deal structure improvement,” referring to these most recent deals. Andreessen Horowitz is an OpenAI investor, so it would be shocking if he wasn’t impressed. OpenAI has found a way to potentially obtain billions of dollars of equipment on someone else’s dime. Repeatedly. 

    When asked about these recent deals, Altman said, “You should expect much more from us in the coming months.” 

Altman sees OpenAI’s future models and other upcoming products as so much more capable, thereby fueling so much more demand, that “we have decided that it is time to go make a very aggressive infrastructure bet,” he explained.

The problem is that OpenAI’s revenue is currently nowhere near $1 trillion, though it is, by all accounts, growing rapidly, reportedly hitting $4.5 billion in the first half of 2025.

    Yet Altman obviously believes that eventually all of this investment will pay for itself. “I’ve never been more confident in the research road map in front of us and also the economic value that will come from using those [future] models.” 

    But, he said, OpenAI can’t get to all of that economic lushness on its own.

    “To make the bet at this scale, we kind of need the whole industry, or big chunk of the industry, to support it. And this is from the level of electrons to model distribution and all the stuff in between, which is a lot. So we’re going to partner with a lot of people,” Altman said, with more deals expected in the coming months.

    So stand by, tech industry. OpenAI is still wheeling and dealing.

    Julie Bort

    Source link

  • You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

    The complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week. After initially requiring copyright holders to opt out of having their content appear in Sora-generated videos, CEO Sam Altman announced that the company will be moving to an “opt-in” model that will “give rightsholders more granular control over generation of characters”—and Sora obsessives are not taking it particularly well.

    Given the type of content that was being generated with Sora and shared via the TikTok-style social app that OpenAI launched specifically to host user-generated Sora videos, the change shouldn’t come as a shock. Almost immediately, the platform was inundated with copyrighted material being used in ways that the rightsholders almost certainly did not care for, unless you think Nickelodeon really loved the subversiveness of Nazi SpongeBob. On Monday, the Motion Picture Association became one of the loudest voices calling for OpenAI to put an end to the potential infringement. It didn’t take long for OpenAI to respond and acquiesce.

In a blog post, Altman said the new approach to copyrighted material in Sora will require rightsholders to opt in to having their characters and content used—but he’s very sure that copyright holders love the videos, actually. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all),” Altman wrote, stating that his company wants to “let rightsholders decide how to proceed.”

Altman also admitted, “There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.” It’s unclear if that will play well with rightsholders. MPA CEO Charles Rivkin said in a statement that OpenAI “must acknowledge it remains their responsibility—not rightsholders’—to prevent infringement on the Sora 2 service,” and said “Well-established copyright law safeguards the rights of creators and applies here.”

    While OpenAI might be giving copyright holders more control of the outputs of its model, it doesn’t appear that they had much say on the inputs. A report from the Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. It’s not clear that OpenAI went out and got those rights to train Sora 2, but the generator is very good at spitting out accurate recreations of copyrighted material in a way that it could only do if it was fed a whole lot of existing content during training.

    The biggest AI training case thus far saw Anthropic pay out $1.5 billion to settle a copyright infringement case with authors of books the company pirated to train its models. The judge in that case did find that using copyrighted material for training without permission is fair use, though other courts may not agree with that call. Earlier this year, OpenAI asked the Trump administration to call AI model training fair use. So a lot of OpenAI’s strategy around Sora appears to be fucking around and hoping, if it makes the right allies, it’ll never have to find out.

    OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well.

    AJ Dellinger

    Source link

  • Fake Protest Videos Are the Latest AI Slop to Go Viral in MAGA World

    Last week, OpenAI released Sora 2, the latest version of its AI video creator, along with a new Sora app for making and sharing those videos. The new tool has led to an explosion in realistic AI videos on social media, including plenty of fresh discussion about intellectual property rights. But one of the oddest things to emerge with Sora 2 is a new crop of videos showing AI protesters.

    These aren’t just any protest videos. Specifically, supporters of President Donald Trump appear to be making videos of protesters being brutalized by the federal agents and troops who have been sent to U.S. cities. Trump has most recently tried to deploy National Guard troops to Portland and Chicago, and while those deployments have been delayed by the courts, those cities are still crawling with ICE agents terrorizing immigrant communities.

    One of the new fake protester videos, which has over 40 million views on Instagram, shows someone clad in black shouting in the face of a soldier dressed in fatigues.

    “What’s your name, soldier?” the AI protester yells repeatedly. The text on the screen reads “wait for it” before the AI soldier sprays the protester with orange pepper spray, yelling back “Sergeant Pepper.”

    The video has been shared on several platforms, including TikTok and X, where many people don’t seem to understand it’s AI. The actor James Woods, a Trump supporter who frequently shares right-wing memes on X, wrote on Tuesday about the video, “Couldn’t get better than this. Haiku-level brilliant.” Many of the comments seem to be just as oblivious to the fact that it’s AI.

    Another video that’s become popular on Facebook, Instagram, and X shows AI protesters shouting “no queso, no cheese,” apparently a racist riff on the common protest chant “no justice, no peace.” The AI protesters in the video are also sprayed with a chemical agent.

    The video has gotten over 1.5 million views on X alone, with the caption, “Lmfao, that was beautiful and before you ask I voted 100 for this!!”

    The phrase “I voted for this” has become a common thing for far-right supporters of Trump to say when something particularly brutal has happened to their political opponents.

The Instagram version of the video includes the caption “Liberals acting like clowns – got treated like clowns by federal agents – goodbye – FAFO,” with “FAFO” an acronym for “fuck around and find out.” Some users on Instagram have pointed out that it’s AI, but the account insists in the comments that it’s real.

    It’s not real, obviously. And the big Sora watermark should be the clue to anyone who’s familiar with OpenAI. But unfortunately, that kind of watermark isn’t enough for most people these days to differentiate between fake and real content.

    The shorter Sora videos are also being compiled into larger compilations of AI-generated fakes, like in this one on X that includes jokes about the protesters being paid. It’s a common right-wing accusation that all the people protesting Donald Trump are actually paid to be there, often by liberal activists like George Soros.

    The most curious thing about all of these fake protester videos is that there are plenty of real videos to share. For example, a pastor in Chicago was shot in the head with pepper balls by agents last month. Rev. David Black was protesting outside an ICE facility near Chicago, Illinois, and was simply speaking peacefully when he was attacked.

    Another one of several videos showing protesters getting brutalized while doing nothing wrong was recently captured in Portland. A woman is seen simply talking with police before she’s sprayed for absolutely no discernible reason.

    But real videos of a pastor or a woman getting treated with unnecessary violence don’t really fit the narrative that President Trump and his fascist goons are trying to sell. Trump has claimed the reason he’s sending agents and troops to U.S. cities is that they’re overrun with crime. And he just wants to restore law and order.

    In reality, violent crime is near a 50-year low, and Trump is simply trying to strike fear into the hearts of ordinary Americans as his thugs disrupt countless lives. That disruption is having an economic impact, as restaurants in Chicago are comparing the loss of business to the start of the covid-19 pandemic.

    Perhaps that’s why we’re seeing so many fake videos of protests on social media right now. They need a pretext to justify their brutal crackdown on working people. Trump himself seems to have been convinced to deploy the National Guard to Oregon because he saw too many old videos of Portland from the summer of 2020 on TV. And when it comes to Trump, he’s pretty easy to fool.

    Trump has even shared an AI video of himself promoting a magical “med bed” that can cure all diseases. The president’s Truth Social account deleted the video, but it’s still not clear why he shared the conspiracy theory in the first place. It’s entirely possible he thought it was real. We simply don’t know.

    Remember when the president saw a photoshopped image of the letters “MS-13” on someone’s hand back in April and insisted it was real?

    President Donald Trump holds up an image of a hand that’s been photoshopped to add the characters “MS-13” at the White House on April 18, 2025. Image: Truth Social

    Trump is not a smart man. And his followers are even dumber, as they spread AI slop far and wide, creating an alternative reality where sadistic troops dole out punishment to the left on American streets. Unfortunately, you don’t need AI to see so much brutality right now. And if Trump has his way, you’re going to see a lot more of it very soon.

    Matt Novak

    Source link

  • OpenAI’s Sora App Drags Us Into the Litigation Phase of AI

Well, the AI wars just got worse. Just when I thought the AI platformers had figured out how to temper their conquests and deliver tools that would result in long-term wins for everyone, OpenAI went and launched Sora 2, a one-stop shop for prompt-based short-video copyright infringement, on the iPhone App Store, and it skyrocketed to number one like a bullet with 164,000 downloads in 48 hours.

If you were busy this weekend and missed the whole fiasco around OpenAI’s private Sora app release, you missed a parade of prompt-driven AI-generated short videos featuring Ronald McDonald fleeing the police in a burger-shaped car – along with all sorts of protected IP, from Nintendo to South Park to the Simpsons, with characters doing whatever meme-able things the app’s invite-only users could unleash on an amused public.

    Oh, the lawyers were not amused.

    Especially those who work for the companies holding the IP copyrights for those mostly animated characters. Also unimpressed were the lawyers who work for famous people who were about to be placed in compromising positions – once some idiot decided the world would get a laugh out of seeing Taylor Swift dressed as a Nazi and waving a banana while shoplifting in an adult video store. 

Yeah, the AI wars went nuclear. No sign of when they’ll get better. Because maybe all OpenAI did was rip a page from the tried-and-true “apologize after instead of ask for permission first” playbook. But I believe they took that old chestnut of a strategy to the next level, using the Sora private release as a test of opt-out versus opt-in for AI.

    Maybe they knew exactly what they were doing. And maybe they got us again. Reckless speculation to follow.

    Modern Generative AI Is Built On IP Theft

    Man, it sounds harsh when you say it out loud but there really isn’t a counter-argument anymore.

    Back in 2010, I co-invented some of the first public-facing generative AI. However, unlike today’s version, our models were developed solely on the private data of our customers, data which never left their possession, giving them total control over how that data was exposed to either their own customers or the general public. 

    Now, if you decide that you want to sell AI not just to specific customers, but to the whole wide world, you’re gonna need – you guessed it – the whole wide world’s data.

    How do you get all that data? Not only that, how do you get permission to use all that data?

    Well, in my experience, you would need to scrape first and apologize later. 

    These shenanigans all came to a head in 2024, and by the end of the year, people like me were raising their hands and asking if we were just going to let everyone get away with the mass theft of all the world’s IP – while also noting that said IP was “housed” in a notoriously poor and unverified data store.

    But the thing is, people like me already knew the answer, because while this was the first time we heard the opt-out/opt-in argument as it related to AI, it was the same opt-out/opt-in argument we had already heard about SEO. SEO was such a lure of cash to be made that we not only let the bots in, we tweaked the content and added the keywords to make it easier for the bots to collect whatever IP they wanted. Just give us that coveted high-ranking search link, please!

    We let it be OK to do that.

    Not only did the AI platformers ride the backs of that process, allegedly, but when they had what they needed, they broke the original promise. No more links for you!

    What do we get in return? Apparently, users get a chance to “engage with their family and friends through their own imaginations.”

    This Is Not About User Imagination

    It never was.

    It’s the same story Facebook gave us when they started letting brands make social accounts, way back in the olden times.

    It’ll be fun. You can interact with Diet Pepsi the same way you interact with your best friend from the third grade. Or your mom. It will deepen relationships with end users and the brands they love. 

    Come on.

    It hasn’t stopped. Just last week, when Hollywood actors and actresses lost their minds over the launch of AI actress Tilly Norwood, the let’s-all-just-calm-down response from the creators was: “AI offers another way to imagine and build stories.”

    I gotta call bullshit here.

    How is it not a way to generate content without paying the people who own the IP – in this case the actors and actresses Tilly was trained on?

    Same thing here with Sora. This has, in my opinion, nothing to do with interacting with friends in a brand new way. It has everything to do with generating content without paying the creators of the IP.

    Opt-Out As Policy Is a Joke

    And it always has been.

    Lawyers: Wait, you can’t do opt-out. It’s a completely onerous burden on the owners of the IP.

    AI platforms: Oops. We just did. We’ll fix it. Here’s some money.

    I think, at this point, generative AI might be serving as a testing ground for the next phase of AI – prescriptive, predictive, autonomous, and agentic, or the more “thinky” AI.

    The rest of that “real” AI is where the “real” money is, but like generative AI, it doesn’t work without a lot of data, and in many cases, it doesn’t work well without proprietary data. 

    And paying the owners for that proprietary IP is really expensive.

I don’t think copyright law makes a dent here. Copyright law is nothing more than a warning and an excuse to start a legal battle. Besides, the toothpaste is already out of the tube. Because that’s not SpongeBob SquarePants in that Sora video, it’s ShapeySlacks McPhilFish, or whatever derivative you want to slap on it.

But, that won’t stop the IP owners from trying. Last month, Rolling Stone owner Penske sued Google. And in the same week Hollywood freaked out about Tilly Norwood, Disney sued Midjourney and sent a cease-and-desist to Character.AI.

    Welcome to the litigation phase of Generative AI, folks. If that’s where we’ve arrived then I don’t see it getting better any time soon. So protect your IP, because during the next phase of “real” AI, that proprietary data is going to be a lot more valuable. 

    If you found some enjoyment in this reckless speculation, please join my email list so I can shoot you a quick heads up when I write something completely from my own human brain. 

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    Joe Procopio

    Source link

  • You can’t libel the dead. But that doesn’t mean you should deepfake them. | TechCrunch

    Zelda Williams, daughter of the late actor Robin Williams, has a poignant message for her father’s fans.

    “Please, just stop sending me AI videos of Dad. Stop believing I wanna see it or that I’ll understand. I don’t and I won’t,” she wrote in a post on her Instagram story on Monday. “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop. It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”

    It’s probably not a coincidence that Williams was moved to post this just days after the release of OpenAI’s Sora 2 video model and Sora social app, which gives users the power to generate highly realistic deepfakes of themselves, their friends, and certain cartoon characters.

    That also includes dead people, who are seemingly fair game because it is not illegal to libel the deceased, according to the Student Press Law Center.

    Sora will not let you generate videos of living people — unless it is of yourself, or a friend who has given you permission to use their likeness (or “cameo,” as OpenAI calls it). But these limits don’t apply to the dead, who can mostly be generated without roadblocks. The app, which is still only available via invite, has been flooded with videos of historical figures like Martin Luther King, Jr., Franklin Delano Roosevelt, and Richard Nixon, as well as deceased celebrities like Bob Ross, John Lennon, Alex Trebek, and yes, Robin Williams.

    How OpenAI draws the line on generating videos of the dead is unclear. Sora 2 won’t, for example, generate former President Jimmy Carter, who died in 2024, or Michael Jackson, who died in 2009, though it did create videos with the likeness of Robin Williams, who died in 2014, according to TechCrunch’s tests. And while OpenAI’s cameo feature allows people to set instructions for how they appear in videos others generate of them — guardrails that came in response to early criticism of Sora — the deceased have no such say. I’ll bet Richard Nixon would be rolling over in his grave if he could see the deepfake I made of him advocating for police abolition.

Deepfakes of Richard Nixon, John Lennon, Martin Luther King, Jr., and Robin Williams. Image Credits: Sora, screenshots by TechCrunch

    OpenAI did not respond to TechCrunch’s request for comment on the permissibility of deepfaking dead people. However, it’s possible that deepfaking dead celebrities like Williams is within the firm’s acceptable practices; legal precedent shows that the company likely wouldn’t be held liable for the defamation of the deceased.

    “To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough,’ just so other people can churn out horrible TikTok slop puppeteering them is maddening,” Williams wrote.

OpenAI’s critics accuse the company of taking a fast-and-loose approach on such issues, which is why Sora was quickly flooded with AI clips of copyrighted characters like Peter Griffin and Pikachu upon its release. CEO Sam Altman originally said that Hollywood studios and agencies would need to explicitly opt out if they didn’t want their IP to be included in Sora-generated videos; he has since said the company will reverse this position. The Motion Picture Association had already called on OpenAI to take action on the issue, declaring in a statement that “well-established copyright law safeguards the rights of creators and applies here.”

    Sora is, perhaps, the most dangerous deepfake-capable AI model accessible to people so far, given how realistic its outputs are. Other platforms like xAI lag behind, but have even fewer guardrails than Sora, making it possible to generate pornographic deepfakes of real people. As other companies catch up to OpenAI, we will set a horrifying precedent if we treat real people — living or dead — like our own personal playthings.

    Amanda Silberling

    Source link

  • Top analyst on concerns about Nvidia fueling an AI bubble: ‘We’ve seen this movie before. It was called Enron, Tyco’ | Fortune

    A top Wall Street analyst has sounded an alarm over the U.S. equity bull market, warning that its remarkable run is built on a precariously narrow foundation: a surge in spending on, and optimistic assumptions about, infrastructure for artificial intelligence (AI). This spending has fueled a boom in the shares of most of the so-called Magnificent 7 and a few dozen related businesses, which have now come to account for roughly 75% of the S&P 500’s returns since the rally of the last few years began.

The commentary on September 29 by Morgan Stanley Wealth Management’s chief investment officer, Lisa Shalett, frames the current market boom as a “one-note narrative” almost entirely dependent on massive capital expenditures in generative AI, raising questions about its durability as economic and competitive risks start to mount. Shalett’s critique came as some people in the AI field, and many financial commentators around Wall Street, were fretting over market exuberance and beginning to talk openly about a bubble.

In an interview with Fortune, Shalett said she was “very concerned” about this theme in markets, saying her office’s view had broadened from a belief that the market would bid up only seven or 10 stocks to one encompassing roughly 40. “At the end of the day … this is not going to be pretty” if and when the generative AI capital expenditure story falters, she said.

Shalett said she’s worried about a “Cisco moment” like the one that followed the dotcom bubble’s burst in 2000, referring to the company that was briefly the most valuable in the world before an 80% stock plunge. When asked how close we are to such a moment, Shalett said probably not in the next nine months, but very possibly in the next 24. Looking at the actual spending and the amount of capital coming into the space, “we’re a lot closer to the seventh inning than the first or second inning,” she said.

    ‘Starting to do what all ultimate bad actors do’

    Shalett’s comments centered on several recent multibillion-dollar deals to scale up data-center infrastructure. As notable substacker and former Atlantic writer Derek Thompson recently noted in a post titled “This is how the AI bubble will pop,” so much money is being spent to support AI’s energy-consumption needs that it’s the equivalent of a new Apollo space mission every 10 months. (Tech companies are spending roughly $400 billion this year alone on data-center infrastructure, while the Apollo program allocated about $300 billion in today’s dollars to get to the moon from the 1960s to the ’70s.)
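Thompson’s Apollo comparison can be checked with quick arithmetic; this is a rough sketch using only the two figures cited in the paragraph above:

```python
# Quick arithmetic behind the "a new Apollo mission every 10 months" claim:
# ~$400B/year in current data-center spending vs. an Apollo program that
# cost ~$300B in today's dollars.

annual_datacenter_spend_b = 400  # $B per year, 2025 estimate cited above
apollo_program_cost_b = 300      # $B, inflation-adjusted

months_per_apollo = apollo_program_cost_b / annual_datacenter_spend_b * 12
print(f"One Apollo program every {months_per_apollo:.0f} months")
# -> One Apollo program every 9 months
```

At the cited figures this works out to one Apollo roughly every nine months, in the same ballpark as Thompson’s "every 10 months."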

    What’s more than a little concerning to Shalett is that one company alone, Nvidia—the most valuable company in the history of the world, with an over $4.5 trillion market cap—is at the center of a significant number of these deals. In September alone, Nvidia invested $100 billion in OpenAI in a massive deal, just days after pledging $5 billion to Intel (the Intel agreement was tied to chips, not data-center infrastructure, per se).

    Fortune‘s Jeremy Kahn reported in late September on significant concerns about “circular” financing, or Nvidia’s cash essentially being recycled throughout the AI industry. Shalett sees this as a major concern and a major sign that the business cycle is headed toward some kind of endgame. “The guy at the epicenter, Nvidia, is basically starting to do what all ultimate bad actors do in the final inning, which is extending financing, they’re buying their investors.”

    Shalett expanded on her concerns by saying that companies around Nvidia “are starting to become interwoven.” She noted that OpenAI is partially owned by Microsoft, but now Nvidia has also made an investment in the startup, while Oracle and AMD each have their own purchasing agreements with OpenAI. But OpenAI also has a data-center deal with tech giant Oracle, with the “bad news,” Shalett notes, that this deal is “totally debt-financed.” OpenAI also struck a deal in October with chip-maker AMD that allows OpenAI to buy up to 10% of AMD. “Essentially, Nvidia’s main competitor is going to be partially owned by OpenAI, which is partially owned by Nvidia. So, Nvidia can ‘own’ a piece of its largest competitor. It is totally circular and increases systemic risk.”

    When reached for comment, a spokesperson for Nvidia said, “We do not require any of the companies we invest in to use Nvidia technology.”

    Nvidia CEO Jensen Huang discussed the OpenAI investment in an appearance on the Bg2 podcast with Brad Gerstner and Clark Tang on September 25, calling it an “opportunity to invest” and part of a partnership geared toward helping OpenAI build its own AI infrastructure. When asked about the allegation of circular financing in general and the Cisco precedent in particular, Huang talked about how OpenAI will fund the deal, arguing that it will have to be funded by OpenAI’s future revenues, or “offtake,” which he pointed out are “growing exponentially,” and by its future capital, whether it’s raised by a sale of equity or debt. That will depend on investors’ confidence in OpenAI, he said, and beyond that, it’s “their company, it’s not my business. And of course, we have to stay very close to them to make sure that we build in support of their continued growth.”

    Shalett said that she and her team were “starting to watch” for signs of a bubble popping, highlighting the $300 billion deal OpenAI struck with Oracle roughly a week before its $100 billion data-center deal with Nvidia. Analysts at KeyBanc Capital Markets estimated that Oracle will have to borrow $100 billion of that amount—$25 billion a year for the next four years.

    “Every morning the opening screen on my Bloomberg is what’s going on with CDS spreads on Oracle debt,” Shalett said, referring to credit default swaps, the financial instrument that was obscure before the Great Financial Crisis but became infamous for the role it played in a global market meltdown. CDSs essentially serve as insurance for investors in case of insolvency by a market entity. “If people start getting worried about Oracle’s ability to pay,” Shalett said, “that’s gonna be an early indication to us that people are getting nervous.” She added that every indication points, to her, to the end of a cycle, and that history is littered with cautionary tales from such times.

    Oracle did not respond to requests for comment.

    90% growth since the last bear market

    Since the October 2022 bear market bottom and the launch of ChatGPT, according to Shalett’s calculations, the S&P 500 has soared 90%, but most of these gains have come from a small group of stocks. The so-called “Magnificent Seven”—including high-profile names like Nvidia and Microsoft—plus another 34 AI data-center ecosystem companies, are responsible for, as cited by Shalett and separately by JP Morgan Asset Management’s Michael Cembalest, about three-quarters of overall market returns, 80% of earnings growth, and a staggering 90% of capital spending growth in the index. Comparatively, the other 493 names in the S&P 500 are up just 25%—showing just how concentrated the rally has become.

    The so-called “hyperscaler” companies alone are now spending close to $400 billion annually on capex supporting AI infrastructure, Morgan Stanley Wealth Management calculated. The economic influence of AI capex is now immense, contributing an estimated 100 basis points—fully one percentage point—to second-quarter GDP growth, according to Morgan Stanley’s research. This pace outstrips the rate of underlying consumer spending growth by tenfold, underscoring its centrality to both market performance and broader economic data.

    “People conflate AI adoption, which is in the first inning, with the capex infrastructure buildout, which has been going full-out since 2022,” Shalett told Fortune. She cited concerns about the prominence of private equity and debt capital coming into play, as that “tends to produce bubbles, because it may be unspoken-for capacity.” In other words, people have money to burn and they’re throwing it at things that may not pay off.

    Shalett waved away macro theories about the labor market or the Federal Reserve. “We think that’s missing the forest for the trees because the forest is entirely rooted in this one story” about AI infrastructure. Morgan Stanley’s bull-case mid-2026 price target for the S&P 500 is an eye-popping 7,200, but Shalett highlights that even the most optimistic outlook admits that risk premiums, credit spreads, and market volatility do not seem to fully account for the vulnerabilities lurking beneath the AI-fueled advance.

    Shalett’s analysis suggests that AI capex maturity is approaching and some possible slowdowns are already visible. For instance, hyperscalers have already seen free-cash-flow growth turn negative, a sign that investment may have outpaced underlying technology returns. Strategas, an independent research firm, estimates that hyperscaler free cash flow is set to shrink by more than 16% over the next 12 months, putting pressure on lofty valuations and forcing investors to demand more discipline in how these funds are deployed.

    Shalett was asked about data centers’ disproportionate impact on GDP throughout 2025, which media blogger Rusty Foster of Today in Tabs described as: “Our economy might just be three AI data centers in a trench coat.” The Morgan Stanley exec said, “That’s what makes this cycle so fragile,” adding that at some point, “we’re not gonna be building any data centers for a while.” After that, it’s just a question of whether you crash: “Do you have a mild 1991-92-style recession or does it really become bad?”

    A more bullish case

    Bank of America Research weighed in on the semiconductors sector in a Friday note, writing that vendor financing in the space, especially Nvidia’s $100 billion commitment to OpenAI, has been “raising eyebrows.” Nevertheless, the team, led by senior analyst Vivek Arya, argued that the deal is structured by performance and competitive need, rather than pure speculative frenzy.

    In an interview with Fortune, Arya explained why he wasn’t worried despite the “optics” being pretty obviously bad. “It’s very easy to say, ‘Oh, Nvidia is giving [OpenAI] money and they are buying chips with that money,’” and so on, but he argued the headlines are misleading about how much money is actually being spent, and that the $100 billion sticker price on the OpenAI deal “scared everyone.” Noting that the deal has multiple tranches that will play out over several years, he said it’s not like Nvidia is “just handing a $100 billion check to OpenAI [and saying] you know, go have fun.”

    “Nvidia didn’t fund all of it,” Arya said of the wider generative AI capex boom. Citing public filings, Arya argued that Nvidia’s entire investment in the AI ecosystem is in fact less than $8 billion or so over the last 12 months, not such a large figure after all. And he’s still bullish on Nvidia and OpenAI, he added, because he sees them as the winners of this particular story. “We think they are going to be among the four or five ecosystems that come up. It’s not like Nvidia is going and investing in every one of those ecosystems, right? They’re only investing in one of those five, which is, of course, the most disruptive,” that being OpenAI.

    When asked about his own fears of a bubble, Arya struck a calmer but strikingly similar tune to Shalett’s. “I’m extremely comfortable with what will happen in the next 12 months,” Arya said. “And I have a high sense of optimism about what will happen in the next five years. But can there be periods of digestion in between? Yeah.” This, he explained, is the nature of any infrastructure cycle: “it’s not always up and to the right.” In other words, after the next nine months in Shalett’s opinion and the next year in Arya’s, the data-center buildout endgame could be in play. “When these data centers are built,” Arya said, “they are not built for today’s demand. They’re built with some anticipation of demand that will develop in the next, you know, 12 to 18 months. So, are they going to be 100% utilized all the time? No.”

    Rising worries about a bubble

    Some of the biggest names in tech and on Wall Street were hedging hard about the possibility of a bubble on Friday. Goldman Sachs CEO David Solomon and Jeff Bezos, both speaking at a tech conference in Turin, Italy, said they were seeing the same patterns as Shalett. Solomon said the massive amounts of spending weren’t fundamentally different from other booms and busts. “There will be a lot of capital that was deployed that didn’t deliver returns,” he said. That’s no different from how investment works. “We just don’t know how that will play out.”

    Bezos characterized it as “kind of an industrial bubble,” arguing that the infrastructure would pay off for many years to come.

    OpenAI CEO Sam Altman, who got markets jittery in late August when he mentioned the B-word, was asked again to comment on the subject while touring (what else?) a giant new data center in Texas. “Between the 10 years we’ve already been operating and the many decades ahead of us, there will be booms and busts,” Altman said. “People will overinvest and lose money, and underinvest and lose a lot of revenue.”

    For his part, Cisco CEO John Chambers, one of the faces of the dotcom bubble, told the Associated Press on October 3 that he sees “a lot of tremendous optimism” about AI that is similar to the “irrational exuberance on a really large scale” that marked the internet age. It indicates a bubble to him, but only “a future bubble for certain companies. Is there going to be a train wreck? Yes, for those that aren’t able to translate the technology into a sustainable competitive advantage, how are you going to generate revenue after all the money you poured into it?”

    When asked whether the size of this potential bubble represents uncharted waters for the economy, especially considering the one-note nature of the long bull market, Shalett said Wall Streeters are always evaluating risk. But putting on her “American citizen hat,” she warned about the media consolidation that sees Oracle’s founder Larry Ellison also now playing a major role in TikTok (as part of a buying consortium of Trump-friendly billionaires) and Paramount in Hollywood and CBS News in New York (through his son, David Ellison, the media company’s new owner). Shalett said she’s worried about “groupthink” filtering into the functioning of markets. “That is not something that most of us have experienced in our lifetimes,” she said. “You stop factoring in risk premiums into markets, there is no bear case to anything.”

    [ad_2]

    Nick Lichtenberg

    Source link

  • Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations | Fortune

    [ad_1]

    Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment. 

    The report was originally published on the Australian government’s Department of Employment and Workplace Relations website in July. A revised version was quietly published on Friday after Chris Rudge, a Sydney University researcher in health and welfare law, said he alerted media outlets that the report was “full of fabricated references.”

    Deloitte reviewed the 237-page report and “confirmed some footnotes and references were incorrect,” the department said in a statement Tuesday.

    Deloitte did not immediately respond to Fortune’s request for comment.

    The revised version of the report includes a disclosure that a generative AI language system, Azure OpenAI, was used in its creation. It also removes the fabricated quotes attributed to a federal court judge and references to nonexistent reports attributed to law and software engineering experts. Deloitte noted in a “Report Update” section that the updated version, dated September 26, replaced the report published in July. 

    “The updates made in no way impact or affect the substantive content, findings and recommendations in the report,” Deloitte wrote.

    In late August the Australian Financial Review first reported that the document contained multiple errors, citing Rudge as the researcher who identified the apparent AI-generated inaccuracies. 

    Rudge discovered the report’s mistakes when he read a portion incorrectly stating Lisa Burton Crawford, a Sydney University professor of public and constitutional law, had authored a non-existent book with a title outside her field of expertise.

    “I instantaneously knew it was either hallucinated by AI or the world’s best kept secret because I’d never heard of the book and it sounded preposterous,” Rudge told The Associated Press on Tuesday. 

    The Big Four consulting firms and global management firms such as McKinsey have invested hundreds of millions of dollars into AI initiatives to develop proprietary models and increase efficiency. In September, Deloitte said it would invest $3 billion in generative AI development through fiscal year 2030. 

    Anthropic also announced a Deloitte partnership on Monday that includes making Claude available to more than 470,000 Deloitte professionals.

    In June, the UK Financial Reporting Council, an accountancy regulator, warned that the Big Four firms were failing to monitor how AI and automated technologies affected the quality of their audits. 

    Though the firm will refund its last payment installment to the Australian government, Senator Barbara Pocock, the Australian Greens party’s spokesperson on the public sector, said Deloitte should refund the entire $290,000.

    Deloitte “misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent,” Pocock told Australian Broadcasting Corp. “I mean, the kinds of things that a first-year university student would be in deep trouble for.”

    “The matter has been resolved directly with the client,” a spokesperson from Deloitte Australia told The Associated Press.


    [ad_2]

    Nino Paoli

    Source link

  • OpenAI Gives Us a Glimpse of How It Monitors for Misuse on ChatGPT

    [ad_1]

    OpenAI’s latest report on malicious AI use underscores the tightrope that AI companies are walking between preventing misuse of their chatbots and reassuring users that their privacy is respected.

    The report, which dropped today, highlights several cases where OpenAI investigated and disrupted harmful activity involving its models, focusing on scams, cyberattacks, and government-linked influence campaigns. However, it arrives amid growing scrutiny over another type of AI risk, the potential psychological harms of chatbots. This year alone has seen several reports of users committing acts of self-harm, suicide, and murder after interacting with AI models. This new report, along with previous company disclosures, provides some additional insight into how OpenAI moderates chats for different kinds of misuse.

    OpenAI said that since it began publicly reporting on threats in February 2024, it has disrupted and reported more than 40 networks that violated its usage policies. In today’s report, the company shared new case studies from the past quarter and details on how it detects and disrupts malicious use of its models.

    For example, the company identified an organized crime network, reportedly based in Cambodia, that tried to use AI to streamline its workflows. Additionally, a Russian political influence operation reportedly used ChatGPT to generate video prompts for other AI models. OpenAI also flagged accounts linked to the Chinese government that violated its policies on national security use, including requests to generate proposals for large-scale systems designed to monitor social media conversations.

    The company has previously said, including in its privacy policy, that it uses personal data, such as user prompts, to “prevent fraud, illegal activity, or misuse” of its services. OpenAI has also said it relies on both automated systems and human reviewers to monitor activity. But in today’s report, the company offered slightly more insight into how it tries to prevent misuse while still protecting users more broadly.

    “To detect and disrupt threats effectively without disrupting the work of everyday users, we employ a nuanced and informed approach that focuses on patterns of threat actor behavior rather than isolated model interactions,” the company wrote in the report.

    While monitoring for national security breaches is one thing, the company also recently outlined how it addresses harmful use of its models by users experiencing emotional or mental distress. Just over a month ago, the company published a blog post detailing how it handles these types of situations. The post came amid media coverage of violent incidents reportedly linked to ChatGPT interactions, including a murder-suicide in Connecticut.

    The company said that when users write that they want to hurt themselves, ChatGPT is trained not to comply and instead acknowledge the user’s feelings and steer them toward help and real-world resources.

    When the AI detects someone is planning to harm others, the conversations are flagged for human review. If a human reviewer determines the person represents an imminent threat to others, they can report them to law enforcement.

    OpenAI also acknowledged that its model’s safety performance can degrade during longer user interactions and said it’s already working to improve its safeguards.

    [ad_2]

    Bruce Gil

    Source link

  • Excitement — and concerns — over OpenAI’s Sora 2 and other AI video tools

    [ad_1]

    The next frontier of online video further blurs the line between human- and AI-generated content. 

    In late September, Meta CEO Mark Zuckerberg announced “Vibes,” a feature that allows users to create and watch AI-generated videos. ChatGPT maker OpenAI quickly followed with the launch of Sora 2, which people can use to create videos with “cameos” of themselves, friends and others who grant permission. Despite only being available by invitation, the new tool promptly jumped to the top of Apple’s app store.

    The apps are part of a burgeoning family of AI tools that make it far easier for non-experts to create sophisticated videos, including hyperrealistic or fantastical content.

    “You’re only limited by your imagination,” Hany Farid, a professor of electrical engineering and computer sciences at UC Berkeley, told CBS News. 

    In unveiling Sora 2, for example, OpenAI showed how simple prompts such as “a man rides a horse which is on another horse” or “figure skater performs a triple axle with a cat on her head” are used to create quirky, and convincing, videos. 

    Beyond offering a creative outlet, the tools also represent a new era for social media, with Sora 2 and Meta’s “Vibes” offering a TikTok-like experience. The main difference: The videos users scroll are all AI-generated. 

    Adam Nemeroff, an assistant provost and technology expert at Quinnipiac University, thinks Meta is planning for AI content generated through Vibes to eventually co-exist in users’ feeds with human-made videos. “I would imagine that would be the case, because Meta is in the business of attention.”

    Nemeroff also expects big tech players to eventually try to monetize AI-generated content through advertisements and brand placements.

    Farid noted that, despite the enormous growth of generative AI tools like ChatGPT, Anthropic’s Claude and Google’s Gemini, tech companies are still refining how to churn out profits from the rapid adoption of artificial intelligence. 

    OpenAI has said it plans to give Sora 2 users the option to “pay some amount to generate an extra video if there’s too much demand relative to available compute.” 

    Slop in the face?

    The emergence of AI-created videos is heightening concerns about a potential flood of low-quality “AI slop,” including “deepfake” content that could be mistaken as real. Meta, for instance, allows users to cross-post “Vibes” videos on other platforms, such as Facebook Stories.

    “They’re the kinds of things that you can kind of distract from other more reputable or better information from a quality standpoint,” Nemeroff said. “But they’re often popping up next to the same things in the same places.”

    A page on OpenAI’s website details some of the measures the company has taken with Sora 2 to limit the production of potentially harmful content and to help users distinguish AI content. “Every video generated with Sora includes both visible and invisible provenance signals,” according to the company.

    OpenAI and Meta did not respond to requests for comment on what steps they are taking to ensure the apps are used safely. 

    Disruption is messy

    Experts say advancements in AI-generated videos portend major changes for the entertainment industry and other online content players. 

    “Anybody with a keyboard and internet connection will be able to create a video of anybody saying or doing anything they want,” Farid said.

    That shift will be messy, with movie and TV industry professionals already insisting on industry guardrails to ensure AI doesn’t encroach on their livelihood. One immediate concern for the industry is that Sora 2, which lets content creators use clips of copyrighted characters, initially appeared to put the burden of enforcing those rights on copyright holders.

    “Since Sora 2’s release, videos that infringe our members’ films, shows and characters have proliferated on OpenAI’s service and across social media,” Charles Rivkin, chairman and CEO of the Motion Pictures Association, said in a statement on Tuesday. “While OpenAI clarified it will ‘soon’ offer rightsholders more control over character generation, they must acknowledge it remains their responsibility – not rightsholders’ – to prevent infringement on the Sora 2 service. OpenAI needs to take immediate and decisive action to address this issue.”

    In another recent controversy over the use of AI, Dutch producer and comedian Eline Van der Velden recently sparked backlash in Hollywood after she unveiled an AI-generated actress. The Screen Actors Guild responded by saying that “creativity is, and should remain, human-centered.”

    “I think there’s a disruption coming, and there will be some destruction and some creation,” Farid said. “And I think it’s coming for more than just the movie and music industry — it’s coming for a lot of industries.”

    [ad_2]

    Source link

  • OpenAI acquires finance startup Roi

    [ad_1]

    Artificial intelligence company OpenAI has acquired Roi, a startup focused on using AI to personalize investing. Founded to make investing more accessible, Roi built tools that deliver real-time, individualized financial insights and education, according to this week’s Roi announcement. The company said the acquisition marks a milestone in its mission to bring personalization to software broadly, […]

    [ad_2]

    FinAi News, AI-assisted

    Source link

  • Machine Intuition: Can A.I. Out-Innovate Human Strategy?

    [ad_1]

    When algorithms start to imagine, human decision-making enters uncharted territory. Unsplash+

    In boardrooms, creativity is often conflated with charisma—a founder’s flash of insight, a strategist’s “feel” for the market. The rise of creative A.I. complicates that mythology. Systems that once mimicked patterns are beginning to originate them, not by feeling their way through ambiguity, but by searching vast spaces of possibilities with tireless composure. The question for leadership is no longer whether A.I. can imitate the past. It is whether machines can meaningfully extend the frontier of invention—and how executives should organize decision-making when they do.

    From imitation to invention

    The cleanest evidence that A.I. is stepping past imitation arrives where truth is checkable: mathematics, molecular science and materials discovery.

    In 2022, DeepMind’s AlphaTensor not only learned to multiply matrices faster but also discovered new, provably correct algorithms that improved upon long-standing human results across various matrix sizes. That is not style transfer; it is algorithmic invention in a domain where proof, not opinion, decides progress.
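    To make concrete what “discovering a multiplication algorithm” means here: the classic human benchmark AlphaTensor competed against is Strassen’s 1969 scheme, which multiplies two 2×2 matrices with seven scalar multiplications instead of the naive eight. A minimal Python sketch of that scheme (variable names are ours, for illustration only):

    ```python
    def strassen_2x2(A, B):
        """Multiply two 2x2 matrices with 7 scalar multiplications
        instead of the naive 8, per Strassen (1969)."""
        (a, b), (c, d) = A
        (e, f), (g, h) = B
        # Seven products, each a single scalar multiplication.
        p1 = a * (f - h)
        p2 = (a + b) * h
        p3 = (c + d) * e
        p4 = d * (g - e)
        p5 = (a + d) * (e + h)
        p6 = (b - d) * (g + h)
        p7 = (a - c) * (e + f)
        # The product matrix is recovered from additions alone.
        return [[p5 + p4 - p2 + p6, p1 + p2],
                [p3 + p4, p1 + p5 - p3 - p7]]
    ```

    AlphaTensor’s contribution was finding schemes of exactly this kind, for larger matrix sizes, that use even fewer multiplications than the best known human constructions.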

    In late 2023, an A.I. system known as GNoME proposed 2.2 million crystal structures and identified roughly 381,000 as stable, nearly an order-of-magnitude expansion of the known “materials possibility space.” Labs have already begun synthesizing candidates for batteries and semiconductors, creating a faster loop between computational hypothesis and physical validation.

    In 2024, AlphaFold 3 advanced from single-protein structure prediction to modelling interactions among proteins, nucleic acids and small molecules. This capability matters for drug design because binding, not just shape, drives efficacy. The model’s accuracy on complex assemblies has energized pharmaceutical R&D, though access limits have drawn pushback from academics who want open tools.

    Progress is also visible in symbolic reasoning. DeepMind reported systems that solve Olympiad-level problems at a level comparable to an International Mathematical Olympiad silver medalist. At the same time, the research community continues to explore machine-generated conjectures, including the “Ramanujan Machine” work on fundamental constants.

    None of this makes A.I. creative in the human sense. It does, however, expand the adjacent possible, surfacing options that were invisible or unaffordable to explore manually. When machines push frontiers in domains with crisp feedback—proofs or measured properties—boards should treat them not as autocomplete engines, but as option-generation machines for strategy.

    A more recent wave of “reasoning models” underscores the shift. OpenAI’s “o” line prioritizes deliberate chains of thought and planning over fast pattern matching, improving performance on mathematics and coding tasks. Whatever the brand names, the direction of travel is clear: more search, more planning, more verifiable problem-solving—and less reliance on past style to predict the future.

    What machines still cannot feel

    Creativity at the level that moves markets also rests on three human anchors:

    • Intuition: tacit pattern recognition shaped by lived experience and domain immersion.
    • Emotion: the energy to pick a fight with the status quo, to persist when the spreadsheet says “no.”
    • Cultural context: sensitivity to norms, taste and symbolism that gives an idea social traction.

    A.I. can simulate tone and recall cultural references. Still, it has no stake in the outcome and no phenomenology—no gut to trust, no fear to overcome, no values to defend. That absence is evident in strategy, where the “right” move hinges on timing, narrative and coalition-building as much as on optimization.

    The practical stance, therefore, is not man versus machine, but machine-extended human judgement. Executives should treat creative A.I. as a means to broaden the search over hypotheses and prototypes, then apply human judgment, ethics and narrative sense to decide which bets to place and how to mobilize organizations around them.

    How leaders should exploit machine invention—without outsourcing judgment

    1) Run invention portfolios, not tool pilots.
    The AlphaTensor and GNoME results serve as reminders that A.I.’s edge lies in search. Build portfolios where models explore thousands of algorithmic or design candidates in parallel, with clear funnels for lab validation or market testing. Resist vanity pilots; instrument programs like a venture portfolio with kill criteria, milestone economics and fast capital recycling.

    2) Separate generation from selection.
    Let models overgenerate options; reserve selection for cross-functional councils that combine domain experts with brand, legal and policy voices. In drug discovery, for example, computational signals are necessary, but go-to-market narratives, regulatory risk and patient trust still decide value. AlphaFold 3’s critics highlight that access and transparency are strategic variables, not just technical ones.
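    The split can be made literal in tooling: generation and selection live in separate code paths with separate owners. A deliberately simplified sketch, where a random-number source stands in for a model and a scoring function stands in for the selection council (none of this references a real model API):

    ```python
    import random

    def generate_candidates(seed, n=1000):
        """Generation stage: a model (here a stand-in random source)
        deliberately overgenerates candidate options."""
        rng = random.Random(seed)
        return [rng.uniform(0, 1) for _ in range(n)]

    def select(candidates, score, k=5):
        """Selection stage: a separate, human-owned step ranks the
        candidates and keeps only the top k."""
        return sorted(candidates, key=score, reverse=True)[:k]

    # Overgenerate, then hand off to a distinct selection step.
    shortlist = select(generate_candidates(seed=42), score=lambda x: x, k=5)
    ```

    Keeping the two stages in separate functions mirrors the organizational split: the generator never decides what ships.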

    3) Put proof and measurement at the core.
    Favor use cases with verifiable feedback, such as proofs, A/B tests and measurable properties, before pushing into messier cultural domains. The faster the loop from hypothesis to truth signal, the more compounding advantage you build. That is why material and algorithm discovery have progressed rapidly, while brand-level creativity remains a human-led endeavor.
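    For teams whose fastest truth signal is an A/B test, the loop from hypothesis to verdict can be a one-line statistic: the two-proportion z-score, which measures how many standard errors separate two conversion rates. A minimal standard-library sketch (the function name and inputs are illustrative, not from the article):

    ```python
    import math

    def ab_test_z(conversions_a, n_a, conversions_b, n_b):
        """Two-proportion z-score: how many standard errors variant B's
        conversion rate sits above (or below) variant A's."""
        p_a = conversions_a / n_a
        p_b = conversions_b / n_b
        # Pooled rate under the null hypothesis of no difference.
        p_pool = (conversions_a + conversions_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se
    ```

    A |z| above roughly 1.96 corresponds to significance at the conventional 5% level; the point is that the verdict is mechanical and fast, which is what lets such feedback loops compound.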

    4) Couple A.I. with automated execution.
    The materials ecosystem illustrates the compounding effect when A.I. designs are paired with automated synthesis and testing. The playbook for enterprises is similar: link generative systems to simulation, robotic process automation or programmatic experimentation to prevent ideas from dying in slide decks.

    5) Govern for explainability where it matters—and for outcomes where it doesn’t.
    Demand explanations in regulated or safety-critical contexts. Elsewhere, prioritize outcomes with robust testing and guardrails. AlphaTensor’s value lies in proofs; a marketing concept’s value lies in performance lift, not in the model’s narrative about why it works.

    6) Incentivize “taste” as a strategic moat.
    As models make it cheap to generate competent options, advantage shifts to taste—the human ability to recognize what resonates in a culture. Recruit and reward this scarce judgment. Machines can propose; only leaders can pick the hill to die on.

    What this means for decision-making

    The companies that convert creative A.I. into a durable advantage will do three things differently.

    • Treat search as a first-class strategic function. Leaders will invest in compute, data and optimization talent the way prior generations invested in distribution—because the ability to search better than competitors becomes a compounding differentiator in R&D, pricing, logistics and design.
    • Reframe “intuition” as a disciplined interface. Human intuition does not retire; it selects, sequences and stories the outputs of machine search. That interface needs structure: pre-registered criteria, red-team rituals, ethical review and explicit narrative strategy.
    • Professionalize uncertainty. Creative A.I. expands the option set and the error surface. Governance must evolve from model-centric compliance to portfolio-centric risk control, with exposure limits, scenario triggers and graceful rollback plans. The lesson from AlphaFold 3’s access debate is that licensing, openness and ecosystem design are themselves strategic levers, not afterthoughts.

    The bottom line is not that machines have acquired emotions or culture. They have acquired something strategically scarce: the capacity to search, prove and propose at a superhuman scale in domains where truth can come back to haunt them. That capability does not substitute for human attributes; it amplifies them. The winning organizations will be those that marry machine-scale exploration with human-grade selection, treating A.I. neither as a muse nor as a mask, but as the most relentless research partner strategy has ever had.


    [ad_2]

    Gonçalo Perdigão

    Source link

  • Jony Ive Says He Wants His OpenAI Devices to ‘Make Us Happy’

    [ad_1]

    At OpenAI’s developer conference in San Francisco on Monday, CEO Sam Altman and ex-Apple designer Jony Ive spoke in vague terms about the “family of devices” the pair are currently working to develop.

    “As great as phones and computers are, there’s something new to do,” Altman said on stage with Ive. The duo confirmed that OpenAI is working on more than one hardware product but finer details, ranging from use cases to specifications, remain under wraps.

    “Hardware is hard. Figuring out new computing form factors is hard,” said Altman in a media briefing earlier in the day. “I think we have a chance to do something amazing, but it will take a while.”

    Ive said that his team has generated “15 to 20 really compelling product ideas” on the journey to find the right kind of hardware to focus the company’s efforts on.

    “I don’t think we have an easy relationship with our technology at the moment,” said Ive. “Rather than seeing AI as an extension of those challenges, I see it very differently.” Ive explained that one reason he wanted to design an AI-powered device with OpenAI is to transform the relationship people currently have to the devices they use every day.

    While Ive acknowledged the potential for AI to boost productivity, efficiency doesn’t appear to be his core goal with these devices. Rather, he hopes for them to bring more social good into the world. The devices should “make us happy, and fulfilled, and more peaceful, and less anxious, and less disconnected,” he said.

    Earlier reporting indicated that OpenAI is planning to manufacture a new category of hardware that doesn’t resemble a phone or laptop. In a recent preview for OpenAI staff, Altman hinted that the product would be aware of a user’s surroundings and day-to-day experiences, according to The Wall Street Journal. The device might be screenless and rely on inputs from cameras and microphones.

    OpenAI also hasn’t said publicly when it plans to launch the devices, though the target is reportedly late 2026, according to the Financial Times. The publication recently reported that development of the device has been stymied by technical issues.

    Reece Rogers, Boone Ashworth

  • AI chipmaker AMD strikes major deal with OpenAI

    Microchip maker AMD got a huge boost in the race to supply the artificial intelligence revolution. OpenAI, the company behind major AI platform ChatGPT, announced a deal for AMD to provide the AI startup with its high-performance processing chips starting next year. Jo Ling Kent discusses the impact.

  • OpenAI Goes All-In on Vibe Coding, Says ‘Mature Experiences’ Are on the Horizon

    OpenAI’s DevDay 2025 featured a major focus on vibe coding. The company, which boasts that it now has more than 800 million weekly active users for ChatGPT, announced a variety of new tools for developers during its annual event in San Francisco. Headlining the announcements: the ability to build with apps directly in ChatGPT (including eventually allowing “mature experiences” once age verification is in place) and the introduction of a toolkit that will help users build and deploy their own AI agents.

    In OpenAI’s apparent effort to turn ChatGPT into a full-on frontend development environment, the company announced its new Apps SDK (Software Development Kit) that will allow devs to pull in supported third-party apps to complete tasks. In a demo, the company showed ChatGPT working with Zillow to generate a map of homes available for sale in Pittsburgh. Zillow created an interactive map based on the prompt, and the user was able to ask additional questions based on the map. The functionality should allow users to create tools using third-party apps, which they can preview directly within ChatGPT.
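    The flow in the Zillow demo (a natural-language prompt routed to a partner app, which returns a structured result the chat UI can render) can be sketched in plain Python. To be clear, this is a toy illustration under assumed names: `ToyAppRegistry` and `zillow_search` are invented for this sketch and are not the real Apps SDK surface, which OpenAI documents separately.

    ```python
    # Illustrative only: a toy dispatcher mimicking the prompt-to-app routing
    # described above. All names here are hypothetical, not the Apps SDK API.
    from typing import Callable, Dict

    AppHandler = Callable[[str], dict]

    class ToyAppRegistry:
        """Routes a user prompt to a registered partner-app handler."""
        def __init__(self) -> None:
            self._handlers: Dict[str, AppHandler] = {}

        def register(self, app_name: str, handler: AppHandler) -> None:
            self._handlers[app_name] = handler

        def dispatch(self, app_name: str, prompt: str) -> dict:
            if app_name not in self._handlers:
                raise KeyError(f"No app registered under {app_name!r}")
            return self._handlers[app_name](prompt)

    def zillow_search(prompt: str) -> dict:
        # A real handler would call the partner's service; here we just
        # return a structured placeholder a chat UI could render as a map.
        return {"app": "zillow", "widget": "map", "query": prompt}

    registry = ToyAppRegistry()
    registry.register("zillow", zillow_search)
    result = registry.dispatch("zillow", "homes for sale in Pittsburgh")
    ```

    The point of the sketch is the shape of the interaction: the model picks an app, forwards the prompt, and renders whatever structured payload comes back inline in the conversation.
    
    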

    According to OpenAI, Apps SDK is available immediately for Free, Go, Plus, and Pro plans. Support will be available out of the gate for Booking.com, Canva, Coursera, Figma, Expedia, Spotify, and Zillow. The company also said that it plans to offer support for DoorDash, OpenTable, Target, and Uber in the near future. For now, users will only be able to make and use the apps in preview, but it plans to allow developers to submit apps later this year, with a directory for apps planned so that developers can share their vibe-based creations.

    Many details about the Apps SDK are still to come. Altman promised that monetization guidelines, for instance, are in the pipeline. Also on the way: “mature experiences.” According to OpenAI’s App developer guidelines, “Apps must be suitable for general audiences, including users aged 13–17. Apps may not explicitly target children under 13.” But that won’t be the case forever. “Support for mature (18+) experiences will arrive once appropriate age verification and controls are in place,” it reads.

    The company recently introduced age verification tools designed to shift underage users into a ChatGPT experience with much stricter guidelines following a wrongful death lawsuit filed against the company by the family of a teenager who died by suicide after extensive conversations with the chatbot. It appears that once it hammers out those details, it’ll open the floodgates to more “adult” functions.

    In addition to Apps SDK, the company also rolled out its AgentKit API (Application Programming Interface), which will allow users to build their own agentic AI tools. It’s a significant expansion of OpenAI’s Agent, which it introduced earlier with the promise that the system could navigate the web autonomously to complete tasks assigned to it by the user.

    Sticking with the vibe coding theme, AgentKit’s primary feature is its Agent Builder, which allows users to program their AI agent’s functionality through a visual interface. Altman described it as being like Canva for building agents, making it more accessible to those who are less technical.
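    OpenAI hasn’t published AgentKit’s internals here, but the core idea of a visual canvas that wires steps together can be approximated as an ordered pipeline of functions over shared state. Everything below (`run_pipeline`, the step names) is a hypothetical sketch, not AgentKit’s actual API.

    ```python
    # Illustrative only: a toy, node-based pipeline standing in for the kind
    # of agent graph a visual builder might produce. Names are hypothetical.
    from typing import Callable, List

    Step = Callable[[dict], dict]

    def run_pipeline(steps: List[Step], state: dict) -> dict:
        """Execute nodes in the order the visual canvas wires them."""
        for step in steps:
            state = step(state)
        return state

    def fetch_listing(state: dict) -> dict:
        # Stand-in for a node that calls an external service.
        state["listing"] = f"result for {state['query']}"
        return state

    def summarize(state: dict) -> dict:
        # Stand-in for a node that post-processes the previous node's output.
        state["summary"] = state["listing"].upper()
        return state

    final = run_pipeline([fetch_listing, summarize], {"query": "3-bed homes"})
    ```

    A drag-and-drop builder in this framing is just a way to assemble and reorder that list of steps without writing the glue code by hand, which is presumably what makes the Canva comparison apt for less technical users.
    
    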

    AJ Dellinger

  • AMD Inks Huge Compute Power Deal With OpenAI, Mirroring Nvidia’s Move

    OpenAI’s Sam Altman and AMD’s Lisa Su testify before the Senate on May 08, 2025 in Washington, DC. Photo by Chip Somodevilla/Getty Images

    Nvidia may be dominating the graphics processing unit (GPU) market right now, but its closest rival, AMD, is catching up. Today (Oct. 6), AMD announced a landmark collaboration with OpenAI that mirrors a recent deal between OpenAI and Nvidia. Under the agreement, AMD will deploy six gigawatts of computing power to OpenAI, which will in turn have the option to acquire up to 10 percent of AMD’s stock—a stake now worth roughly $33 billion after the announcement sent AMD shares soaring 24 percent.

    The partnership gives OpenAI a critical boost in computing resources as it continues to roll out new A.I. models and tools. “This partnership is a major step in building the compute capacity needed to realize A.I.’s full potential,” OpenAI CEO Sam Altman said in a statement.

    OpenAI’s first one-gigawatt deployment is scheduled for the second half of 2026 and will use AMD’s MI450 chips. This initial rollout will coincide with a vesting schedule of AMD stock for OpenAI, allowing OpenAI to acquire up to 160 million shares as deployments scale to six gigawatts. The stock grant will vest based on OpenAI hitting technical and commercial milestones. The full deal will only be executed if AMD’s stock reaches $600 per share. AMD shares currently trade at $204 apiece.
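    The share math is easy to sanity-check: 160 million shares at the $204 price cited above works out to about $32.6 billion, consistent with the “roughly $33 billion” stake valuation.

    ```python
    # Sanity check on the figures reported above.
    shares = 160_000_000          # maximum shares OpenAI can acquire
    price_per_share = 204         # AMD's post-announcement price, per the article
    stake_value = shares * price_per_share  # about $32.6 billion, i.e. roughly $33B
    ```
    
    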

    The AMD partnership is the latest in a string of blockbuster A.I. deals. Nvidia recently announced its own long-term pact with OpenAI, pledging up to $100 billion in investments over the next decade. In return, OpenAI will obtain as much as 10 gigawatts of computing power from Nvidia’s systems.

    Global venture capital funding rose 38 percent year-over-year to $97 billion in the third quarter, according to Crunchbase, with nearly half of that money flowing into A.I. ventures. Analysts say the current boom evokes the early days of the internet.

    “We still believe we are in the early innings of this spending cycle,” said Dan Ives, an analyst with Wedbush Securities, in a client note. AMD’s new deal with OpenAI marks a “1996 moment” for the tech world, he added, likening today’s A.I. momentum to the foundational years of the tech economy.

    Nvidia’s shares slipped more than 1 percent today following AMD’s announcement, but the company still holds a commanding lead with more than 90 percent of the global GPU market. Nvidia’s early success in meeting A.I.-fueled GPU demand has propelled its market cap to $4.5 trillion and fueled $41 billion in data center revenue between May and July. AMD, in comparison, has a market cap of $334 billion and brought in $3.2 billion in data center revenue in its most recent quarter.

    Lisa Su, who has led AMD as CEO since 2014, is confident that the OpenAI deal will accelerate that growth. Her company has a “clear line of sight” to achieve tens of billions of dollars in data center revenue by 2027, Su told analysts today, adding that these numbers could grow even higher. “In addition to the OpenAI opportunity, and the very significant revenue addition there, we expect to generate well over $100 billion in the next several years,” she said.

    Alexandra Tremayne-Pengelly

  • OpenAI Wants ChatGPT to Be Your Future Operating System

    OpenAI didn’t share details around any revenue-share agreements with Canva, Zillow, Spotify, and the other apps it highlighted today.

    The new SDK announcement signals a deeper commitment to working with established enterprises and app makers—and an emphasis on keeping users within ChatGPT itself. If the web and mobile eras of the past 30 years were defined by users browsing the web or being locked into a mobile app experience, OpenAI is now combining the two into its own kind of chat-driven operating system.

    Nick Turley, OpenAI’s head of product for ChatGPT, said in a briefing after the keynote that the company “never meant to build a chatbot; we meant to build a super assistant, and we got a little sidetracked.” He indicated that OpenAI is most excited about what it has achieved in natural language processing, but that the $500 billion startup will continue to experiment with different user interfaces around that.

    “Will people spend all of their time in ChatGPT? I don’t think so,” Turley said. “I can imagine you starting your day with ChatGPT,” then being guided toward other apps and websites.

    Beyond reimagining existing apps, OpenAI hopes to put itself at the center of efforts to build agents that use AI to complete tasks on a user’s behalf. The company unveiled several tools for building agents including AgentKit, a drag-and-drop interface for building advanced AI tools.

    Capturing developer mind-share is also, of course, about coding tools. At Monday’s event, OpenAI announced that Codex, a model optimized to write code, would come out of research preview and become generally available. The company also announced new Codex tools, including a way to ask questions about code and edit it via Slack messages, an SDK for the Codex model, and new analytics tools to allow companies to monitor their employees’ Codex usage.

    Lauren Goode, Will Knight

  • OpenAI’s Blockbuster AMD Deal Is a Bet on Near-Limitless Demand for AI

    The data center gold rush hinges in part on the idea that models will improve in line with the laws of scaling—so long as they’re trained on more data and compute. “There’s this basic story that scale is the thing, and that’s been true for a while, and it might still be true,” Jonathan Koomey, a visiting professor at UC Berkeley who studies computing and data center efficiency, told WIRED in September, talking about an OpenAI-Oracle deal to create three new data center sites. “That’s the bet that many of the US AI companies are making.”

    Derek Thompson, a prominent economics reporter, noted in a recent post that the tech industry is projected to spend around $400 billion on AI infrastructure this year—while demand for AI services from US consumers stands at just about $12 billion a year, according to one study.

    OpenAI had a strategic relationship with AMD prior to today’s announcement. At AMD’s Advancing AI event in Silicon Valley in June, Altman briefly joined Su on stage. Su said that AMD has been getting feedback from customers for a few years as the company has been designing the upcoming MI400 series of chips, and that OpenAI is one of those marquee customers.

    Altman noted on stage that the industry’s move towards reasoning models has been putting pressure on AI model makers in terms of efficiency and long-context rollouts, and that OpenAI needs “tons of compute, tons of memory, and tons of CPUs,” in addition to the Nvidia GPUs that the generative AI industry is so reliant on.

    Su, at that event, described Altman as a “great friend” and an “icon in AI.”

    Lauren Goode contributed to this report.

    Will Knight

  • Inside Microsoft’s AI bet with CTO Kevin Scott at Disrupt 2025 | Disrupt 2025

    Microsoft CTO Kevin Scott joins the Disrupt Stage at TechCrunch Disrupt 2025 to share how one of the world’s largest technology companies is navigating the AI revolution and what it means for startups and the future of innovation. From its landmark partnership with OpenAI to reshaping enterprise and consumer products with AI, Scott will pull back the curtain on where Microsoft sees the biggest opportunities.

    This is not a session to miss. Lean in on one of the biggest discussions around AI from an enterprise perspective. Register now to save up to $444 on your pass — or up to 30% on group passes.

    From Microsoft to startups: lessons from a 20-year career

    He’ll also dive into how startups can strategically build on Microsoft’s platforms — from Azure AI to developer tools — and what’s next in the high-stakes race to define the future of artificial intelligence.

    As one of the most influential technology leaders in the world, Scott brings more than two decades of experience at Microsoft, LinkedIn, Google, and AdMob. Beyond his role as CTO, he is also a podcast host (Behind the Tech), an author (Reprogramming the American Dream), and an active investor and advisor.

    Catch Kevin Scott live and save up to $444 on your Disrupt Pass

    Join more than 10,000 founders, investors, and operators gathering at TechCrunch Disrupt 2025 for a must-see session on the future of AI in consumer tech, commerce, and brand-driven innovation. On the Disrupt Stage, Kevin Scott will share his vision for how AI will transform industries, empower builders, and shape the next decade of innovation. Don’t miss your chance to save — register today to get up to $444 off your pass, or up to 30% off when you bring your team.

    TechCrunch Events
