ReportWire

Tag: model behavior

  • The AI Data Center Boom Is Warping the US Economy

    The amount of capital pouring into AI data center projects is staggering. Last week, Microsoft, Alphabet, Meta, and Amazon reported that their 2025 capital expenditures would total roughly $370 billion, and they expect that number to keep rising in 2026. The biggest spender last quarter was Microsoft, which put nearly $35 billion into data centers and other investments, equivalent to roughly 45 percent of its quarterly revenue.

    Rarely, if ever, has a single technology absorbed this much money this quickly. Warnings of an AI bubble are getting louder every day, but whether or not a crash eventually happens, the frenzy is already reshaping the US economy. Harvard economist Jason Furman estimates that investment in data centers and information processing technology accounted for nearly all of US GDP growth in the first half of 2025.

    Today, we’re looking at how data centers are impacting three crucial areas: public markets, jobs, and energy.

    Cashing Out

    The US stock market is booming, mostly thanks to AI. Since ChatGPT launched in November 2022, AI-related stocks have accounted for 75 percent of S&P 500 returns and 80 percent of earnings growth, according to JPMorgan’s Michael Cembalest. The question now is whether that growth is sustainable as tech firms continue to spend heavily on AI infrastructure.

    At the start of this year, tech giants were financing their AI projects mostly with cash they had on hand. As financial journalist Derek Thompson pointed out, the ten largest US public companies kicked off 2025 with historically high free cash flow margins. In other words, their businesses were so profitable that they had billions of dollars sitting around to put toward Nvidia GPUs and data center buildouts.

    That trend has largely continued through 2025. Alphabet, for example, told investors last week that its capital expenditures this year would be as much as $93 billion, an increase from its previous estimate of $75 billion. But it also reported that revenue was up 33 percent year over year. Put another way, Silicon Valley is both spending more and earning more. That means everything is fine, right?

    Not exactly. For one thing, tech giants appear to be using accounting tricks to make their financials look rosier than they really are. A significant portion of AI investment flows to Nvidia, which releases new versions of its GPUs roughly every two years. But companies like Microsoft and Alphabet currently estimate that their chips will last six years. If they need to upgrade sooner to stay competitive, which seems likely, that could wind up eating into their profits and weakening their overall performance.
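
    To see why the useful-life estimate matters, here is a minimal sketch of straight-line depreciation; the $10 billion fleet cost and the method are illustrative assumptions, not figures from any company’s filings. Stretching the same GPU outlay over six years instead of three roughly halves the expense that hits the income statement each year.

    ```python
    # Illustrative only: straight-line depreciation of a hypothetical
    # $10B GPU fleet under two assumed useful-life estimates.

    def annual_depreciation(cost: float, useful_life_years: int) -> float:
        """Straight-line depreciation: the same expense every year."""
        return cost / useful_life_years

    fleet_cost = 10e9  # hypothetical $10 billion GPU purchase

    six_year = annual_depreciation(fleet_cost, 6)    # ~$1.67B expensed per year
    three_year = annual_depreciation(fleet_cost, 3)  # ~$3.33B expensed per year

    print(f"Six-year schedule:   ${six_year / 1e9:.2f}B per year")
    print(f"Three-year schedule: ${three_year / 1e9:.2f}B per year")
    # The longer schedule defers expense, flattering reported profit
    # until the chips actually have to be replaced.
    print(f"Annual difference:   ${(three_year - six_year) / 1e9:.2f}B")
    ```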

    Louise Matsakis

  • OpenAI Is Preparing to Launch a Social App for AI-Generated Videos

    OpenAI is preparing to launch a stand-alone app for its video generation AI model Sora 2, WIRED has learned. The app, which features a vertical video feed with swipe-to-scroll navigation, appears to closely resemble TikTok—except all of the content is AI-generated. There’s a For You–style page powered by a recommendation algorithm. On the right side of the feed, a menu bar gives users the option to like, comment, or remix a video.

    Users can create video clips up to 10 seconds long using OpenAI’s next-generation video model, according to documents viewed by WIRED. There is no option to upload photos or videos from a user’s camera roll or other apps.

    The Sora 2 app has an identity verification feature that allows users to confirm their likeness. If a user has verified their identity, they can use their likeness in videos. Other users can also tag them and use their likeness in clips. For example, someone could generate a video of themselves riding a roller coaster at a theme park with a friend. Users will get a notification whenever their likeness is used—even if the clip remains in draft form and is never posted, sources say.

    OpenAI launched the app internally last week. So far, it’s received overwhelmingly positive feedback from employees, according to documents viewed by WIRED. Employees have been using the tool so frequently that some managers have joked it could become a drain on productivity.

    OpenAI declined to comment.

    OpenAI appears to be betting that the Sora 2 app will let people interact with AI-generated video in a way that fundamentally changes their experience of the technology—similar to how ChatGPT helped users realize the potential of AI-generated text. Internally, sources say, there’s also a feeling that President Trump’s on-again, off-again deal to sell TikTok’s US operations has given OpenAI a unique opportunity to launch a short-form video app—particularly one without close ties to China.

    OpenAI officially launched Sora in December of last year. Initially, people could only access it via a web page, but it was soon incorporated directly into the ChatGPT app. At the time, the model was among the most advanced AI video generators available, though OpenAI noted it had some limitations. For example, it didn’t seem to fully understand physics and struggled to produce realistic action scenes, especially in longer clips.

    OpenAI’s Sora 2 app will compete with new AI video offerings from tech giants like Meta and Google. Last week, Meta introduced a new feed in its Meta AI app called Vibes, which is dedicated exclusively to creating and sharing short AI-generated videos. Earlier this month, Google announced that it was integrating a custom version of its latest video generation model, Veo 3, into YouTube.

    TikTok, on the other hand, has taken a more cautious approach to AI-generated content. The video app recently redefined its rules around what kind of AI-generated videos it allows on the platform. It now explicitly bans AI-generated content that’s “misleading about matters of public importance or harmful to individuals.”

    The Sora 2 app often refuses to generate videos because of copyright safeguards and other filters, sources say. OpenAI is currently fighting a series of lawsuits over alleged copyright infringement, including a high-profile case brought by The New York Times. The Times case centers on allegations that OpenAI trained its models on the paper’s copyrighted material.

    OpenAI is also facing mounting criticism over child safety issues. On Monday, the company released new parental controls, including the option for parents and teenagers to link their accounts. The company also said that it is working on an age-prediction tool that could automatically route users believed to be under the age of 18 to a more restricted version of ChatGPT that doesn’t allow for romantic interactions, among other things. It is not known what age restrictions might be incorporated into the Sora 2 app.


    This is an edition of the Model Behavior newsletter. Read previous newsletters here.

    Zoë Schiffer, Louise Matsakis

  • Why One VC Thinks Quantum Is a Bigger Unlock Than AGI

    Depending on how you think about it, there are half a dozen or more approaches to the hardware. And I became excited that, within the hardware landscape, the neutral-atom approach had high potential. So we backed [Thompson’s] company, called Logiqal.

    What happens if you’re right?

    I’m a venture investor, and we believe in convexity—taking risks on things that most likely won’t work, but if they do work could be 500x in value.

    It’s a real earth-moving innovation if quantum computers find the path toward success. You unlock these thinking engines, these computational engines that can run the future of material sciences, the future of pharmaceutical innovation, the future of logistics, the future of financial markets in ways that we’ve never seen before.

    You can see a future where pharmaceutical advancements could extend life by 20 to 30 years. You could see changes in material sciences where we could invent new products. It could help us get to Mars! That is what quantum computing unlocks.
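
    To make the convexity math concrete, here is a hedged illustration: the 2 percent hit rate below is an invented assumption, and only the 500x payoff figure comes from the interview.

    ```python
    # Illustrative only: expected value of a convex venture bet.
    p_success = 0.02       # assumed hit rate; most such bets go to zero
    payoff_multiple = 500  # the "500x in value" figure from the interview

    expected_multiple = p_success * payoff_multiple  # failed bets return 0
    print(f"Expected return: {expected_multiple:.0f}x the investment")
    # Even when 98 percent of bets fail, one 500x outcome makes the
    # expected return 10x: the upside, not the hit rate, drives the math.
    ```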

    The way you talk about quantum computing sounds a lot like how many AI enthusiasts talk about artificial general intelligence.

    In many ways, quantum is today where AI was back in 2015: a lot of really big research and science projects that are just starting to have practical applications rather than pure research.

    You mentioned that it’s hard to fake being a quantum expert. I would posit that it is not as hard to fake being an AI expert. How do you decide who to back?

    There are so many companies being built and born in AI that, when you extrapolate them out 5 or 10 years, will not have a true, genuine moat beyond brand or speed. Brand and speed are rarely strong enough moats to build a generational company.

    I’ll give you an example. BrightAI creates stickers that are roughly this big [she makes a circle with her fist]. The company puts a sticker on every telephone pole, on every HVAC system, on every water line system, and then observes them for long periods of time, 5, 10, 15, 20 years [and flags potential issues]. That’s a pretty good moat. You’re not ripping all those stickers off.

    For the most part, the value in AI accrues to the incumbents. Penny, my cofounder, is on the board of Microsoft. If you think about it, Microsoft and Google: Google has 3 billion users, Microsoft has a billion users. They can launch a product that is OK, not excellent, and they still have pricing power and distribution power. And so we very much think about that world. When the elephants dance, don’t be an ant.

    How do you use AI?

    For everything. There’s nothing I don’t use AI for, nothing. For every question. I mean, today I probably used it 25 times.

    It’s replaced Google for you?

    Everything. Everything. Deep research, sourcing. Today I was looking up what jobs are declining fastest in the world. Truly, I would say it’s not a dozen times a day. It’s dozens of times a day.


    This is an edition of the Model Behavior newsletter. Read previous newsletters here.

    Zoë Schiffer

  • OpenAI’s Teen Safety Features Will Walk a Thin Line

    OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies whether a user is under 18 years old and routes them to an “age-appropriate” system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if a user’s parents are unreachable, the system may contact the authorities.
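
    Based only on the flow described above, here is a hypothetical sketch of that escalation logic. Every name, signal, and threshold below is invented for illustration; OpenAI has not published an implementation.

    ```python
    # Hypothetical sketch of the escalation flow described above.
    # All names and signals are invented; OpenAI's design is unpublished.

    def route_chat(age_estimate: int, distress_detected: bool,
                   parents_reachable: bool, imminent_danger: bool) -> list[str]:
        actions = []
        if age_estimate < 18:  # stand-in for the age-prediction system
            actions.append("use age-appropriate model (graphic content blocked)")
        else:
            actions.append("use standard model")
        if distress_detected:
            if parents_reachable:
                actions.append("notify the user's parents")
            elif imminent_danger:
                actions.append("contact the authorities")
        return actions

    # Example: a distressed 16-year-old whose parents cannot be reached.
    print(route_chat(16, distress_detected=True,
                     parents_reachable=False, imminent_danger=True))
    ```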

    In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.

    “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

    While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls so that parents can link their child’s account to their own, allowing them to manage the conversations and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the times of day their children can use ChatGPT.

    The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.

    At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.

    “A Sexbot Avatar in ChatGPT”

    From the sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but one that can quickly veer into disastrous sycophancy. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.

    In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”

    Kylie Robison

  • Should AI Get Legal Rights?

    In one paper Eleos AI published, the nonprofit argues for evaluating AI consciousness using a “computational functionalism” approach. A similar idea was once championed by none other than the philosopher Hilary Putnam, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can then figure out whether other computational systems, such as a chatbot, have indicators of sentience similar to those of a human.

    Eleos AI said in the paper that “a major challenge in applying” this approach “is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems.”

    Model welfare is, of course, a nascent and still-evolving field. It has plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog post about “seemingly conscious AI.”

    “This is both premature, and frankly dangerous,” Suleyman wrote, referring generally to the field of model welfare research. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

    Suleyman wrote that “there is zero evidence” today that conscious AI exists. He included a link to a paper that Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has “indicator properties” of consciousness. (Suleyman did not respond to a request for comment from WIRED.)

    I chatted with Long and Campbell shortly after Suleyman published his blog. They told me that, while they agreed with much of what he said, they don’t believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman referenced are the exact reasons why they want to study the topic in the first place.

    “When you have a big, confusing problem or question, the one way to guarantee you’re not going to solve it is to throw your hands up and be like ‘Oh wow, this is too complicated,’” Campbell says. “I think we should at least try.”

    Testing Consciousness

    Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell think that AI is conscious today, and they also aren’t sure it ever will be. But they want to develop tests that would allow us to prove it.

    “The delusions are from people who are concerned with the actual question, ‘Is this AI conscious?’ and having a scientific framework for thinking about that, I think, is just robustly good,” Long says.

    But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report that showed Claude Opus 4 may take “harmful actions” in extreme circumstances, like blackmailing a fictional engineer to prevent it from being shut off.

    Kylie Robison
