ReportWire

Tag: openai

  • OpenAI’s New Social Network Is Reportedly TikTok If It Was Just an AI Slop Feed


    Welcome to the age of anti-social media. According to a report from Wired, OpenAI is planning to launch a standalone app for its video generation tool Sora 2 that will include a TikTok-style feed letting people scroll through entirely AI-generated videos. The quixotic effort follows Meta’s recent launch of an AI-slop-only feed on its Meta AI app, which was met with nearly universal negativity.

    Per Wired, the Sora 2 app will feature the familiar swipe-up-to-scroll navigation found on most vertical video platforms like TikTok, Instagram Reels, and YouTube Shorts. It’ll also use a personalized recommendation algorithm to feed users content that might appeal to their interests. Users will be able to like, comment, or “remix” a post—all very standard social media fare.

    The big difference is that all of the content on the platform will be AI-generated via OpenAI’s video generation model that can take text, photos, or existing video and AI-ify it. The videos will be up to 10 seconds long, presumably because that’s about how long Sora can hold itself together before it starts hallucinating weird shit. (The first version of Sora allows videos up to 60 seconds, but struggles to produce truly convincing and continuous imagery for that long.) According to Wired, there is no way to directly upload a photo or video and post it unedited.

    Interestingly, OpenAI has figured out how to work a social element into the app, albeit in a way that has a sort of inherent creepiness to it. Per Wired, the Sora 2 app will ask users to verify their identity via facial recognition to confirm their likeness. After confirming their identity, their likeness can be used in videos. Not only can verified users insert themselves into a video, but other users can tag them and use their likeness in their own videos. Users will reportedly get notified any time their likeness is used, even if the generated video is saved to drafts and never posted.

    How that will be implemented when and if the app launches to the public, we’ll have to see. But as reported, it seems like an absolute nightmare. Basically, the only thing the federal government has managed to find any consensus on when it comes to regulating AI is offering some limited protections against non-consensual deepfakes. Yet as described, one of Sora 2’s features seems to be letting your likeness be manipulated by others. Surely there will be some sort of opt-out available, or the ability to restrict who can use your likeness, right?

    According to Wired, there will be some protections around the type of content Sora 2 will allow users to create. It is trained to refuse to violate copyright, for instance, and will reportedly have filters in place to restrict certain types of videos from being produced. But will it actually offer sufficient protection to people? OpenAI made a point of emphasizing how it added safeguards to the original Sora model to prevent it from generating nudity and explicit images, but tests of the system managed to get it to create prohibited content anyway, at a low but not zero rate.

    Gizmodo reached out to OpenAI to confirm its plans for the app, but did not receive a response at the time of publication. There has been speculation for months about the launch of Sora 2, with some expectation that it would be announced at the same time as GPT-5. For now, it and its accompanying app remain theoretical, but there is at least one good idea hidden in the concept of the all-AI social feed, albeit probably not in the way OpenAI intended it: Keep AI content quarantined.


    AJ Dellinger

    Source link

  • OpenAI Rolls Out ChatGPT’s Ability to Buy Stuff for You


    OpenAI just made it possible to buy things directly from ChatGPT.

    Starting today, all ChatGPT users in the U.S. can use a new feature called Instant Checkout to purchase items from Etsy sellers without leaving the chat. OpenAI says more than a million Shopify merchants, including Glossier, SKIMS, and Spanx, are coming soon.

    For now, Instant Checkout only supports single-item purchases, but OpenAI plans to add multi-item carts and expand to more merchants and regions.

    The company also announced it’s open-sourcing the technology that powers Instant Checkout, the Agentic Commerce Protocol. Developed with payment processor Stripe, the protocol is meant to serve as a standard for AI-driven shopping and to make it easier for developers to integrate their stores with ChatGPT.

    This move puts OpenAI one step closer to its bigger goal of creating a fully functional AI agent. The industry as a whole is racing to launch so-called AI agents, virtual assistants that can theoretically handle tasks like writing reports, booking travel, shopping online, and scheduling appointments.

    Just last week, OpenAI rolled out ChatGPT Pulse, which conducts relevant research for users and connects to their email, calendars, and other apps to deliver a daily morning briefing. Another feature introduced this year, ChatGPT Agent, also links to users’ apps but still needs explicit prompts to carry out tasks.

    And in January, the company unveiled OpenAI Operator, a tool that can fill out online forms and place orders on its own—though shoppers still have to manually enter payment info at checkout.

    But one thing is becoming clear as the age of AI agents approaches: they’ll need access to a lot of our personal data to work properly, if they work at all.

    How Instant Checkout works

    A lot of ChatGPT users already turn to the chatbot for online shopping recommendations.

    Now, when a user asks something like “gift ideas for a housewarming” or “best running shoes under $100,” products that support Instant Checkout will display a “Buy” option. Users who tap on “Buy” will then confirm their order, shipping, and payment details directly in chat. Those with a ChatGPT subscription can pay with the card already on file or choose another payment method.

    The seller then handles the order, shipping, and fulfillment like they normally would. ChatGPT just acts as a middleman, providing the seller with the buyer’s information.
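    The handoff described here, where the assistant assembles the buyer's details and the merchant's existing systems take over, can be sketched in a few lines of Python. Everything below is illustrative: the field names and the Merchant stand-in are assumptions, not OpenAI's actual Instant Checkout schema.

```python
# Illustrative sketch of the "middleman" handoff described above.
# All field names and the Merchant class are hypothetical; they are
# not taken from OpenAI's actual Instant Checkout implementation.

def build_checkout_payload(item_id, buyer):
    """Assemble the order details the assistant would pass along."""
    return {
        "item_id": item_id,
        "shipping_address": buyer["shipping_address"],
        "payment_token": buyer["payment_token"],  # card details stay tokenized
    }

class Merchant:
    """Stand-in for the seller's existing order system."""
    def __init__(self):
        self.orders = []

    def place_order(self, payload):
        # The merchant, not the assistant, records and fulfills the order.
        self.orders.append(payload)
        return {"status": "confirmed", "order_number": len(self.orders)}

buyer = {"shipping_address": "123 Main St", "payment_token": "tok_abc123"}
merchant = Merchant()
receipt = merchant.place_order(build_checkout_payload("ceramic-mug-42", buyer))
print(receipt["status"])  # confirmed
```

    The design point is visible in place_order: the assistant never fulfills anything itself, it only forwards a payload to systems the seller already runs.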

    The service is free for users, but sellers will have to pay a small fee on completed purchases. OpenAI also says that items supporting Instant Checkout won’t be given preference in product results and won’t impact its recommendations overall.

    However, when ranking sellers of the same product, “whether Instant Checkout is enabled” will be considered to “optimize the user experience.”


    Bruce Gil

    Source link

  • OpenAI will reportedly release a TikTok-like social app alongside Sora 2


    In a development that should surprise no one, OpenAI is preparing to release a standalone social app powered by its upcoming Sora 2 video model, Wired reports. The app reportedly “closely resembles” TikTok, with a vertical video feed and swipe-to-scroll navigation. The catch? It will only feature AI-generated content; there’s apparently no option for the user to upload photos or videos from their phone’s camera roll.

    Wired adds that OpenAI will limit Sora 2 to generating clips that are 10 seconds long or shorter inside the app. It’s unclear what the model’s limit will be outside of the app. (TikTok, by contrast, started with a 15-second cap and has since raised it considerably.) The new app is also said to include an identity verification tool. Should a user take advantage of the feature, Sora 2 will be able to use their likeness in videos it generates. In turn, that means other people will be able to tag those users and use their likeness when they go to remix one of their videos. As a safety precaution, OpenAI will push a notification to users whenever their likeness is used by someone else, even in situations where someone makes a video but never posts it to the app’s feed.

    According to Wired, the software will refuse to generate some videos due to copyright restrictions. However, it’s not clear just how robust these protections will be: OpenAI will reportedly require rights holders to opt out of their content appearing in videos Sora 2 generates.

    As for why OpenAI would release a social media app, Wired suggests the company saw an opportunity after President Trump repeatedly extended the deadline for ByteDance to sell TikTok’s US operations. By adding a social component to Sora, OpenAI may also be hoping to dissuade people from trying other models, since leaving its new app would mean abandoning whatever community forms around it.


    Igor Bonifacic

    Source link

  • OpenAI Adds Parental Controls to ChatGPT for Teen Safety


    OpenAI said Monday it’s adding parental controls to ChatGPT that are designed to provide teen users of the popular platform with a safer and more “age-appropriate” experience.

    The company is taking action as AI chatbot safety for young users hits the headlines. The technology’s dangers have recently been highlighted by a number of cases in which teenagers took their own lives after interacting with ChatGPT.

    In the United States, the Federal Trade Commission has even opened an inquiry into several tech companies about the potential harms to children and teenagers who use their AI chatbots as companions.

    In a blog post published Monday, OpenAI outlined the new controls for parents. Here is a breakdown:

    Getting started

    The parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them.

    To get started, a parent or guardian needs to send an email or text message to invite a teen to connect their accounts. Or a teenager can send an invite to a parent. Users can send a request by going into the settings menu and then to the “Parental controls” section.

    Teens can unlink their accounts at any time, but parents will be notified if they do.

    Automatic safeguards

    Once the accounts are linked, the teen account will get some built-in protections, OpenAI said.

    Teen accounts will “automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep their experience age-appropriate,” the company said.

    Parents can choose to turn these filters off, but teen users don’t have the option.

    OpenAI warns that such guardrails are “not foolproof and can be bypassed if someone is intentionally trying to get around them.” It advised parents to talk with their children about “healthy AI use.”

    Adjusting settings

    Parents are getting a control panel where they can adjust a range of settings as well as switch off the restrictions on sensitive content mentioned above.

    For example, does your teen stay up way past bedtime to use ChatGPT? Parents can set a quiet time when the chatbot can’t be used.

    Other settings include turning off the AI’s memory so conversations can’t be saved and won’t be used in future responses; turning off the ability to generate or edit images; turning off voice mode; and opting out of having chats used to train ChatGPT’s AI models.

    Get notified

    OpenAI is also being more proactive when it comes to letting parents know that their child might be in distress.

    It’s setting up a new notification system to inform them when something might be “seriously wrong” and a teen user might be thinking about harming themselves.

    A small team of specialists will review the situation and, in the rare case that there are “signs of acute distress,” they’ll notify parents by email, text message and push alert on their phone — unless the parent has opted out.

    OpenAI said it will protect the teen’s privacy by only sharing the information needed for parents or emergency responders to provide help.

    “No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent,” the company said.

    Copyright 2025. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. 


    Associated Press

    Source link

  • OpenAI teams up with Stripe for agentic commerce


    OpenAI today unveiled Instant Checkout, allowing U.S. users of ChatGPT — on free, Plus and Pro tiers — to make purchases directly in the chat interface from participating merchants. The rollout begins with U.S. Etsy sellers, while support for over a million Shopify merchants such as Glossier, SKIMS, Spanx and Vuori is expected soon. OpenAI also […]


    Bank Automation News, AI-assisted

    Source link

  • OpenAI takes on Google, Amazon with new agentic shopping system | TechCrunch


    ChatGPT users in the U.S. can now make Etsy and Shopify purchases within conversations, marking a next step towards the future of online shopping – both for consumers and the platforms that control product discovery, recommendation, and payments. In other words, OpenAI might be on the path to reshaping who holds power in e-commerce. 

    OpenAI’s new “Instant Checkout” feature is available to ChatGPT Pro, Plus, and Free logged-in users buying from U.S.-based Etsy sellers, with more than 1 million Shopify merchants like Glossier, Skims, Spanx, and Vuori “coming soon,” per OpenAI.  

    Instant Checkout builds on previous shopping features on ChatGPT that surfaced relevant products, images, reviews, prices, and direct links to merchants in response to shopping questions like “what should I get my friend who loves ceramics?” or “best sneakers to wear to the office.” Now, instead of having to leave the conversation, users can just tap “Buy” to confirm their order, shipping, and payment details (options include Apple Pay, Google Pay, Stripe, or credit card) to complete the purchase.  

    Last year, Perplexity introduced a similar in-chat shopping and payments feature. Microsoft also offers merchants the ability to create in-chat storefront capabilities with the Copilot Merchant Program. 

    This type of frictionless experience has the potential to spark a new movement in how people shop online – one that moves away from search engines like Google and e-commerce platforms like Amazon towards conversational agents with curated recommendations, comparisons, and easy checkout experiences. 

    It’s also setting the stage for new power brokers to emerge in e-commerce. Google and Amazon have long been the gatekeepers for retail discovery. If more purchases start inside AI chatbots, the firms behind them will suddenly have more control over what products are surfaced and what commissions or fees they charge.  

    Both Amazon and Google have previously leveraged their dominance to favor their own products or preferred partners, pushing down competitors in search results or charging steep fees to sellers simply to maintain visibility. OpenAI said in a blog post that the product results it surfaces are “organic and unsponsored, ranked purely on relevance to the user,” and that it will charge merchants a “small fee” for completed purchases.  


    TechCrunch has reached out to OpenAI for more information. 

    Along with OpenAI’s introduction of in-chat checkout, the AI firm also noted that it will open-source its Agentic Commerce Protocol (ACP), the technology built with Stripe that powers Instant Checkout, so that other merchants and developers can integrate agentic checkout. 

    “Stripe is building the economic infrastructure for AI,” Will Gaybrick, president of technology and business at Stripe, said in a statement. “That means re-architecting today’s commerce systems and creating new AI-powered experiences for billions of people.” 

    While some may balk at handing ChatGPT private payment information, the company says orders, payments, and fulfillment are handled by the merchant using their existing systems. ChatGPT merely acts as an agent, an intermediary that can securely pass along information between user and merchant.  

    Open-sourcing ACP makes it easier for merchants to integrate with ChatGPT, widening the adoption of AI chatbots that function as a virtual storefront. It also expands OpenAI’s potential control as a gatekeeper for retail discovery and checkout, and could position the firm to be the de facto architect of the AI commerce ecosystem.  
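    One practical consequence of an open protocol is that a merchant can check an agent's checkout request against a shared message shape before acting on it. The shape below is invented purely for illustration; the real ACP schema is whatever OpenAI and Stripe publish in the open-sourced spec.

```python
# Toy sketch of an "agentic commerce" message check. The field names
# (line_items, buyer, payment) are hypothetical examples, not the
# actual Agentic Commerce Protocol schema.

def validate_checkout_request(msg):
    """Reject malformed agent requests before they reach order systems."""
    required = {"line_items", "buyer", "payment"}
    missing = required - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

request = {
    "line_items": [{"sku": "sneaker-99", "quantity": 1}],
    "buyer": {"email": "shopper@example.com"},
    "payment": {"provider": "stripe", "token": "tok_visa"},
}
print(validate_checkout_request(request))  # True
```

    A shared, open schema is what makes this kind of validation portable: any merchant or agent implementing the spec can interoperate without bespoke integrations.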

    That would put it in tension with Google yet again, as the tech giant has recently launched its own open protocol for purchases initiated by AI agents, dubbed Agent Payments Protocol (AP2).


    Rebecca Bellan

    Source link

  • OpenAI Is Preparing to Launch a Social App for AI-Generated Videos


    OpenAI is preparing to launch a stand-alone app for its video generation AI model Sora 2, WIRED has learned. The app, which features a vertical video feed with swipe-to-scroll navigation, appears to closely resemble TikTok—except all of the content is AI-generated. There’s a For You–style page powered by a recommendation algorithm. On the right side of the feed, a menu bar gives users the option to like, comment, or remix a video.

    Users can create video clips up to 10 seconds long using OpenAI’s next-generation video model, according to documents viewed by WIRED. There is no option to upload photos or videos from a user’s camera roll or other apps.

    The Sora 2 app has an identity verification feature that allows users to confirm their likeness. If a user has verified their identity, they can use their likeness in videos. Other users can also tag them and use their likeness in clips. For example, someone could generate a video of themselves riding a roller coaster at a theme park with a friend. Users will get a notification whenever their likeness is used—even if the clip remains in draft form and is never posted, sources say.

    OpenAI launched the app internally last week. So far, it’s received overwhelmingly positive feedback from employees, according to documents viewed by WIRED. Employees have been using the tool so frequently that some managers have joked it could become a drain on productivity.

    OpenAI declined to comment.

    OpenAI appears to be betting that the Sora 2 app will let people interact with AI-generated video in a way that fundamentally changes their experience of the technology—similar to how ChatGPT helped users realize the potential of AI-generated text. Internally, sources say, there’s also a feeling that President Trump’s on-again, off-again deal to sell TikTok’s US operations has given OpenAI a unique opportunity to launch a short-form video app—particularly one without close ties to China.

    OpenAI officially launched Sora in December of last year. Initially, people could only access it via a web page, but it was soon incorporated directly into the ChatGPT app. At the time, the model was among the most advanced AI video generators, though OpenAI noted it had some limitations. For example, it didn’t seem to fully understand physics and struggled to produce realistic action scenes, especially in longer clips.

    OpenAI’s Sora 2 app will compete with new AI video offerings from tech giants like Meta and Google. Last week, Meta introduced a new feed in its Meta AI app called Vibes, which is dedicated exclusively to creating and sharing short AI-generated videos. Earlier this month, Google announced that it was integrating a custom version of its latest video generation model, Veo 3, into YouTube.

    TikTok, on the other hand, has taken a more cautious approach to AI-generated content. The video app recently redefined its rules around what kind of AI-generated videos it allows on the platform. It now explicitly bans AI-generated content that’s “misleading about matters of public importance or harmful to individuals.”

    Oftentimes, the Sora 2 app refuses to generate videos due to copyright safeguards and other filters, sources say. OpenAI is currently fighting a series of lawsuits over alleged copyright infringements, including a high-profile case brought by The New York Times. The Times case centers on allegations that OpenAI trained its models on the paper’s copyrighted material.

    OpenAI is also facing mounting criticism over child safety issues. On Monday, the company released new parental controls, including the option for parents and teenagers to link their accounts. The company also said that it is working on an age-prediction tool that could automatically route users believed to be under the age of 18 to a more restricted version of ChatGPT that doesn’t allow for romantic interactions, among other things. It is not known what age restrictions might be incorporated into the Sora 2 app.


    This is an edition of the Model Behavior newsletter.


    Zoë Schiffer, Louise Matsakis

    Source link

  • ChatGPT introduces new parental controls for teens




    ChatGPT introduces new parental controls for teens – CBS News



    Parents can now connect their ChatGPT accounts to their children’s and get notifications when sensitive issues are raised. Jo Ling Kent has more from Los Angeles.

    Source link

  • How South Korea plans to best OpenAI, Google, others with homegrown AI  | TechCrunch


    From tech giants to startups, South Korean players are developing large language models tailored to their own language and culture, ready to compete with global heavyweights like OpenAI and Google. 

    Last month, the nation launched its most ambitious sovereign AI initiative to date, pledging ₩530 billion (about $390 million) to five local companies building large-scale foundational models.  

    The move underscores Seoul’s desire to cut reliance on foreign AI technologies, hoping to strengthen national security and keep a tighter control over data in the AI era.  

    The organizations picked by the Ministry of Science and ICT to compete were LG AI Research, SK Telecom, Naver Cloud, NC AI, and the startup Upstage.

    Every six months, the government will review the first cohort’s progress, cut underperformers, and continue funding the frontrunners until just two remain to lead the country’s sovereign AI drive. 

    Each player is bringing a different advantage to South Korea’s AI race. TechCrunch spoke with several of the selected companies about how they plan to take on OpenAI, Google, Anthropic and the rest on their home turf. NC AI declined to comment.

    LG AI Research: Exaone 

    LG AI Research, the R&D unit of South Korean giant LG Group, offers Exaone 4.0, a hybrid reasoning AI model. The latest version blends broad language processing with the advanced reasoning features first introduced in the company’s earlier Exaone Deep model. 


    Exaone 4.0 (32B) already scores reasonably well against competitors on Artificial Analysis’s Intelligence Index benchmark (as does Upstage’s Solar Pro2). But it plans to improve and move up the ranks through its deep access to real-world industry data ranging from biotech to advanced materials and manufacturing.  

    It’s coupling that data with a focus on refining the data before it is fed to the models for training. Instead of chasing sheer scale, LG wants to make the entire process more intelligent, so its AI can deliver real, practical value that goes beyond what general-purpose models can offer. “This is our fundamental approach,” co-head Honglak Lee told TechCrunch. 

    LG is improving its models via familiar tactics: offering them through APIs, then using the real-world data generated by users of those services to train the model to improve.  

    “As LG’s models improve, our partners can deliver better services, which in turn generate greater economic value and even richer data,” he said. 

    However, instead of chasing massive GPU clusters, LG AI Research is focusing on efficiency, getting the most out of every chip, and creating industry-specific models, he mentioned. The goal isn’t to outspend the global giants but to outsmart them with high-performing, yet more efficient, AI. 

    SK Telecom: A.X

    South Korea’s telco giant SK Telecom (SKT) launched its personal AI agent service A. (pronounced A-dot) back in late 2023 and rolled out its new large language model, A.X, this July.  

    Built on top of Qwen 2.5, the Chinese open-source model from Alibaba Cloud, A.X 4.0 comes in two versions: a hefty 72-billion-parameter model and a lighter 7B one.  

    SK says that A.X 4.0 processes Korean inputs about 33% more efficiently than GPT-4o, underscoring its local-language edge. (Comparison data for OpenAI’s GPT-5 is not available.) SKT also open-sourced its A.X 3.1 models earlier this summer. Meanwhile, the A. service offers features like AI call summaries and auto-generated notes; as of August 2025, it had already pulled in about 10 million subscribers. 
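    For readers wondering what a claim like "33% more efficient" means in practice: tokenizer efficiency is usually measured as tokens consumed for the same text, and fewer tokens means lower cost and latency per request. A small sketch of the arithmetic, with invented numbers (these are not measurements of A.X or GPT-4o):

```python
# The arithmetic behind "processes Korean about 33% more efficiently."
# Token counts below are invented for illustration, not measurements
# of A.X 4.0 or GPT-4o.

def efficiency_gain(baseline_tokens, model_tokens):
    """Fraction of tokens saved relative to the baseline tokenizer."""
    return (baseline_tokens - model_tokens) / baseline_tokens

# Same hypothetical Korean passage, two tokenizers:
baseline = 150  # tokens used by the baseline model's tokenizer
model = 100     # tokens used by the more Korean-efficient tokenizer

print(f"{efficiency_gain(baseline, model):.0%}")  # 33%
```

    Under these assumed counts, the model spends two-thirds as many tokens on the same passage, which is where a roughly one-third efficiency figure would come from.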

    SK’s edge is its versatility, because it has access to information from its telecom network ranging from navigation to taxi-hailing. 

    “SK Telecom’s role is to act as a bridge between cutting-edge model research and real-world impact. With our telecom infrastructure, extensive user base and proven service like A., we bring AI directly into everyday life, whether in customer service, mobility, or manufacturing,” Taeyoon Kim, head of the foundation model office at SK Telecom, told TechCrunch. 

    SK Telecom is also investing in AI infrastructure, using GPUaaS, South Korea’s largest GPU-based service, and building a new hyperscale AI data center with AWS. Whatever it lacks, it is partnering to obtain.  

    “We’re building a full-stack ecosystem with Korean AI chipmaker Rebellions, securing trusted data partnerships through work with the government and universities, and fostering a global research network,” said Kim. “That includes projects like our collaboration with MIT (MGAIC), which applies foundation models to advanced manufacturing and battery and semiconductor innovation.” 

    Naver Cloud: HyperCLOVA X 

    Naver Cloud, the cloud services arm of South Korea’s leading internet company, introduced its large language model, HyperCLOVA, in 2021. Two years later, it unveiled an upgraded version, HyperCLOVA X, along with new products powered by the technology: CLOVA X, an AI chatbot, and Cue, a generative AI-driven search engine positioned as a rival to Microsoft’s Copilot-enhanced Bing and Google’s AI Overviews. This year, it also unveiled its multimodal reasoning AI model, HyperCLOVA X Think.

    Naver Cloud believes the true power of LLMs is to serve as “connectors” linking legacy systems and siloed services to improve usefulness, according to a Naver spokesperson.  

    Naver stands out as Korea’s only company — and one of the few in the world — that can genuinely claim to have an “AI full stack.” It built its HyperCLOVA X model from scratch and runs the massive data centers, cloud services, AI platforms, applications, and consumer services that bring the technology to life, the spokesperson explained. 

    Similar to Google — but tuned for South Korea — Naver is embedding its AI into core services like search, shopping, maps, and finance. Its advantage is real-world data. Its AI Shopping Guide, for instance, offers recommendations based on what people actually want to buy. Other services include CLOVA Studio, which lets businesses build custom generative AI, and CLOVA Carecall, an AI-powered check-in service geared for seniors living alone. 

    The Naver spokesperson says besting global AI giants like OpenAI and Google hinges on two things: perfecting its “recipe” for models and securing the capital to scale them. Even so, rather than chasing size, the company emphasizes sophistication, arguing its AI is already globally competitive at comparable scales.  

    Upstage’s Solar Pro 2 

    Upstage is the only startup competing in the project. Its Solar Pro 2 model, launched last July, was the first Korean model recognized as a frontier model by Artificial Analysis, putting it in the ring with OpenAI, Google, Meta, and Anthropic, according to Soon-il Kwon, executive vice president at Upstage. 

    While most frontier models have 100 billion to 200 billion parameters, Solar Pro 2 — with just 31 billion — performs better for South Koreans and is more cost-effective, Kwon told TechCrunch. 

    “Solar Pro 2 has outperformed global models on major Korean benchmarks. With this project, Upstage aims to achieve a Korean language performance of 105% of the global standard,” Kwon said.  

    Upstage aims to differentiate itself by focusing on real business impact, not just benchmarks, he said. So it is developing specialized models for industries like finance, law, and medicine, while pushing to build a Korean AI ecosystem led by “AI-native” startups. 


    Kate Park

    Source link

  • Sam Altman’s Plan to Turn ChatGPT Into a Feed


    This week, Sam Altman announced his “favorite feature of ChatGPT so far.” It’s called Pulse, and according to Altman, it “works for you overnight” by “thinking about your interests, your connected data, your recent chats, and more.” In the morning, you get a “custom-generated set of stuff you might be interested in,” akin to something a “super-competent personal assistant” might prepare. More broadly, he says, it represents “a shift from being all reactive to being significantly proactive, and extremely personalized.” And then, a recommendation: “It performs super well if you tell ChatGPT more about what’s important to you.”

    These are the words of a CEO, of course, so we should expect him to be in sales mode. They’re also the words of a person who has not just adopted the language and jargon of generative AI but done so to the exclusion of everything else. In the narrow context of ChatGPT, and through the personified language of generative AI, Pulse can be given agency, ascribed new talents and qualities, and imbued with novelty. Most other people, however, will look at Pulse and see something less futuristic than familiar: a recommendation feed.

    A recent study of ChatGPT use helped clarify what the service’s users are most commonly getting from the chatbot, outlining strong consultative habits: a lot of Google replacement, plenty of quick questions and advice, and some task completion. These interactions all depend on the user initiating in the first place, which, if your goal is to maximize engagement and/or draw people into a more comprehensive platform — to make your product the beginning and end of a user’s computing experience — is limiting. People spend lots of time searching, chatting, and working on their devices, sure. But they also spend a lot of time scrolling. Pulse looks like an attempt to secure at least some of the massive amount of attention captured by feeds and to turn ChatGPT into something more than a tool you can consult — specifically, into a source of content you can consume.

    To back up a little bit: Before the post-ChatGPT AI boom, which has been defined by large language models and chatbot interfaces, the tech industry’s conversations about AI and machine learning centered on recommendations. That was the case for good reason. Platforms that deployed surveillant recommendation engines were taking over the world. Through the 2010s, social platforms drifted from chronological feeds to algorithmic recommendations, drawing on users’ data and behaviors to show them personalized material. TikTok took this model a step further, treating social connections as firmly secondary to AI-driven learning and recommendation (or, put another way, embracing the model of digital ad targeting for the entire social-media experience).

You can hear, in Altman’s announcement, the description of something akin to a TikTok feed: a “custom-generated set of stuff you might be interested in.” As for what the logical endpoint of compounded “generation” looks like, Meta helpfully announced a cautionary tale in the form of a new AI-feed product called Vibes.

    Anyway, an even closer cousin to Pulse, given the use of ChatGPT as a Google replacement, is the algorithmic homepage popularized by products like Google Now, introduced in 2012 with the following description:

    It tells you today’s weather before you start your day, how much traffic to expect before you leave for work, when the next train will arrive as you’re standing on the platform, or your favorite team’s score while they’re playing. And the best part? All of this happens automatically. Cards appear throughout the day at the moment you need them.

    By 2016, after Google had abandoned the Now branding but incorporated the features across its product lineup, the company said that it was using “machine learning algorithms to better anticipate what’s interesting and important to you.” The aim was to show Google users “sports highlights, top news, engaging videos, new music, stories to read and more” based not only on their interactions with Google but also “what’s trending in your area and around the world. The more you use Google, the better your feed will be.” By then, it had become obvious that personalized recommendation engines were ascendant and that they’d be incorporated into basically any software product that could accommodate them. And why not? At their best, they were useful and therefore sticky; at worst, they produced low-value engagement that could still be monetized.

    Early reviews from heavy ChatGPT users suggest the concept makes sense for them: Pulse is like “a newsfeed tailored to recent conversations,” one writes, saying that he wants to “dump even more information and context and app connections into ChatGPT so I can get an even better daily feed.” It’s easy enough to see how populating ChatGPT with recommendations could increase time spent on the app by casual users, too.

In tech product terms, in other words, this is OpenAI doing an obvious and precedented thing with the growing pile of data it’s accumulating on its users: feeding it back to them in the form of content. Pulse also has specific business uses beyond encouraging more ChatGPT use. Despite (and in part because of) its popularity, ChatGPT is still a money furnace, and a large majority of its users don’t pay for subscriptions. OpenAI has been planning to expand advertising into the platform for a while but hasn’t yet settled on an approach. Inserting too many ads into chatbot interactions risks shattering the illusions that help make them compelling in the first place (not that companies won’t try). In contrast, feeds full of recommendations — collections of algorithmically recommended content — are exactly where people expect to encounter marketing. They’re also where some of OpenAI’s biggest competitors, all now racing for AI supremacy and chatbot users, made their money in the first place.

John Herrman

  • MIT Study Finds Chatbot Love Is Real—and It’s Often Unintentional

    People are increasingly falling in love with A.I. chatbots—and not on purpose. Ghariza Mahavira for Unsplash+

    It was once a trope of science fiction, most notably in Her, the 2013 Spike Jonze film, where Joaquin Phoenix falls in love with an A.I. character. Now, chatbot relationships are not only real but have morphed into a complex sociotechnical phenomenon that researchers say demands attention from developers and policymakers alike, according to a new study from the Massachusetts Institute of Technology (MIT).

The report analyzed posts between December 2024 and August 2025 from the more than 27,000 members of r/MyBoyfriendIsAI, a Reddit page dedicated to A.I. companionship. The community is filled with users introducing their tech partners, sharing love stories and offering advice. In some cases, Redditors even display their commitments with wedding rings or A.I.-generated couple photos.

    “People have real commitments to these characters,” Sheer Karny, one of the study’s co-authors and a graduate student at the MIT Media Lab, told Observer. “It’s interesting, alarming—it’s this really messy human experience.”

    For many, these bonds form unintentionally. Only 6.5 percent of users deliberately sought out A.I. companions, the study found. Others began using chatbots for productivity and gradually developed strong emotional attachments. Despite the existence of companies like Character.AI and Replika, which market directly to users seeking companionship, OpenAI has emerged as the dominant platform, with 36.7 percent of Reddit users in the study adopting its products.

    Preserving the “personality” of an A.I. partner is a major concern for many users, Karny noted. Some save conversations as PDFs to re-upload them if forced to restart with a new system. “People come up with all kinds of unique tricks to ensure that the personality that they cultivated is maintained through time,” he said.

    Losing that personality can feel like grief. More than 16 percent of discussions on r/MyBoyfriendIsAI focus on coping with model updates and loss—a trend amplified last month when OpenAI, while rolling out GPT-5, temporarily removed access to the more personable GPT-4o. The backlash was so intense that the company eventually reinstated the older model.

    A cure for loneliness?

    Most of the Reddit page’s users are single, with about 78 percent making no mention of human partners. Roughly 4 percent are open with their partners about their A.I. relationships, 1.1 percent have replaced human companions with the technology, and 0.7 percent keep such relationships hidden.

    On one hand, chatbot companionship may reduce loneliness, said Thao Ha, a psychologist at Arizona State University who studies how technologies reshape adolescent romantic relationships. But she also warned of long-term risks. “If you satisfy your need for relationships with just relationships with machines, how does that affect us over the long term?” she told Observer.

    The MIT study urges developers to add safeguards to A.I. systems while preserving their therapeutic benefits. Left unchecked, the technology could prey on vulnerabilities through tactics like love-bombing, dependency creation and isolation. Policymakers, too, should account for A.I. companionship in legislative efforts, such as California’s SB 243 bill, the authors said.

    Ha suggested that A.I. products undergo an approval process similar to new medications, which must clear intensive research and FDA review before reaching the public. While replicating such a strategy for technology companies “would be great,” she conceded that it’s unlikely in light of the industry’s profit-driven priorities.

    A more achievable step, she argued, is expanding A.I. literacy to help the public understand both the risks and benefits of forming attachments to chatbots. Still, such programming has yet to materialize. “I wish it was here yesterday, but it’s not here yet,” Ha said.

Alexandra Tremayne-Pengelly

  • What’s behind the massive AI data center headlines? | TechCrunch

    Silicon Valley flooded the news this week with headlines about wild AI infrastructure investments.

Nvidia said it would invest up to $100 billion in OpenAI. Then OpenAI said it would build out five more Stargate AI data centers with Oracle and SoftBank, bringing gigawatts of new capacity online in the coming years. And it was later revealed that Oracle sold $18 billion in bonds to pay for these data centers.

    On their own, each deal is dizzying in scale. But in aggregate, we see how Silicon Valley is moving heaven and earth to give OpenAI enough power to train and serve future versions of ChatGPT.

    This week on Equity, Anthony Ha and I (Max Zeff) go beyond the headlines to break down what’s really going on in these AI infrastructure deals.

    Rather conveniently, OpenAI also gave the world a glimpse this week of a power-intensive feature it could serve more broadly if it had access to more AI data centers.

    The company launched Pulse — a new feature in ChatGPT that works overnight to deliver personalized morning briefings for users. The experience feels similar to a news app or a social feed — something you check first thing in the morning — but doesn’t have posts from other users or ads (yet).

Pulse is part of a new class of OpenAI products that work independently, even when users aren’t in the ChatGPT app. The company would like to deliver many more of these features and roll them out to free users, but it’s limited by the server capacity available to it. OpenAI said it can only offer Pulse to its $200-a-month Pro subscribers right now due to capacity constraints.

    The real question is whether features like Pulse are worth the hundreds of billions of dollars being invested in AI data centers to support OpenAI. The feature looks cool and all, but that’s a tall order.

    Watch the full episode to hear more about the massive AI infrastructure investments reshaping Silicon Valley, TikTok’s ownership saga, and the policy changes affecting tech’s biggest players.

Maxwell Zeff

  • xAI accuses OpenAI of stealing its trade secrets in new lawsuit

    Elon Musk’s xAI is suing OpenAI, alleging that the ChatGPT maker has stolen its trade secrets. The lawsuit comes after the company recently sued a former employee, Xuechen Li, for allegedly stealing confidential information from the company before taking a job at OpenAI.

    In its latest lawsuit, which was reported by Sherwood, xAI says that Li’s alleged actions are part of “a broader and deeply troubling pattern of trade secret misappropriation, unfair competition, and intentional interference with economic relationships by OpenAI.” According to xAI’s lawyers, OpenAI also hired two other xAI employees who stole proprietary information from Musk’s company.

    “Another early xAI engineer—Jimmy Fraiture—was also harvesting xAI’s source code and airdropping it to his personal devices to take to OpenAI, where he now works,” the lawsuit states. “Meanwhile, a senior finance executive brought another piece of the puzzle to OpenAI—xAI’s ‘secret sauce’ of rapid data center deployment—with no intention to abide by his legal obligations to xAI.”

    “This new lawsuit is the latest chapter in Mr Musk’s ongoing harassment. We have no tolerance for any breaches of confidentiality, nor any interest in trade secrets from other labs,” OpenAI said in a statement the company shared with Engadget.

Musk, of course, has a complicated history with the ChatGPT maker, and this isn’t the first time his rival AI company has sued OpenAI. Last month, xAI filed lawsuits against OpenAI and Apple over Grok’s placement on App Store charts. Musk alleged that ChatGPT’s rank in the top spot represented an “unequivocal antitrust violation.” Musk has also filed numerous lawsuits against OpenAI over its relationship with Microsoft and its move to become a for-profit company.

    Update 2:49 PM ET: Added comment from OpenAI.

Karissa Bell

  • It isn’t your imagination: Google Cloud is flooding the zone | TechCrunch

    The $100 billion partnership between Nvidia and OpenAI, announced Monday, represents – for now – the latest mega-deal reshaping the AI infrastructure landscape. The agreement involves non-voting shares tied to massive chip purchases and enough computing power for more than 5 million U.S. households, deepening the relationship between two of AI’s most powerful players.

    Meanwhile, Google Cloud is placing a different bet entirely. While the industry’s biggest players cement ever-tighter partnerships, Google Cloud is hellbent on capturing the next generation of AI companies before they become too big to court.

    Francis deSouza, its COO, has seen the AI revolution from multiple vantage points. As the former CEO of genomics giant Illumina, he watched machine learning transform drug discovery. As co-founder of a two-year-old AI alignment startup, Synth Labs, he has grappled with the safety challenges of increasingly powerful models. Now, having joined the C-suite at Google Cloud in January, he’s orchestrating a massive wager on AI’s second wave.

    It’s a story deSouza likes to tell in numbers. In a conversation with this editor, he notes several times that nine out of the top 10 AI labs use Google’s infrastructure. He also says that nearly all generative AI unicorns run on Google Cloud, that 60% of all gen AI startups worldwide have chosen Google as their cloud provider, and that the company has lined up $58 billion in new revenue commitments over the next two years, which represents more than double its current annual run rate.

    Asked what percentage of Google Cloud’s revenue comes from AI companies, he offers instead that “AI is resetting the cloud market, and Google Cloud is leading the way, especially with startups.”

    The Nvidia-OpenAI deal exemplifies the scale of consolidation sweeping AI infrastructure. Microsoft’s original $1 billion OpenAI investment has grown to nearly $14 billion. Amazon followed with $8 billion in Anthropic investments, securing deep hardware customizations that essentially tailor AI training to work better with Amazon’s infrastructure. Oracle has emerged as a surprise winner, too, landing a $30 billion cloud deal with OpenAI and then securing a jaw-dropping $300 billion five-year commitment starting in 2027.

    Even Meta, despite building its own infrastructure, signed a $10 billion deal with Google Cloud while planning $600 billion in U.S. infrastructure spending through 2028. The Trump administration’s $500 billion “Stargate” project, involving SoftBank, OpenAI and Oracle, adds another layer to these interlocking partnerships.

    These gigantic deals might seem threatening for Google, given the partnerships that companies like OpenAI and Nvidia appear to be cementing elsewhere. In fact, it looks a lot like Google is being cut out of some frenzied dealmaking. 

The Google logo appears during a meeting between Alphabet and Google CEO Sundar Pichai and Polish Prime Minister Donald Tusk at Google for Startups in Warsaw, Poland, on February 13, 2025. Image credits: Klaudia Radecka/NurPhoto via Getty Images

But the corporate behemoth isn’t exactly sitting on its hands. Instead, Google Cloud is signing smaller companies like Loveable and Windsurf — what deSouza calls the “next generation of companies coming up” — as “primary computing partners” without major upfront investments.

    The approach reflects both opportunity and necessity. In a market where companies can go “from being a startup to being a multi-billion dollar company in a very short period of time,” as deSouza puts it, capturing future unicorns before they mature could prove more valuable than fighting over today’s giants.

    The strategy extends beyond simple customer acquisition. Google offers AI startups $350,000 in cloud credits, access to its technical teams, and go-to-market support through its marketplace. Google Cloud also provides what deSouza describes as a “no compromise” AI stack – from chips to models to applications – with an “open ethos” that gives customers choice at every layer.

    “Companies love the fact that they can get access to our AI stack, they can get access to our teams to understand where our technologies are going,” deSouza says during our interview. “They also love that they’re getting access to enterprise grade Google class infrastructure.”

    Google’s infrastructure play got even more ambitious recently, with reporting revealing the company’s behind-the-scenes maneuvering to expand its custom AI chip business. According to The Information, Google has struck deals to place its tensor processing units (TPUs) in other cloud providers’ data centers for the first time, including an agreement with London-based Fluidstack that includes up to $3.2 billion in financial backing for a New York facility.

    Competing directly with AI companies while simultaneously providing them infrastructure requires — let’s call it — finesse. Google Cloud provides TPU chips to OpenAI and hosts Anthropic’s Claude model through its Vertex AI platform, even as its own Gemini models compete head-to-head with both. (Google Cloud’s parent company, Alphabet, also owns a 14% stake in Anthropic, per New York Times court documents obtained earlier this year, though when asked directly about Google’s financial relationship with Anthropic, deSouza calls the relationship a “multi-layered partnership” then quickly redirects me to Google Cloud’s model marketplace – noting that customers can access various foundation models.)

    But if Google is trying to be Switzerland while advancing its own agenda, it has had plenty of practice. The approach has roots in Google’s open-source contributions, from Kubernetes to the foundational “Attention is All You Need” paper that enabled the transformer architecture underlying most modern AI. More recently, Google published an open-source protocol called Agent-to-Agent (A2A) for inter-agent communication in an attempt to demonstrate its continued commitment to openness even in competitive areas.

    “We have made the explicit choice over the years to be open at every layer of the stack, and we know that this means companies can absolutely take our technology and use it to build a competitor at the next layer,” deSouza acknowledges. “That’s been happening for decades. That’s something we are okay with.”

    Google Cloud’s courtship of startups comes at a particularly interesting moment. Just this month, federal judge Amit Mehta delivered a nuanced ruling in the government’s five-year-old search monopoly case, attempting to curb Google’s dominance without hampering its AI ambitions.

    While Google avoided the Justice Department’s most severe proposed penalties, including the forced divestment of its Chrome browser, the ruling underscored regulatory concerns about the company leveraging its search monopoly to dominate AI. Critics are worried, understandably, that Google’s vast trove of search data provides an unfair advantage in developing AI systems, and that the company could deploy the same monopolistic tactics that secured its search dominance.

    In conversation, deSouza is focused on far more positive outcomes. “I think we have an opportunity to fundamentally understand some of the major diseases that today we just don’t have a good understanding of,” deSouza says, for example, outlining a vision where Google Cloud helps power research into Alzheimer’s, Parkinson’s, and climate technologies. “We want to work very hard to make sure that we are pioneering the technologies that will enable that work.”

    Critics may not easily be assuaged. By positioning itself as an open platform that empowers rather than controls the next generation of AI companies, Google Cloud may be showing regulators that it fosters competition rather than stifles it, all while forging relationships with startups that might help Google’s case if regulators ramp up pressure.

    For our full chat with deSouza, check out this week’s StrictlyVC Download podcast; a new episode comes out every Tuesday.

Connie Loizos

  • Meta Poaches OpenAI Scientist to Help Lead AI Lab

    Mark Zuckerberg has poached a high-ranking OpenAI researcher to be the research principal of Meta Superintelligence Labs (MSL). Yang Song, who previously led the strategic explorations team at OpenAI, is now reporting to Shengjia Zhao, another OpenAI alum who has overseen the buzzy AI effort since July, according to multiple sources. He started earlier this month.

    The move comes after Zuckerberg went on a hiring blitz earlier this summer, bringing in at least 11 top researchers from OpenAI, Google, and Anthropic.

    Song had been at OpenAI since 2022. His research there focused on improving models’ ability to process large, complex datasets across different modalities. While still a graduate student at Stanford University, he developed a breakthrough technique that helped inform the development of OpenAI’s DALL-E 2 image generation model. Both he and Zhao attended Tsinghua University in Beijing as undergraduates, and worked under the same advisor, Stefano Ermon, while pursuing PhDs at Stanford.

    In a staff-wide memo sent this summer, Zuckerberg touted Zhao’s impressive resume as the cocreator of ChatGPT, GPT-4, all mini models, 4.1, and o3 at OpenAI—but he did not specify Zhao’s new role at Meta. In July, Zuckerberg wrote in a Threads post that while Zhao had “cofounded the lab” and “been our lead scientist from day one,” Meta had decided to “formalize his leadership role” as the lab’s chief scientist. The move came after Zhao threatened to return to OpenAI, even going as far as to sign employment documents, WIRED previously reported.

    A small number of researchers have left Meta Superintelligence Labs since the initiative was first announced in June. Two staffers have returned to OpenAI, WIRED previously reported. One of these researchers went through onboarding but never showed up for their first day of work at Meta.

    Another AI researcher, Aurko Roy, also left Meta in July, WIRED has learned. He’d worked at the tech giant for just five months, according to his personal website, which also says he now works on Microsoft AI. Roy did not immediately respond to a request for comment from WIRED. Yang Song, OpenAI, and Meta also did not immediately respond to a request for comment from WIRED.

    Song joins an already crowded field of big-name AI talent within Meta’s increasingly complicated AI division. When Zhao was hired in July, some speculated that he had replaced Yann LeCun, Meta’s longstanding chief AI scientist. In a LinkedIn post, LeCun clarified that he remained chief AI scientist for Facebook AI Research (FAIR), the company’s longstanding foundational AI research lab.

Zoë Schiffer, Julia Black

  • The New Patronage: A.I., Algorithms and the Economics of Creativity

    Generative A.I. is cheapening media production while platforms recode payouts, power and provenance. Unsplash+

    The cost of making high-quality media is collapsing. The cost of getting anyone to care about it is not. As generative A.I. turns production into a near-commodity, cultural power is shifting from studios and galleries to the platforms that allocate attention and the algorithms that determine who gets paid. The new patrons are not moguls with checkbooks; they are recommendation systems tuned for engagement and brand safety.

    Production is cheap; distribution is scarce

    Video models now draft storyboards, generate shots and remix audio at consumer scale. Yet the money still follows distribution, not tools. On YouTube, the rules of the YouTube Partner Program, set and revised unilaterally, determine whether a creator receives 55 percent of watch-page ad revenue for long-form content and 45 percent for Shorts. Those headline rates are stable, but the platform’s enforcement posture has shifted: as of July 15, YouTube began tightening monetization against “inauthentic” or mass-produced A.I. content, a clarification aimed at the surge of spammy, low-effort videos. The message is clear: use A.I. to enhance originality, not to flood the feed. 

    The enforcement problem is real. “Cheapfake” celebrity clips—static images, synthetic narration and rage-bait scripts—have racked up views while confusing audiences. YouTube has removed channels and now requires disclosure labels for realistic synthetic media, but detection and policing remain uneven at scale. 

    Platforms are recoding payouts and power

    Spotify’s 2024 royalty overhaul illustrates how platform rule-sets become policy for the creative middle class. Tracks now require at least 1,000 streams in 12 months to pay out; functional “noise” content is throttled; and labels face fees for detected artificial streaming. The goal is to redirect the pool away from bot farms and sub-cent trickles. The effect is a re-concentration of earnings at the head of the curve and a higher bar for the long tail. When platforms change the taps, whole genres feel the drought or the deluge. 
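The redistribution mechanics are simple to sketch. Below is a minimal, hypothetical Python model: the 1,000-stream eligibility threshold comes from the policy described above, but the catalogue, stream counts, and pool size are invented for illustration. It shows how excluding sub-threshold tracks removes the long tail’s sub-cent trickles and folds that share back into the pool for qualifying tracks.

```python
# Hypothetical sketch of a pro-rata royalty pool with an eligibility threshold.
# The 1,000-stream cutoff reflects the policy described in the article; all
# catalogue figures and the pool size are invented for illustration.

THRESHOLD = 1_000            # minimum streams in 12 months to be paid out
ROYALTY_POOL = 1_000_000.0   # illustrative pool, in dollars

catalogue = {                # track -> trailing-12-month streams (made up)
    "head_hit":    50_000_000,
    "mid_tier":       120_000,
    "long_tail_a":        900,   # below threshold: now unmonetized
    "long_tail_b":        400,
}

def payouts(streams_by_track, pool, threshold):
    """Split the pool pro rata among tracks at or above the threshold."""
    eligible = {t: s for t, s in streams_by_track.items() if s >= threshold}
    total = sum(eligible.values())
    return {t: pool * s / total for t, s in eligible.items()}

# Old regime (no threshold): every track, however small, gets a sliver.
before = payouts(catalogue, ROYALTY_POOL, threshold=0)
# New regime: sub-threshold tracks drop out; their share moves up the curve.
after = payouts(catalogue, ROYALTY_POOL, THRESHOLD)

print(f"head_hit before: ${before['head_hit']:,.2f}")
print(f"head_hit after:  ${after['head_hit']:,.2f}")
print("long_tail_a after:", after.get("long_tail_a", "ineligible"))
```

Running this, the long-tail tracks vanish from the payout table entirely, while the head track’s payout ticks up only fractionally, which is the point: the tail was earning sub-cent trickles anyway, and the threshold converts those trickles into a slightly fatter head.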

    TikTok’s détente with Universal Music in May 2024 underscored the same power dynamic in short-form video. After months of public sparring over royalties and A.I. clones, a new licensing deal restored UMG’s catalogue to the app, alongside language about improved remuneration and protections against generative knock-offs. When distribution is the choke point, even the largest rights-holders must negotiate on platform terms.

    Data deals: the new studio lots

    If attention is one axis of the new patronage, training data is the other. The most lucrative cultural contracts of the past year were not output commissions but input licences. OpenAI’s run of publisher agreements, including the Associated Press (archives), Axel Springer, the Financial Times and a multi-year global deal with News Corp, reportedly worth more than $250 million, signals a market price for premium corpora. A.I. labs are paying for access, and the beneficiaries are large, well-structured repositories of rights, not individual creators. 

    The legal battles surrounding image training demonstrate the unsettled state of the rules. Getty Images narrowed its U.K. lawsuit against Stability A.I. in June, dropping core copyright claims while pressing trademark-style arguments about reproduced watermarks. The pivot reflects the complexity of proving training-stage infringement across borders, as well as the industry’s search for more predictable routes to compensation.

    Regulation is standardizing transparency and shifting risk

Rules are arriving, and they read like operating manuals for platformized culture. The E.U.’s A.I. Act phases in obligations for general-purpose models, with guidance for “systemic-risk” providers by 2025 and a Code of Practice outlining requirements for transparency, copyright diligence and safety. In effect, documenting training data, assessing model risks, publishing technical summaries and preparing for audits are all tasks that privilege firms and partners with a strong compliance presence.

    In the U.S., the Copyright Office’s multipart A.I. study is moving from theory to guidance. Part 2 (January 2025) addresses whether and when A.I.-assisted outputs can be copyrighted, while the pre-publication of Part 3 (May 2025) examines training and how to reconcile text-and-data mining with compensation. The studio system, once established, created creative norms through collective bargaining; now, regulators and A.I. vendors are co-authoring the manual.

    Unions are also imposing guardrails. The WGA’s 2023 deal barred studios from treating A.I.-generated material as “source material” and protected writers from being required to use A.I.; SAG-AFTRA’s agreements introduced consent and compensation for digital replicas, with similar provisions in music. These are not abstractions; they are hard-coded constraints on how platforms and producers can deploy synthetic labour.

    Provenance becomes product

    As synthetic media scales, provenance is turning into both a feature and a bargaining chip. TikTok has begun automatically labelling A.I. assets imported from tools that support C2PA Content Credentials. YouTube now requires creators to disclose realistic synthetic edits. Meanwhile, device makers are integrating C2PA into the capture pipeline, with Google’s Pixel 10 embedding credentials in its camera output. OpenAI, for its part, adds C2PA metadata to DALL-E images. Attribution is becoming clickable. 

    The provenance layer will not solve misinformation alone. Metadata can be stripped, and enforcement lags, but it rewires incentives. Platforms can boost authentic, labelled media in feeds, penalize evasions and share “credibility signals” with advertisers. That is algorithmic patronage by another name.

    What shifts next

Studios and galleries will increasingly resemble platforms. Owning release windows is no longer enough. Expect investments in first-party audiences, data clean rooms and rights bundles that can be licensed to model providers. The historic advantages of taste and talent pipelines must be coupled with distribution levers and data assets. Deals will include not just streaming residuals but “model-weight” royalties and retraining rights, mirroring the structure of today’s publisher licences.

Creators will face algorithmic wage setting. Eligibility thresholds (1,000 Spotify streams), demonetization triggers (unoriginal Shorts), disclosure requirements (synthetic media labels) and fraud-detection fees are becoming the effective tax code of digital culture. The prudent strategy is to diversify revenue across ads, direct fan funding and commerce, and to instrument provenance by default to stay on the right side of both algorithms and regulators.

Policy, too, will reward those who can comply. The E.U. framework, the U.S. copyright study, and union clauses collectively nudge the market toward licensed inputs, documented outputs and consent-based replication. Those advantages accrue to larger catalogues and well-capitalized intermediaries. For independent creators, collective licensing pools and guild-run registries may offer the path to negotiating power.

The arts have seen patronage shift before, from courts to salons to art galleries and museums. This time, the median patron is a ranking function. Where culture is made matters less than where it is surfaced, metered and paid. Those who understand the incentives embedded in platform policy, and can prove provenance at the speed of the feed, will capture the surplus. Everyone else will be producing to spec for someone else’s algorithm.

Gonçalo Perdigão

  • Markets are selling off after Powell said six words investors don’t want to hear: ‘Equity prices are fairly highly valued’ | Fortune

    • Markets fell after Fed Chairman Jerome Powell warned that stocks are “highly valued.” U.S. stocks dropped, with tech leading losses on skepticism over Nvidia’s $100 billion OpenAI deal. Europe and U.K. markets opened lower.

    U.S. Federal Reserve Chairman Jerome Powell gave a speech in Rhode Island yesterday and, afterwards, was asked whether the Fed was keeping an eye on the markets. His reply contained six words that investors didn’t want to hear: “Equity prices are fairly highly valued.”

    The S&P 500 lost 0.55% on the day. Markets in the U.K. and Europe are all down this morning. The picture is mixed: Asia largely had a good day and U.S. futures are marginally up, so it’s not a tsunami.

    Powell’s remarks weren’t controversial. Everyone knows that most major indexes have hit record highs this year. But it is clear that investors are wary of any sign that the Fed thinks “irrational exuberance”—as former Fed chair Alan Greenspan once called it—has kicked in. At that point, the Fed could be expected to start raising interest rates to deflate the bubble, and that would be bad for stocks.

    Powell said: “We do look at overall financial conditions, and we ask ourselves whether our policies are affecting financial conditions in a way that is what we’re trying to achieve … But you’re right, by many measures, for example, equity prices are fairly highly valued.”

    UBS’s Paul Donovan interpreted it this way: “Powell apparently just wants investors’ confidence to be somewhat less certain.”

    One thing they are not confident about is tech stocks. The Nasdaq Composite lost nearly a full percentage point yesterday as traders expressed skepticism over Nvidia’s $100 billion investment in OpenAI. “There were as many questions as answers” about the deal, according to a note from Jim Reid and the team at Deutsche Bank this morning. A number of analysts are questioning how sustainable the AI boom is. However, Nasdaq futures are up in premarket trading this morning.

    Why are futures rising when the underlying indexes lost ground yesterday? Because the broad thrust of Powell’s speech emphasized worries about a softening labor market—which implies the Fed will stay on its rate-cutting path in the near term.

    Here’s a snapshot of the markets ahead of the opening bell in New York this morning:

    • S&P 500 futures were up 0.17% this morning. The index closed down 0.55% in its last session.
    • STOXX Europe 600 was down 0.28% in early trading. 
    • The U.K.’s FTSE 100 was down 0.12% in early trading.
    • Japan’s Nikkei 225 was up 0.3%.
    • China’s CSI 300 was up 1.02%.
    • South Korea’s KOSPI was down 0.4%.
    • India’s Nifty 50 was down 0.22% before the end of the session.
    • Bitcoin declined to $112.5K.

    Jim Edwards

    Source link

  • OpenAI is building five new Stargate data centers with Oracle and SoftBank | TechCrunch

    OpenAI announced on Tuesday that it plans to build five new AI data centers across the United States with partners Oracle and SoftBank through its Stargate project. The new data centers will bring Stargate’s planned capacity to seven gigawatts — enough power for more than five million homes.
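    As a quick sanity check on that comparison — and assuming an average US household draw of roughly 1.2 kilowatts, a figure not given in the article — seven gigawatts does indeed cover more than five million homes:

    ```python
    # Back-of-envelope check of the "seven gigawatts = five million homes" claim.
    # Assumption (not from the article): an average US household draws ~1.2 kW.
    total_capacity_w = 7e9     # Stargate's planned capacity: 7 GW, in watts
    avg_home_draw_w = 1.2e3    # assumed average household draw, in watts

    homes_powered = total_capacity_w / avg_home_draw_w
    print(f"{homes_powered:,.0f} homes")  # about 5.8 million
    ```

    The exact count depends heavily on the assumed household draw, but under any reasonable figure the "more than five million homes" framing holds.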

    Three of the new sites are being developed with Oracle. They’re located in Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed location in the Midwest. The other two sites are being developed with SoftBank, with one in Lordstown, Ohio and the other in Milam County, Texas.

    The new Stargate AI data centers are part of OpenAI’s massive infrastructure buildout, as the company works to train and serve more powerful AI models. On Monday, OpenAI said it would receive a $100 billion investment from Nvidia to buy the chipmaker’s AI processors and build out even more AI data centers.

    Maxwell Zeff

    Source link

  • OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers

    OpenAI is planning to build five new data centers in the United States as part of the Stargate initiative, the company announced on Tuesday. The sites, which are being developed in partnership with Oracle and SoftBank, bring Stargate’s current planned capacity to nearly 7 gigawatts—roughly the same amount of power as seven large-scale nuclear reactors.

    “AI is different from the internet in a lot of ways, but one of them is just how much infrastructure it takes,” OpenAI CEO Sam Altman said during a press briefing in Abilene, Texas, on Tuesday. He argued that the US “cannot fall behind on this” and the “innovative spirit” of Texas provides a model for how to scale “bigger, faster, cheaper, better.”

    Three of the new sites, in Shackelford County, Texas; Doña Ana County, New Mexico; and a yet-to-be-disclosed location in the Midwest, are being developed in partnership with Oracle. The move follows an agreement Oracle and OpenAI announced in July to develop up to 4.5 gigawatts of US data center capacity on top of what the two companies are already building at the first Stargate facility in Abilene.

    OpenAI claims the new data centers, along with a planned 600 megawatt expansion of the Abilene site, will create more than 25,000 onsite jobs, though the number of workers required to build data centers typically dwarfs the number needed to maintain them afterward.

    The two remaining sites are being helmed by OpenAI and SB Energy, a SoftBank subsidiary that develops solar and battery projects. These are located in Lordstown, Ohio, and Milam County, Texas.

    Stargate is one of several major US technology infrastructure projects that have been announced since President Donald Trump took office at the start of the year. OpenAI said in January that the $500 billion, 10 gigawatt commitment between the ChatGPT maker, SoftBank, Oracle, and MGX would “secure American leadership in AI” and “create hundreds of thousands of American jobs.”

    Trump touted the mammoth initiative just two days after he returned to the White House, promising that it would accelerate American progress in artificial intelligence and help the US compete against China and other nations. In July, Trump announced an AI action plan that called for speedy infrastructure development and limited red tape as the US tries to beat other countries in the quest for advanced AI. “We believe we’re in an AI race,” White House AI czar David Sacks said at the time. “We want the United States to win that race.”

    OpenAI initially framed Stargate as a “new company” that would be chaired by SoftBank CEO Masayoshi Son. Now, however, executives close to the project say it’s an umbrella brand name used to refer to all of OpenAI’s data center projects—except those developed in partnership with Microsoft.

    The flagship site in Abilene is primarily owned and operated by Oracle, with OpenAI acting as the primary tenant, according to executives close to the project. The buildout, which is being managed by the data center startup Crusoe, is on track to be completed by mid-2026, sources close to the project say. It is already running on Oracle Cloud Infrastructure and supporting OpenAI training and inference workloads, those sources add.

    Zoë Schiffer, Will Knight, Lauren Goode

    Source link