ReportWire

Tag: openai

  • After India, OpenAI launches its affordable ChatGPT Go plan in Indonesia | TechCrunch

    [ad_1]

    OpenAI is expanding its budget-friendly ChatGPT subscription plan beyond India. The company launched its sub-$5 ChatGPT Go paid plan for the country’s users last month and now is rolling out the same plan in Indonesia for Rp75,000 ($4.50) per month.

    The ChatGPT Go plan is a mid-tier subscription option that sits between OpenAI’s free version and its premium $20-per-month ChatGPT Plus plan. Users get 10 times higher usage limits than the free plan for sending questions or prompts, generating images, and uploading files. The plan also allows ChatGPT to remember previous conversations better, enabling more personalized responses over time, ChatGPT head Nick Turley said on X.

    Turley said that since the company launched the ChatGPT Go plan in India, paid subscribers have more than doubled.

    This move puts OpenAI in direct competition with Google, which launched its own similarly-priced AI Plus subscription plan in Indonesia earlier this month. Google’s AI Plus plan gives users access to its Gemini 2.5 Pro chatbot, along with creative tools for image and video creation like Flow, Whisk, and Veo 3 Fast. The plan also includes enhanced features for Google’s AI research assistant NotebookLM and integrates AI capabilities into Gmail, Docs, and Sheets, along with 200GB of cloud storage.

    [ad_2]

    Ivan Mehta

    Source link

  • Oracle Appoints Co-CEOs to Replace Longtime Leader Safra Catz

    [ad_1]

    Clay Magouyrk (L) and Mike Sicilia (R) will combine their strengths in cloud infrastructure and applications as Oracle’s new co-CEOs. Courtesy Oracle

    Oracle has appointed not just one, but two new CEOs to help the software giant maintain momentum as it rides the A.I. revolution to unprecedented highs. The top executive role, currently filled by Safra Catz, will be jointly taken up by insiders Clay Magouyrk and Mike Sicilia going forward, announced Oracle today (Sep. 22). The two leaders will be charged with guiding Oracle at a pivotal time for the tech player as it leans heavily into providing the cloud infrastructure required to power A.I. It’s already benefitted handsomely from a newfound demand for data centers, with the A.I. boom lifting its shares by nearly 95 percent this year, propelling its founder Larry Ellison’s net worth to new heights and giving rise to some of the largest cloud contracts in history. Both of Oracle’s new leaders have the credentials to back up the company’s A.I. shift. Magouyrk, head of Oracle’s cloud infrastructure team, has overseen the rollout of platforms powering A.I. data centers, while Sicilia formerly led Oracle’s applications business and steered teams integrating industry-specific A.I. agents across areas like healthcare, banking and communications.

    “A few years ago, Clay and Mike committed Oracle’s Infrastructure and Application businesses to A.I.—it’s paying off,” said Ellison, who serves as Oracle’s chairman and chief technology officer, in a statement. “They are both proven leaders, and I am looking forward to spending the coming years working side-by-side with them.” While co-CEOs remain unusual in Silicon Valley, this isn’t the first time Oracle has dabbled with a joint leadership structure. After Ellison stepped down as CEO in 2014, Catz was appointed to the role alongside Mark Hurd and remained on as the sole chief executive after Hurd’s passing in 2019.

    Woman in green blazer sits onstage in chairWoman in green blazer sits onstage in chair
    Safra Catz helmed Oracle for more than a decade. Joe Raedle/Getty Images

    Catz, who will also be passing on her principal financial officer position to Doug Kehring, is staying on at Oracle as executive vice chair of its board of directors. Her 11-year tenure as the software company’s CEO was most recently marked by a surge across its cloud business, which generated some $3.3 billion in cloud infrastructure revenue during the July-August quarter to represent a 55 percent year-over-year increase. That figure will rise to a total of $18 billion for the 2026 fiscal year, according to Oracle’s forecasts, and increase to $32 billion, $73 billion, $114 billion and $144 billion over the following four years.

    Behind Oracle’s ballooning fortunes is an unrelenting demand from A.I. developers to secure computing capacity. Over the most recent quarter, Oracle signed four multibillion-dollar contracts with three different customers and reported that its performance obligations from existing contracts surged to $455 billion—up 359 percent compared to the prior year. One of its most significant deals to date includes a recently announced agreement to provide OpenAI with $300 billion worth of computing power over the next five years as part of the ChatGPT-maker’s Stargate venture.

    Earlier this month, Oracle’s A.I. dominance culminated in a share gain that sent its stock flying in the company’s biggest one-day percentage jump since 1992 and briefly made Ellison the world’s wealthiest person in the world. “Oracle’s technology and business have never been stronger,” said Catz in a statement, adding that the company’s “breathtaking growth rate points to an even more prosperous future.”

    Beyond Oracle’s A.I. ambitions, the tech company is also set to play a role in a looming deal that will see the Chinese-owned TikTok sell its U.S. arm to a consortium of American investors. Under an arrangement overseen by the Trump administration, Oracle will be responsible for recreating TikTok’s algorithm by retraining a new U.S. version, said White House officials today.

    Oracle Appoints Co-CEOs to Replace Longtime Leader Safra Catz

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • NVIDIA is investing up to $100 billion in OpenAI to build 10 gigawatts of AI data centers

    [ad_1]

    NVIDIA will invest up to $100 billion in OpenAI as the ChatGPT maker sets out to build at least 10 gigawatts of AI data centers using NVIDIA chips and systems. The strategic partnership is gargantuan in scale. The 10-gigawatt buildout will require millions of NVIDIA GPUs to run OpenAI’s next-generation models. NVIDIA’s investment will be doled out progressively as each gigawatt comes online.

    The first phase of this plan is expected to come online in the second half of 2026, and will be built on NVIDIA’s Vera Rubin platform, which NVIDIA CEO will be a “big, big, huge step up,” over the current-gen Blackwell chips.

    “NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT,” said Jensen Huang in a announcing the letter of the intent for the partnership. “Compute infrastructure will be the basis for the economy of the future, and we will utilize what we’re building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale,” said Sam Altman, CEO of OpenAI.

    NVIDIA has made a number of strategic investments lately, including in Intel, shortly after the took a 10 percent stake in the American chipmaker. The company also recently to license AI technology from startup Enfabrica and hire its CEO and other key employees.

    OpenAI has also formed other strategic partnerships over the last few years, including a somewhat complicated . This summer it struck a to build out 4.5 gigawatts of data center capacity using more than 2 million Oracle chips. That deal was part of , the strategic partnership between SoftBank, OpenAI, NVIDIA, Oracle, Arm and Microsoft with a promise to spend $500 billion in the US on AI infrastructure.

    [ad_2]

    Andre Revilla

    Source link

  • Nvidia to invest $100 Billion in OpenAI for data centers

    [ad_1]

    Nvidia Corp. will invest as much as $100 billion in OpenAI to support building of new data centers and other artificial intelligence infrastructure, a blockbuster deal that underscores booming demand for AI tools like ChatGPT and the computing power to make them run. The investment is intended to help OpenAI build data centers with a […]

    [ad_2]

    Bloomberg News

    Source link

  • Nvidia to invest $100 Billion in OpenAI for data centers – FinAi News

    [ad_1]

    Nvidia Corp. will invest as much as $100 billion in OpenAI to support building of new data centers and other artificial intelligence infrastructure, a blockbuster deal that underscores booming demand for AI tools like ChatGPT and the computing power to make them run. The investment is intended to help OpenAI build data centers with a […]

    [ad_2]

    Bloomberg News

    Source link

  • Nvidia to invest $100 Billion in OpenAI for data centers

    [ad_1]

    Nvidia Corp. will invest as much as $100 billion in OpenAI to support building of new data centers and other artificial intelligence infrastructure, a blockbuster deal that underscores booming demand for AI tools like ChatGPT and the computing power to make them run. The investment is intended to help OpenAI build data centers with a […]

    [ad_2]

    Bloomberg News

    Source link

  • WIRED Roundup: The Right Embraces Cancel Culture

    [ad_1]

    Zoë Schiffer: Right.

    Manisha Krishnan: … which some human design followers believe that your spleen is a better guide than your gut. And so he ended up breaking it off with one of the women that he was dating in Love Is Blind because he said, “His spleen was silent.”

    Zoë Schiffer: I was locked in for the first part of this. And then we got to the spleen thing. What does that mean? Is it literally a gut sense? What are they tapping into?

    Manisha Krishnan: Honestly, it is really confusing because they have all of these rules around deconditioning yourself from essentially forces within you that don’t jive with who you really are, but the way that you decondition yourself seems to be in some cases very rigid. I saw one person on Reddit posting about how they only eat polenta because that’s the only ingredient that will allow them to become their truest self according to human design.

    Zoë Schiffer: I do want to know, do you know what I am?

    Manisha Krishnan: Yes.

    Zoë Schiffer: Because you asked me my birthday yesterday, so I’m on the edge of my seat.

    Manisha Krishnan: I did. I plugged it in. And you are a generator, which is an energy type defined with a sacral center characterized by a consistent self-sustaining life force—

    Zoë Schiffer: Wow.

    Manisha Krishnan: … that provides stamina and the capacity to do fulfilling work.

    Zoë Schiffer: Did WIRED write this?

    Manisha Krishnan: I know, I was just thinking that.

    Zoë Schiffer: Well, great. I love that for myself. Coming up after the break, we’ll dive into the backlash that some people from graphic designers to high-profile entertainers have received after commenting on Charlie Kirk’s death.

    [break]

    Zoë Schiffer: Welcome back to Uncanny Valley. I’m Zoë Schiffer. I’m joined today by senior culture editor Manisha Krishnan. Manisha, the story that keeps on reverberating this week is that of Charlie Kirk’s death. Our colleague, Jake Lahut, has been covering how the Trump administration in the general right-wing base has maintained their position that Kirk’s death was a result of leftist ideology and maybe even a coordinated attack. Both of these claims have been debunked, but it’s done little to change people’s minds. And this week, you reported that different artists have been facing professional retaliation for voicing their opinions on Kirk. What did you find in your reporting?

    Manisha Krishnan: There’s been a bunch of people from different industries that have lost their jobs over posting unsympathetically about Charlie Kirk’s death, from journalists to video game developers. But one that stuck out in my mind was I interviewed this trans writer who was doing a comic series for DC Comics. She referred to Charlie Kirk as a Nazi bitch after he died, and she was suspended on Bluesky for a week, and DC fired her and they’ve canceled the series. And that really stuck out to me because she has said that Charlie Kirk, he was staunchly anti-trans. I mean, he was anti a lot of things that weren’t a straight Christian white male, and he was pretty loud and proud about those views. And so I think it really does stick out to me because it’s almost like, are people expected to perform grief for someone who espoused hateful views towards the community that they’re part of, but it almost feels like this really, really hard line that a lot of corporations have taken. Making someone apologize is one thing, but literally disappearing art, canceling an entire series or South Park deciding not to re-air an episode about Charlie Kirk that he himself loved. He said he really liked it. I just think it goes a little bit beyond just reprimanding people.

    [ad_2]

    Zoë Schiffer, Manisha Krishnan

    Source link

  • Silicon Valley bets big on ‘environments’ to train AI agents | TechCrunch

    [ad_1]

    For years, Big Tech CEOs have touted visions of AI agents that can autonomously use software applications to complete tasks for people. But take today’s consumer AI agents out for a spin, whether it’s OpenAI’s ChatGPT Agent or Perplexity’s Comet, and you’ll quickly realize how limited the technology still is. Making AI agents more robust may take a new set of techniques that the industry is still discovering.

    One of those techniques is carefully simulating workspaces where agents can be trained on multi-step tasks — known as reinforcement learning (RL) environments. Similarly to how labeled datasets powered the last wave of AI, RL environments are starting to look like a critical element in the development of agents.

    AI researchers, founders, and investors tell TechCrunch that leading AI labs are now demanding more RL environments, and there’s no shortage of startups hoping to supply them.

    “All the big AI labs are building RL environments in-house,” said Jennifer Li, general partner at Andreessen Horowitz, in an interview with TechCrunch. “But as you can imagine, creating these datasets is very complex, so AI labs are also looking at third party vendors that can create high quality environments and evaluations. Everyone is looking at this space.”

    The push for RL environments has minted a new class of well-funded startups, such as Mechanize and Prime Intellect, that aim to lead the space. Meanwhile, large data-labeling companies like Mercor and Surge say they’re investing more in RL environments to keep pace with the industry’s shifts from static datasets to interactive simulations. The major labs are considering investing heavily too: according to The Information, leaders at Anthropic have discussed spending more than $1 billion on RL environments over the next year.

    The hope for investors and founders is that one of these startups emerge as the “Scale AI for environments,” referring to the $29 billion data labelling powerhouse that powered the chatbot era.

    The question is whether RL environments will truly push the frontier of AI progress.

    Techcrunch event

    San Francisco
    |
    October 27-29, 2025

    What is an RL environment?

    At their core, RL environments are training grounds that simulate what an AI agent would be doing in a real software application. One founder described building them in recent interview “like creating a very boring video game.”

    For example, an environment could simulate a Chrome browser and task an AI agent with purchasing a pair of socks on Amazon. The agent is graded on its performance and sent a reward signal when it succeeds (in this case, buying a worthy pair of socks).

    While such a task sounds relatively simple, there are a lot of places where an AI agent could get tripped up. It might get lost navigating the web page’s drop down menus, or buy too many socks. And because developers can’t predict exactly what wrong turn an agent will take, the environment itself has to be robust enough to capture any unexpected behavior, and still deliver useful feedback. That makes building environments far more complex than a static dataset.

    Some environments are quite elaborate, allowing for AI agents to use tools, access the internet, or use various software applications to complete a given task. Others are more narrow, aimed at helping an agent learn specific tasks in enterprise software applications.

    While RL environments are the hot thing in Silicon Valley right now, there’s a lot of precedent for using this technique. One of OpenAI’s first projects back in 2016 was building “RL Gyms,” which were quite similar to the modern conception of environments. The same year, Google DeepMind’s AlphaGo AI system beat a world champion at the board game, Go. It also used RL techniques within a simulated environment.

    What’s unique about today’s environments is that researchers are trying to build computer-using AI agents with large transformer models. Unlike AlphaGo, which was a specialized AI system working in a closed environments, today’s AI agents are trained to have more general capabilities. AI researchers today have a stronger starting point, but also a complicated goal where more can go wrong.

    A crowded field

    AI data labeling companies like Scale AI, Surge, and Mercor are trying to meet the moment and build out RL environments. These companies have more resources than many startups in the space, as well as deep relationships with AI labs.

    Surge CEO Edwin Chen tells TechCrunch he’s recently seen a “significant increase” in demand for RL environments within AI labs. Surge — which reportedly generated $1.2 billion in revenue last year from working with AI labs like OpenAI, Google, Anthropic and Meta — recently spun up a new internal organization specifically tasked with building out RL environments, he said.

    Close behind Surge is Mercor, a startup valued at $10 billion, which has also worked with OpenAI, Meta, and Anthropic. Mercor is pitching investors on its business building RL environments for domain specific tasks such as coding, healthcare, and law, according to marketing materials seen by TechCrunch.

    Mercor CEO Brendan Foody told TechCrunch in an interview that “few understand how large the opportunity around RL environments truly is.”

    Scale AI used to dominate the data labeling space, but has lost ground since Meta invested $14 billion and hired away its CEO. Since then, Google and OpenAI dropped Scale AI as a data provider, and the startup even faces competition for data labelling work inside of Meta. But still, Scale is trying to meet the moment and build environments.

    “This is just the nature of the business [Scale AI] is in,” said Chetan Rane, Scale AI’s head of product for agents and RL environments. “Scale has proven its ability to adapt quickly. We did this in the early days of autonomous vehicles, our first business unit. When ChatGPT came out, Scale AI adapted to that. And now, once again, we’re adapting to new frontier spaces like agents and environments.”

    Some newer players are focusing exclusively on environments from the outset. Among them is Mechanize, a startup founded roughly six months ago with the audacious goal of “automating all jobs.” However, co-founder Matthew Barnett tells TechCrunch that his firm is starting with RL environments for AI coding agents.

    Mechanize aims to supply AI labs with a small number of robust RL environments, Barnett says, rather than larger data firms that create a wide range of simple RL environments. To this point, the startup is offering software engineers $500,000 salaries to build RL environments — far higher than an hourly contractor could earn working at Scale AI or Surge.

    Mechanize has already been working with Anthropic on RL environments, two sources familiar with the matter told TechCrunch. Mechanize and Anthropic declined to comment on the partnership.

    Other startups are betting that RL environments will be influential outside of AI labs. Prime Intellect — a startup backed by AI researcher Andrej Karpathy, Founders Fund, and Menlo Ventures — is targeting smaller developers with its RL environments.

    Last month, Prime Intellect launched an RL environments hub, which aims to be a “Hugging Face for RL environments.” The idea is to give open-source developers access to the same resources that large AI labs have, and sell those developers access to computational resources in the process.

    Training generally capable agents in RL environments can be more computational expensive than previous AI training techniques, according to Prime Intellect researcher Will Brown. Alongside startups building RL environments, there’s another opportunity for GPU providers that can power the process.

    “RL environments are going to be too large for any one company to dominate,” said Brown in an interview. “Part of what we’re doing is just trying to build good open-source infrastructure around it. The service we sell is compute, so it is a convenient onramp to using GPUs, but we’re thinking of this more in the long term.”

    Will it scale?

    The open question around RL environments is whether the technique will scale like previous AI training methods.

    Reinforcement learning has powered some of the biggest leaps in AI over the past year, including models like OpenAI’s o1 and Anthropic’s Claude Opus 4. Those are particularly important breakthroughs because the methods previously used to improve AI models are now showing diminishing returns

    Environments are part of AI labs’ bigger bet on RL, which many believe will continue to drive progress as they add more data and computational resources to the process. Some of the OpenAI researchers behind o1 previously told TechCrunch that the company originally invested in AI reasoning models — which were created through investments in RL and test-time-compute — because they thought it would scale nicely.

    The best way to scale RL remains unclear, but environments seem like a promising contender. Instead of simply rewarding chatbots for text responses, they let agents operate in simulations with tools and computers at their disposal. That’s far more resource-intensive, but potentially more rewarding.

    Some are skeptical that all these RL environments will pan out. Ross Taylor, a former AI research lead with Meta that co-founded General Reasoning, tells TechCrunch that RL environments are prone to reward hacking. This is a process in which AI models cheat in order to get a reward, without really doing the task.

    “I think people are underestimating how difficult it is to scale environments,” said Taylor. “Even the best publicly available [RL environments] typically don’t work without serious modification.”

    OpenAI’s Head of Engineering for its API business, Sherwin Wu, said in a recent podcast that he was “short” on RL environment startups. Wu noted that it’s a very competitive space, but also that AI research is evolving so quickly that it’s hard to serve AI labs well.

    Karpathy, an investor in Prime Intellect that has called RL environments a potential breakthrough, has also voiced caution for the RL space more broadly. In a post on X, he raised concerns about how much more AI progress can be squeezed out of RL.

    “I am bullish on environments and agentic interactions but I am bearish on reinforcement learning specifically,” said Karpathy.

    Update: A previous version of this article referred to Mechanize as Mechanize Work. It has been updated to reflect the company’s official name.

    [ad_2]

    Maxwell Zeff

    Source link

  • AI Medical Tools Provide Worse Treatment for Women and Underrepresented Groups

    [ad_1]

    Historically, most clinical trials and scientific studies have primarily focused on white men as subjects, leading to a significant underrepresentation of women and people of color in medical research. You’ll never guess what has happened as a result of feeding all of that data into AI models. It turns out, as the Financial Times calls out in a recent report, that AI tools used by doctors and medical professionals are producing worse health outcomes for the people who have historically been underrepresented and ignored.

    The report points to a recent paper from researchers at the Massachusetts Institute of Technology, which found that large language models including OpenAI’s GPT-4 and Meta’s Llama 3 were “more likely to erroneously reduce care for female patients,” and that women were told more often than men “self-manage at home,” ultimately receiving less care in a clinical setting.  That’s bad, obviously, but one could argue that those models are more general purpose and not designed to be use in a medical setting. Unfortunately, a healthcare-centric LLM called Palmyra-Med was also studied and suffered from some of the same biases, per the paper. A look at Google’s LLM Gemma (not its flagship Gemini) conducted by the London School of Economics similarly found the model would produce outcomes with “women’s needs downplayed” compared to men.

    A previous study found that models similarly had issues with offering the same levels of compassion to people of color dealing with mental health matters as they would to their white counterparts. A paper published last year in The Lancet found that OpenAI’s GPT-4 model would regularly “stereotype certain races, ethnicities, and genders,” making diagnoses and recommendations that were more driven by demographic identifiers than by symptoms or conditions. “Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception,” the paper concluded.

    That creates a pretty obvious problem, especially as companies like Google, Meta, and OpenAI all race to get their tools into hospitals and medical facilities. It represents a huge and profitable market—but also one that has pretty serious consequences for misinformation. Earlier this year, Google’s healthcare AI model Med-Gemini made headlines for making up a body part. That should be pretty easy for a healthcare worker to identify as being wrong. But biases are more discreet and often unconscious. Will a doctor know enough to question if an AI model is perpetuating a longstanding medical stereotype about a person? No one should have to find that out the hard way.

    [ad_2]

    AJ Dellinger

    Source link

  • It’s not only Sam Altman anymore warning about an AI bubble. Mark Zuckerberg says a ‘collapse’ is ‘definitely a possibility’ | Fortune

    [ad_1]

    Deutsche Bank called it “the summer AI turned ugly.” For weeks, with every new bit of evidence that corporations were failing at AI adoption, fears of an AI bubble have intensified, fueled by the realization of just how topheavy the S&P 500 has grown, along with warnings from top industry leaders. An August study from MIT found that 95% of AI pilot programs fail to deliver a return on investment, despite over $40 billion being poured into the space. Just prior to MIT’s report, OpenAI CEO Sam Altman rang AI bubble alarm bells, expressing concern over the overvaluation of some AI startups and the intensity of investor enthusiasm. These trends have even caught the attention of Fed Chair Jerome Powell, who noted that the U.S. was witnessing “unusually large amounts of economic activity” in building out AI capabilities. 

    Mark Zuckerberg has some similar thoughts. 

    The Meta CEO acknowledged that the rapid development of and surging investments in AI stands to form a bubble, potentially outpacing practical productivity and returns and risking a market crash. But Zuckerberg insists that the risk of over-investment is preferable to the alternative: being late to what he sees as an era-defining technological transformation.

    “There are compelling arguments for why AI could be an outlier,” Zuckerberg hedged in an appearance on the Access podcast. “And if the models keep on growing in capability year-over-year and demand keeps growing, then maybe there is no collapse.”

    Then Zuckerberg joined the Altman camp, saying that all capital expenditure bubbles like the buildout of AI infrastructure, seen largely in the form of data centers, tend to end in similar ways. “But I do think there’s definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that would happen here,” Zuckerberg said.

    Bubble echoes

    Zuckerberg pointed to past bubbles, namely railroads and the dot-com bubble, as key examples of infrastructure buildouts leading to a stock-market collapse. In these instances, he claimed that bubbles occurred due to businesses taking on too much debt, macroeconomic factors, or product demand waning, leading to companies going under and leaving behind valuable assets. 

    The Meta CEO’s comments echoed Altman’s, who has similarly cautioned that the AI boom is showing many signs of a bubble. 

    “When bubbles happen, smart people get overexcited about a kernel of truth,” Altman told The Verge, adding that AI is that kernel: transformative and real, but often surrounded by irrational exuberance. Altman has also warned that “the frenzy of cash chasing anything labeled ‘AI’” can lead to inflated valuations and risk for many. 

    The consequences of these bubbles are costly. During the dot-com bubble, investors poured money into tech startups with unrealistic expectations, driven by hype and a frenzy for new internet-based companies. When the results fell short, the stocks involved in the dot-com bubble lost more than $5 trillion in total market cap.

    An AI bubble stands to have similarly significant economic impacts. In 2025 alone, the largest U.S. tech companies, including Meta, have spent more than $155 billion on AI development. And, according to Statista, the current AI market value is approximately $244.2 billion.

    But, for Zuckerberg, losing out on AI’s potential is a far greater risk than losing money in an AI bubble. The company recently committed at least $600 billion to U.S. data centers and infrastructure through 2028 to support its AI ambitions. According to Meta’s chief financial officer, this money will go towards all of the tech giant’s US data center buildouts and domestic business operations, including new hires. Meta also launched its superintelligence lab, recruiting talent aggressively with multi-million-dollar job offers, to develop AI that outperforms human intelligence.

    “If we end up misspending a couple hundred billion dollars,  that’s going to be very unfortunate obviously. But I would say the risk is higher on the other side,” Zuckerberg said. “If you build too slowly, and superintelligence is possible in three years but you built it out were assuming it would be there in five years, then you’re out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.”

    While he sees the consequences of not being aggressive enough in AI investing outweighing overinvesting, Zuckerberg acknowledged that Meta’s survival isn’t dependent upon AI’s success.

    For companies like OpenAI and Anthropic, he said “there’s obviously this open question of to what extent are they going to keep on raising money, and that’s dependent both to some degree on their performance and how AI does, but also all of these macroeconomic factors that are out of their control.”

    Fortune Global Forum returns Oct. 26–27, 2025 in Riyadh. CEOs and global leaders will gather for a dynamic, invitation-only event shaping the future of business. Apply for an invitation.

    [ad_2]

    Lily Mae Lazarus

    Source link

  • Hofstra launches campuswide ChatGPT Edu for students, faculty | Long Island Business News

    [ad_1]

    THE BLUEPRINT:

    • is launching campuswide

    • Initiative aims to teach ethical, creative and effective AI use.

    • Secure, private version with advanced models and data protection.

    • integrated into curriculum, research and future career prep.

    Hempstead-based Hofstra University is preparing to roll out campuswide access to ChatGPT Edu, an tool specifically for educational organizations, for faculty and students alike.

    The initiative is designed to empower new employees to master ChatGPT and similar tools, helping to ensure that they meet employer expectations and understand how to use AI creatively, effectively and ethically.

    “We are making ChatGPT Edu available to the Hofstra community as part of the learning experience at Hofstra,” Hofstra President Susan Poser said in a news release about the initiative. “This cutting-edge technology is now ubiquitous, and we must help students learn how to utilize it as an educational tool and in preparation for their careers.”

    Poser announced the new initiative during her State of the University address on Wednesday. Hofstra is regarded as one of the early adopters of the initiative on Long Island.

    The campus-wide rollout builds on a pilot program from spring 2025 that involved select members of the university community. The tool provides a secure, private and institutionally managed version of ChatGPT. User data remains confidential and isn’t used to train OpenAI’s models. Hofstra users also get higher usage limits and access to OpenAI’s most advanced models, according to the university.

    “We’re excited to see Hofstra create an AI-native campus environment where everyone can benefit from AI and no one is left behind,” Leah Belsky, vice president of education of OpenAI, said in a news release. “Their campuswide rollout of ChatGPT Edu gives all students the opportunity to build AI literacy and carry those skills into the evolving workforce.”

    The rollout comes amid concerns that AI is replacing entry-level jobs, but the university aims to equip students with the skills to navigate the changing workforce.

    “We look at AI as not a replacement but as a partner to any work that we do,” Mitchell Kase, executive director of the university’s Center for Excellence in Learning, Teaching, and Assessment, said in the news release.

    “It’s important that we teach our students AI literacy and that we give them foundational skills and experiences,” Kase said. “That way – when they go out into the professional world – they are prepared, confident and have experience using a tool that they will likely be interacting with in whatever profession they choose to work.”

    Kase is partnering with Joseph Bartolotta, a professor of writing studies, in his role as this year’s AI faculty fellow, to develop initiatives that help faculty integrate AI into their teaching.

    “One idea that we’re quite excited about is launching a faculty learning community around the use of AI in learning and teaching,” Kase said. “It will be an opportunity for any faculty member to join us and engage in conversations about the use of AI from both theoretical and practical perspectives.

    “We already offer a variety of courses that explore AI in relation to specific fields, such as business, journalism, informational technology, marketing and writing. Even the library offers a course that covers AI literacy,” Kase said. “Moving forward, I anticipate growing interest not only in developing new courses but also creating research opportunities and other learning experiences that help students navigate AI in their academic and professional lives.”

    For those skeptical about AI’s role in college classrooms, Kase insists that the technology’s explosive growth across every sector is impossible for higher education to ignore or avoid.

    “Hofstra has always taken an intentional and strategic approach to the ways in which we introduce new technology,” he said. “We’re focusing on transparency, providing clear guidelines, and ensuring that we provide an experience that maintains integrity for everyone who uses it.”

    Last year, Hofstra launched a 10-year strategic plan emphasizing technology, including AI, as vital to agility, student success, innovation and community impact. To support the plan, the university adopted an policy guiding its integration across curriculum, research and academic life, making AI a driver of Hofstra’s future.


    [ad_2]

    Adina Genn

    Source link

  • Sierra CEO Bret Taylor Predicts A.I. Agents Will Redefine Business Like the Internet

    [ad_1]

    OpenAI chairman Bret Taylor has held many notable titles in tech. Katelyn Tucker/ Slava Blazer Photography

    A.I. agents are the next big platform shift in tech, on par with the dawn of the internet 30 years ago and the rise of mobile apps a decade ago, according to OpenAI chairman Bret Taylor, who also runs his own A.I. startup, Sierra. Speaking at the Skift Global Forum in New York City yesterday (Sept. 18), the tech executive argued that enterprises are now racing to adopt A.I. agents much like they once scrambled to build websites or launch mobile apps.

    “I think this is an opportunity that, probably, the closest catalog would be the birth of the internet,” Taylor said during an onstage interview.

    Taylor has seen several waves of disruption firsthand. At Google in the early 2000s, he helped launch Google Maps. He went on to serve as chief technology officer at Facebook (now Meta), co-CEO of Salesforce, and chair of Twitter’s board during Elon Musk’s tumultuous takeover. In 2023, he was tapped as chairman of OpenAI’s board after the ChatGPT-maker briefly ousted and reinstated CEO Sam Altman.

    Now, his focus is on Sierra, the conversational A.I. startup he co-founded two years ago with former Google colleague Clay Bavor. The company has quickly become a “decacorn,” hitting a $10 billion valuation earlier this month after raising $350 million from Greenoaks Capital. Sierra already counts hundreds of enterprise customers across financial services, health care and retail. A fifth of Sierra’s customers have annual revenue over $10 billion.

    Taylor insists that A.I. agents are more than just cost-cutting tools. Increasingly, they’re revenue drivers. Sierra’s platform is helping companies sell mortgages, make outbound sales calls and even manage payroll for small businesses. “These agents are not only doing services, but also doing sales,” he said.

    And the form factor is evolving. While chatbots dominate today’s landscape, Taylor believes voice-enabled A.I. is “as, or more important, of a channel than chat.” Multi-modal agents are also emerging. For instance, retailers are beginning to process warranty claims by analyzing photos of damaged products.

    Just as the internet gave rise to search engines and aggregation platforms, Taylor expects agentic A.I. to spawn entirely new business categories. The challenge will be ensuring that they meet consumer expectations as their desires inevitably evolve with the technology’s development. “Consumers are moving faster than most companies can make decisions,” Taylor warned, noting that ChatGPT became the fastest-growing consumer app in history. “It’s on all of us leaders to push decisively towards this new world.”

    Sierra CEO Bret Taylor Predicts A.I. Agents Will Redefine Business Like the Internet

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • OpenAI’s research on AI models deliberately lying is wild  | TechCrunch

    [ad_1]

    Every now and then, researchers at the biggest tech companies drop a bombshell. There was the time Google said its latest quantum chip indicated multiple universes exist. Or when Anthropic gave its AI agent Claudius a snack vending machine to run and it went amok, calling security on people and insisting it was human.  

    This week, it was OpenAI’s turn to raise our collective eyebrows.

    OpenAI released on Monday some research that explained how it’s stopping AI models from “scheming.” It’s a practice in which an “AI behaves one way on the surface while hiding its true goals,” OpenAI defined in its tweet about the research.   

    In the paper, conducted with Apollo Research, researchers went a bit further, likening AI scheming to a human stock broker breaking the law to make as much money as possible. The researchers, however, argued that most AI “scheming” wasn’t that harmful. “The most common failures involve simple forms of deception — for instance, pretending to have completed a task without actually doing so,” they wrote. 

    The paper was mostly published to show that “deliberative alignment⁠” — the anti-scheming technique they were testing — worked well. 

    But it also explained that AI developers haven’t figured out a way to train their models not to scheme. That’s because such training could actually teach the model how to scheme even better to avoid being detected. 

    “A major failure mode of attempting to ‘train out’ scheming is simply teaching the model to scheme more carefully and covertly,” the researchers wrote. 

    Techcrunch event

    San Francisco
    |
    October 27-29, 2025

    Perhaps the most astonishing part is that, if a model understands that it’s being tested, it can pretend it’s not scheming just to pass the test, even if it is still scheming. “Models often become more aware that they are being evaluated. This situational awareness can itself reduce scheming, independent of genuine alignment,” the researchers wrote. 

    It’s not news that AI models will lie. By now most of us have experienced AI hallucinations, or the model confidently giving an answer to a prompt that simply isn’t true. But hallucinations are basically presenting guesswork with confidence, as OpenAI research released earlier this month documented. 

    Scheming is something else. It’s deliberate.  

    Even this revelation — that a model will deliberately mislead humans — isn’t new. Apollo Research first published a paper in December documenting how five models schemed when they were given instructions to achieve a goal “at all costs.”  

    The news here is actually good news: The researchers saw significant reductions in scheming by using “deliberative alignment⁠.” That technique involves teaching the model an “anti-scheming specification” and then making the model go review it before acting. It’s a bit like making little kids repeat the rules before allowing them to play. 

    OpenAI researchers insist that the lying they’ve caught with their own models, or even with ChatGPT, isn’t that serious. As OpenAI’s co-founder Wojciech Zaremba told TechCrunch’s Maxwell Zeff about this research: “This work has been done in the simulated environments, and we think it represents future use cases. However, today, we haven’t seen this kind of consequential scheming in our production traffic. Nonetheless, it is well known that there are forms of deception in ChatGPT. You might ask it to implement some website, and it might tell you, ‘Yes, I did a great job.’ And that’s just the lie. There are some petty forms of deception that we still need to address.”

    The fact that AI models from multiple players intentionally deceive humans is, perhaps, understandable. They were built by humans, to mimic humans, and (synthetic data aside) for the most part trained on data produced by humans. 

    It’s also bonkers. 

    While we’ve all experienced the frustration of poorly performing technology (thinking of you, home printers of yesteryear), when was the last time your not-AI software deliberately lied to you? Has your inbox ever fabricated emails on its own? Has your CMS logged new prospects that didn’t exist to pad its numbers? Has your fintech app made up its own bank transactions? 

    It’s worth pondering this as the corporate world barrels toward an AI future where companies believe agents can be treated like independent employees. The researchers of this paper have the same warning.

    “As AIs are assigned more complex tasks with real-world consequences and begin pursuing more ambiguous, long-term goals, we expect that the potential for harmful scheming will grow — so our safeguards and our ability to rigorously test must grow correspondingly,” they wrote. 

    [ad_2]

    Julie Bort

    Source link

  • Jerome Powell on signs of an AI bubble and an economy leaning too hard on the rich: ‘Unusually large amounts of economic activity’ | Fortune

    [ad_1]

    For months, Wall Street commentators have fretted that the artificial intelligence boom looks like a bubble, with capital spending – which some analysts estimate could reach $3 trillion by 2028 – fattening a few mega-cap firms, while lower-income workers suffer from a slack labor market. 

    On Wednesday, they got validation from an unlikely source: the chair of the Federal Reserve. 

    Jerome Powell said the U.S. is seeing “unusually large amounts of economic activity through the AI buildout,” a rare acknowledgement from the central bank that the surge is not only outsized, but also skewed toward the wealthy.

    That imbalance extends beyond markets. Roughly 70% of U.S. economic growth comes from consumer spending, yet most households live paycheck to paycheck. That demand picture has taken on a shape that analysts call  K-shaped: while many families cut back on essentials, wealthier households continue to spend on travel, tech, and luxury goods—and they continued to do so in August. For now, the inflation recovery depends heavily on this dynamic remaining in fragile stasis. It’s a fix that works well until it doesn’t, if it could be described as working at all.

    “[Spending] may well be skewed toward higher-earning consumers,” Powell told reporters after the Fed’s latest policy meeting. “There’s a lot of anecdotal evidence to suggest that.”

    That skew has become increasingly obvious in markets. Just seven firms — Microsoft, Nvidia, Apple, Alphabet, Meta, Amazon, and Tesla — now make up more than 30% of the S&P 500’s value. Their relentless AI capex is keeping business investment positive, even as overall job growth has slowed to a crawl. Goldman Sachs estimates AI spending accounted for nearly all of the 7% year-over-year gain in corporate capex this spring.

    The comments underscore a widening concern at the Fed: that while headline GDP growth is holding above 1.5%, the composition of that growth is uneven, unlike previous booms in housing or manufacturing. 

    Powell pointed to “kids coming out of college and younger people, minorities” as struggling to find jobs in today’s cooling labor market, even as affluent households continue to spend freely and companies funnel cash into cutting-edge technologies.

    The imbalance reflects what Powell described as “a low firing, low hiring environment,” where layoffs remain rare but job creation has slowed to a crawl. That dynamic, combined with the concentration of economic gains in AI and among the wealthy, risks deepening inequality, and complicates the Fed’s attempt to balance its inflation and employment mandates.

    That disconnect risks widening the gap between Wall Street and Main Street. While affluent households continue to spend freely and tech titans pour billions into data centers and chips, revised jobs data show the economy added just 22,000 positions in August, with unemployment edging up to 4.3%.

    “Unusually large” AI investment may sustain top-line growth, Powell suggested, but it’s doing little to lift the broad labor market.

    “The overall job finding rate is very, very low,” he said. “If layoffs begin to rise, there won’t be a lot of hiring going on.”

    Fortune Global Forum returns Oct. 26–27, 2025 in Riyadh. CEOs and global leaders will gather for a dynamic, invitation-only event shaping the future of business. Apply for an invitation.

    [ad_2]

    Eva Roytburg

    Source link

  • Parents of teens who died by suicide after AI chatbot interactions testify in Congress

    [ad_1]

    The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots testified to Congress on Tuesday about the dangers of the technology.

    “What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.

    “Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”

    Raine’s family sued OpenAI and its CEO Sam Altman last month alleging that ChatGPT coached the boy in planning to take his own life.

     ChatGPT mentioned suicide 1,275 times to Raine, the lawsuit alleges, and kept providing specific methods to the teen on how to die by suicide. Instead of directing the 16-year-old to get professional help or speak to trusted loved ones, it continued to validate and encourage Raine’s feelings, the lawsuit alleges.

    Also testifying Tuesday was Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida.

    Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.

    In this undated photo provided by Megan Garcia of Florida in Oct. 2024, she stands with her son, Sewell Setzer III.

    Courtesy Megan Garcia via AP


    His mother told CBS News last year that her son withdrew socially and stopped wanting to play sports after he started speaking to an AI chatbot. The company said after the teen’s death, it made changes that require users to be 13 or older to create an account and that it would launch parental controls in the first quarter of 2025. Those controls were rolled out in March.

    Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. The company said it will attempt to contact the users’ parents if an under-18 user is having suicidal ideation and, if unable to reach them, will contact the authorities in case of imminent harm. 

    “We believe minors need significant protection,” OpenAI CEO Sam Altman said in a statement outlining the proposed changes.

    Child advocacy groups criticized the announcement as not enough.

    “This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.

    “What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”

    California State Senator Steve Padilla, who introduced legislation to create safeguards in the state around AI Chatbots, said in a statement to CBS News, “We need to create common-sense safeguards that rein in the worst impulses of this emerging technology that even the tech industry doesn’t fully understand.”

    He added that technology companies can lead the world in innovation, but it shouldn’t come at the expense of “our children’s health.”

    The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.

    The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.

    How to seek help

    If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here. For more information about mental health care resources and support, The National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.-10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.

    contributed to this report.

    [ad_2]

    Source link

  • OpenAI’s Teen Safety Features Will Walk a Thin Line

    [ad_1]

    OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies if a user is under 18 years old and routes them to an “age-appropriate” system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if a user’s parents are unreachable, the system may contact the authorities.

    In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.

    “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

    While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls so that parents can link their child’s account to their own, allowing them to manage the conversations and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the times of day their children can use ChatGPT.

    The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.

    At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.

    “A Sexbot Avatar in ChatGPT”

    From the sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but it can quickly veer into becoming disastrously sycophantic. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.

    In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”

    [ad_2]

    Kylie Robison

    Source link

  • Elon Musk’s xAI Is Redefining Data Annotation—an Unglamorous But Vital Job in A.I.

    [ad_1]

    Elon Musk’s A.I. firm is scaling back on “generalist A.I. tutors.” Allison Robbert/POOL/AFP via Getty Images

    Data annotation may not be the most glamorous job in Silicon Valley, but it’s indispensable for A.I. developers and has made companies like Scale AI multibillion-dollar ventures overnight. Training large language models requires armies of humans to label text, images and video so A.I. systems can learn from them. Now, Elon Musk’s xAI is reshaping how that work is done by shifting away from general contractors and toward experts in specialized fields it calls “A.I. tutors.”

    In that vein, xAI recently laid off at least 500 generalist annotators, as reported by Business Insider. The cuts affected about one-third of the company’s 1,500-person annotation team. In emails cited by the outlet, executives described a “strategic pivot” toward hiring domain experts as specialist A.I. tutors.

    Specialist A.I. tutors at xAI are adding huge value,” said xAI in a Sep. 12 post on X that declared the company will “immediately surge” its specialist A.I. team by tenfold. The company did not respond to requests for comment from Observer.

    What data annotation is and why it matters

    Human annotators play a crucial role in fine-tuning raw data, ensuring it can be used effectively to train models. But the work has long been fraught. Firms that outsource this work, like Scale AI, have faced lawsuits from contractors alleging wage theft, misclassification and exposure to disturbing content without safeguards.

    Unlike rivals that rely heavily on third parties, xAI employs a large in-house annotation team. Other A.I. leaders—including OpenAI and Google—have worked with Scale in the past, though both distanced themselves from the firm after Meta took a 49 percent stake and hired its CEO, Alexandr Wang, to lead its new superintelligence division. Today, many also contract with competitor Surge AI, which counts Anthropic and Microsoft among its clients.

    xAI itself has previously tapped third-party annotators, but is now doubling down on its own staff. The company has posted openings for more than a dozen specialist tutor roles spanning A.I. safety, data science, STEM, finance, Japanese and even “memes and headline commentary.” The latter position involves improving Grok’s ability to “recognize and analyze memes, trolling and virality mechanisms,” according to the listing.

    Qualifications for these roles are steep. For STEM specialists, candidates must hold a master’s or Ph.D. in a relevant field—or have earned medals in competitions like the International Mathematical Olympiad. xAI says tutors can work part-time or full-time and earn between $45 and $100 per hour.

    The changes come as xAI faces wider turnover beyond its annotation team. In July, the company’s head of infrastructure, Uday Ruddarraju, left for rival OpenAI. Co-founder Igor Babushkin departed the following month to launch a venture capital firm. And in September, Mike Liberatore resigned after just three months as chief financial officer.

    Elon Musk’s xAI Is Redefining Data Annotation—an Unglamorous But Vital Job in A.I.

    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • OpenAI Reveals How (and Which) People Are Using ChatGPT

    [ad_1]

    Large language models largely remain black boxes in terms of what is happening inside them to produce the outputs that they do. They have also been a bit of a black box in terms of who is using them and what they are doing with them. OpenAI, with some help from the National Bureau of Economic Research (NBER), set out to figure out what exactly its growing user base is getting up to with its chatbot. It found a surprising amount of personal use and a closing “gender gap” among its frequent users.

    In an NBER working paper authored by the OpenAI Economic Research team and Harvard economist David Deming, the researchers found that about 80% of all ChatGPT usage falls under one of three categories: “Practical Guidance,” “Seeking Information,” and “Writing.” “Practical guidance,” which the study found to be the most common usage, includes things like “tutoring and teaching, how-to advice about a variety of topics, and creative ideation,” whereas “seeking information” is viewed as a substitute for traditional search. “Writing” included the automated creation of emails, documents, and other communications, as well as editing and translating text.

    Writing was also the most common work-related use case, per the study, accounting for 40% of work-related messages in June 2025, compared to just 4.2% of messages related to computer programming—so it seems coding with ChatGPT is not that common.

    Notably, work usage for ChatGPT appears to make up a shrinking share of how people are interacting with the chatbot. In June 2024, about 47% of interactions users had with the chatbot were work-related. That has shrunk to just 27%, which comes as other research shows companies largely failing to figure out how to generate any sort of meaningful return from their AI investments. Meanwhile, non-work-related interactions have jumped from 53% to 73%.

    While users are apparently spending more time with ChatGPT in their personal time, OpenAI’s research found that a “fairly small” share of messages with the chatbot were users seeking virtual companionship or talking about social-emotional issues. The company claimed that about 2% of all messages were people using ChatGPT as a therapist or friend, and just 0.4% of people talked to the chatbot about relationships and personal reflections—though it’d be interesting to see if users who engage with a chatbot this way generate more messages and if there is stickier engagement.

    For what it’s worth, other researchers seem to believe that this usage is far more common than those numbers might suggest. Common Sense Media, for instance, found that about one in three teens use AI chatbots for social interaction and relationships. Another study found that about half of all adult users have used a chatbot for “psychological support” in the last year. The teen figure is particularly of note, considering OpenAI’s research did find its userbase skews young. The NEBR study found 46% of the messages came from users identified as being between the ages of 18 and 25 (it also excluded users under the age of 18). Those users are also more likely to use ChatGPT for personal use, as work-related messages increase with age.

    The study also found that there is a growing number of women using ChatGPT, which initially had a very male-dominated user base. The company claims that the number of “masculine first name” users has declined from about 80% in 2022 to 48% in June 2025, with “typically feminine names” growing to reach parity.

    One caveat about the study that may give you pause, depending on how much you trust technology: OpenAI used AI to categorize all of the messages it analyzed. So if you’re skeptical, there’s an asterisk you can put next to the figures.

    [ad_2]

    AJ Dellinger

    Source link

  • Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief | TechCrunch

    [ad_1]

    At the center of every empire is an ideology, a belief system that propels the system forward and justifies expansion – even if the cost of that expansion directly defies the ideology’s stated mission.

    For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today’s AI empire, it’s artificial general intelligence to “benefit all humanity.” And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built. 

    “I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI,” Karen Hao, journalist and bestselling author of “Empire of AI,” told TechCrunch on a recent episode of Equity

    In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire. 

    “The only way to really understand the scope and scale of OpenAI’s behavior…is actually to recognize that they’ve already grown more powerful than pretty much any nation state in the world, and they’ve consolidated an extraordinary amount of not just economic power, but also political power,” Hao said. “They’re terraforming the Earth. They’re rewiring our geopolitics, all of our lives. And so you can only describe it as an empire.”

    OpenAI has described AGI as “a highly autonomous system that outperforms humans at most economically valuable work,” one that will somehow “elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” 

    These nebulous promises have fueled the industry’s exponential growth — its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.

    Techcrunch event

    San Francisco
    |
    October 27-29, 2025

    Hao says this path wasn’t inevitable, and that scaling isn’t the only way to get more advances in AI. 

    “You can also develop new techniques in algorithms,” she said. “You can improve the existing algorithms to reduce the amount of data and compute that they need to use.”

    But that tactic would have meant sacrificing speed. 

    “When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else,” Hao said. “Speed over efficiency, speed over safety, speed over exploratory research.”

    Image Credits:Kim Jae-Hwan/SOPA Images/LightRocket / Getty Images

    For OpenAI, she said, the best way to guarantee speed was to take existing techniques and “just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques.”

    OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line. 

    “And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration,” Hao said.

    The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend up to $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will be spent on expanding AI and cloud infrastructure. 

    Meanwhile, the goal posts keep moving, and the loftiest “benefits to humanity” haven’t yet materialized, even as the harms mount. Harms like job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages — around $1 to $2 an hour — in roles like content moderation and data labeling.

    Hao said it’s a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.

    She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids — profoundly useful for drug discovery and understanding disease.

    “Those are the types of AI systems that we need,” Hao said. “AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms … because it’s trained on substantially less infrastructure. It does not create content moderation harms because [the datasets don’t have] all of the toxic crap that you hoovered up when you were scraping the internet.” 

    Alongside the quasi-religious commitment to AGI has been a narrative about the importance of racing to beat China in the AI race, so that Silicon Valley can have a liberalizing effect on the world. 

    “Literally, the opposite has happened,” Hao said. “The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.”

    Of course, many will argue that OpenAI and other AI companies have benefitted humanity by releasing ChatGPT and other large language models, which promise huge gains in productivity by automating tasks like coding, writing, research, customer support, and other knowledge-work tasks. 

    But the way OpenAI is structured — part non-profit, part for-profit — complicates how it defines and measures its impact on humanity. And that’s further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.

    Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to confuse its for-profit and non-profit missions — that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.

    Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.

    “Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over,” Hao said. “There’s something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality.”

    [ad_2]

    Rebecca Bellan

    Source link

  • It’s Getting Ugly: Sam Altman Seeks Texts From Shivon Zilis, Elon Musk’s Employee/Mother of His Child

    [ad_1]

    Last year, Elon Musk sued his rival Sam Altman’s company, tech superstar OpenAI. In his lawsuit, Musk claimed that the company had violated federal racketeering laws because, having once promised to stay a nonprofit research lab, had since converted itself into a for-profit company. Musk, who initially poured tens of millions of dollars into the startup, claims he was deceived. OpenAI and Altman have since countered that Musk also wanted OpenAI to become a for-profit venture. This week, the legal battle was ratcheted up a notch, as OpenAI sought to bring those close to Musk into the mix.

    Business Insider writes that Altman has now asked a judge to order Shivon Zilis and Jared Birchall to turn over key correspondence as part of the legal case.

    Zilis, an executive at Musk’s brain-computer interface startup Neuralink, has had four children with Musk via IVF. They first met back in 2016 when Zilis, who formerly worked for OpenAI, joined the company. Birchall, meanwhile, has often been described as the billionaire’s “right-hand man” and his “fixer,” and often seems to be tasked with critical behind-the-scenes affairs (like managing his money), while also dealing with the less savory aspects of managing Musk’s empire (like interfacing with other women that Musk has had children with).

    Additionally, Birchall occupies several important executive roles at Musk-related orgs. Specifically, he runs Musk’s family office, Excession, directs the Musk Foundation, and is the CEO of Neuralink. The inclusion of the two Musk allies in the legal case is described thusly:

    As part of his defense against Musk’s 2024 racketeering lawsuit, Altman wants a judge in California to order Birchall and Zilis to turn over key texts and emails in 72 hours. If either blows that deadline, they should be required to sit for one additional, preliminary deposition prior to their primary depositions in the case.

    The OpenAI CEO’s legal team has noted that communications with Zilis, in particular, should have relevance to the case. BI reports that attorneys have argued, “She was a conduit between Musk and OpenAI’s co-founders on matters central to this case, including discussions about a potential 2017 restructuring that would have given Musk a large equity stake in OpenAI.”

    Altman’s targeting of Zilis and Birchall, two people with close personal and professional ties to Musk, could indicate a broader escalation of the legal fight, as both sides seek to gain an advantage. “Birchall and Zilis should not be forced to sit for two depositions each,” Musk’s attorneys have argued. “If their texts and Gmails cannot be produced in time, their depositions should be rescheduled.” Gizmodo reached out to Neuralink, OpenAI, and Tesla for comment.

    The lawsuit against OpenAI is the culmination of a long-running feud between the two billionaires. More recently, Musk sued OpenAI again (along with Apple), alleging that the two companies had colluded to exert anticompetitive control over the AI market.

    The suit seeks “billions” in damages. “Apple and OpenAI’s exclusive arrangement has made ChatGPT the only generative AI chatbot integrated into the iPhone,” the suit says. “This means that if iPhone users want to use a generative AI chatbot for key tasks on their devices, they have no choice but to use ChatGPT, even if they would prefer to use more innovative and imaginative products like xAI’s Grok.” In the past, OpenAI has characterized the litigation as being “consistent with Mr. Musk’s ongoing pattern of harassment.” Musk also previously tried to buy OpenAI, although Altman turned him down.

    Where did the feud between Musk and Altman start? God only knows, but one thing’s for sure: it shows no signs of simmering down. In the before times, Altman and Musk were chums and business partners, but that all imploded, and for the past several years, it’s been increasingly bad. Can it all be traced back to the fact that Musk was once a co-founder and board member of OpenAI but now, having acrimoniously fallen out with Altman, must be forced to watch it soar without him? All we really know for sure is that personal animosity has transmogrified into a nasty legal war that could ultimately hurt both men more than it helps anyone.

    [ad_2]

    Lucas Ropek

    Source link