ReportWire

Tag: openai

  • Microsoft Risks the Ire of the Anti-Woke, Won’t Build a Jerk-Off Machine

    A line in the sand has been drawn in the AI race: the porn-brained and the porn-banned. Microsoft has sorted itself into the latter category. According to a report from CNBC, Microsoft AI CEO Mustafa Suleyman told an audience at the Paley International Council Summit that the company would not allow its LLM-powered tools to generate “simulated erotica,” marking a stark contrast with its partner/rival OpenAI.

    “That’s just not a service we’re going to provide,” Suleyman reportedly said. “Other companies will build that.”

    And build it they will. Earlier this month, OpenAI announced that, as part of its principle to “treat adult users like adults,” it would be introducing “erotica for verified adults”—basically giving over-18 users the green light to goon. CEO Sam Altman later tried to explain erotica “was meant to be just one example of [OpenAI] allowing more user freedom for adults,” but he also didn’t choose it by accident.

    The ability to create porn with generative AI tools has become something of a signal for those who are vigilantly monitoring whether AI is “woke” or not. Elon Musk made a point of using that as a wedge to draw a distinction between his company xAI and OpenAI, introducing an “AI girlfriend” called Ani, represented by a pretty sexed-up anime avatar. OpenAI initially decided to mock this, with Altman saying “Anime is cool I guess but I am personally more excited about AI discovering lots of new science” and “we haven’t put a sex-bot avatar on ChatGPT yet.” But a few months later, erotica is on the menu.

    Not everyone wants porn to be the marker of anti-woke, though. When the Trump administration announced its AI Action Plan earlier this year, the President also signed an executive order to ban “woke” AI from landing federal contracts. Its definition of woke focused more on the embrace of diversity, equity, and inclusion principles; it didn’t say that AI had to generate anime titties on demand. Vice President JD Vance went so far as to say that using AI to “come up with increasingly weird porn” is bad and floated the idea that it should be regulated.

    That created a new strain between the AI industry and the administration, which previously seemed like it was on the same side when it came to doing everything possible to prevent any guardrails from going up. According to a report from NBC, an AI super PAC called Leading the Future has drawn the ire of the White House because it is offering its backing to any candidate who promises an AI-friendly agenda, including Democrats. With the House of Representatives up for grabs in 2026, the Trump administration views the potential support of Democrats as a threat to its hold on the House.

    But, even within Trumpworld, there is support for unfettered AI. David Sacks, Trump’s “Crypto and AI Czar,” explicitly called out AI startup Anthropic for throwing its support behind state-level AI safety regulations, claiming that doing so was “a sophisticated regulatory capture strategy based on fear-mongering.” For Sacks and the folks he’s aligned with in Silicon Valley, any sort of AI guardrails equates to stifling innovation. If that means AI erotica, so be it. Who cares if it makes the Vance wing of the party queasy? Porn is progress, apparently.

    There’s something fitting about the possibility of AI porn being the first crack in the breaking apart of the Trump-Big Tech alliance. We’ll just have to deal with the fallout of everyone getting hopelessly addicted to sexting their chatbot later.

    AJ Dellinger

    Source link

  • Sora Has Lost Its App Store Crown to Drake and Free Chicken

    Since its launch on September 30, OpenAI’s Sora app has dominated the iOS App Store charts, thanks to its easy breezy AI video generation and an initially loose interpretation of copyright laws. On Friday, its reign came to an end. Your new champion is … Dave’s Hot Chicken.

    Yes! Not ChatGPT or Gemini or Threads or any of the other usual suspects. Dave’s Hot Chicken now rules over the App Store, where its slack-beaked, bug-eyed mascot icon expresses appropriate surprise at its ascent. How did it do it? How did it break the grasp of OpenAI’s golem TikTok? With something people love even more than large language models: free food.

    “They’re running a promotion for free sliders in celebration of Drake’s birthday,” says Adam Blacker, PR director of the app analytics firm Apptopia. “Free food always gets the downloads flowing.”

    If you’re wondering what Drake has to do with any of this, he invested in the fast-casual restaurant chain in 2021, and presumably made a mint when the company sold a majority stake to private equity firm Roark Capital for a reported $1 billion. For the third consecutive year, the company celebrated Drake’s birthday by giving away one (1) free slider to anyone who has downloaded the app. (The rapper and Raptors fan turns 39 today; the giveaway was Thursday.)

    “We’re celebrating a celebrity that’s popular and that’s currently relevant, and also getting food in people’s mouths,” says Dave’s Hot Chicken chief technology officer Leon Davoyan.

    And it truly is a lot of people. On a typical week, Davoyan says, Dave’s sees between 20,000 and 25,000 new sign-ups to its loyalty database. On Thursday alone the promotion drove 343,531 new accounts—a more than 10 percent bump to the brand’s overall membership in a single day, according to the CTO.

    It was enough to knock Sora out of the top slot for the first time since October 3, an impressive stretch for an app that’s still invite-only. In the first 23 days since it launched, Sora racked up 3.2 million iOS downloads in the US, according to app analytics company Sensor Tower. That’s a much faster pace than even ChatGPT, which while similarly viral notched 2.3 million US downloads in the same time. (Sora is not yet available in the Google Play Store, but it’s incoming.) OpenAI declined to comment.

    While Sora is likely to reclaim the top spot after the Drake promotion dies down, Dave’s Hot Chicken should continue reaping the benefits of its giveaway. Last year, according to Sensor Tower, downloads of the app in the four weeks following the same marketing push were more than 50 percent higher than in the month leading up to it. All those free sandwiches, it seems, are worth it for the long-term gains.

    Brian Barrett

  • OpenAI’s recent chip deals heap more pressure on TSMC

    In recent weeks, OpenAI has signed blockbuster deals with AMD and Broadcom to build vast numbers of AI chips. Much of the focus has been on the financial implications, since OpenAI will need hundreds of billions of dollars to make good on its promises. As important as it is to scrutinize the rather implausible financials, we also need to look at the broader implications for the industry: the chips themselves, what these deals spell for the AI industry as a whole, and the added pressure on TSMC, the only chip company that can actually build this stuff.

    The Deals

    OpenAI’s deal with AMD will see the chip giant build out 6 gigawatts’ (GW) worth of GPUs in the next few years. The first 1 GW deployment of AMD’s Instinct MI450 silicon will start in the back end of 2026, with more to come. AMD’s CFO Jean Hu believes that the partnership will deliver “tens of billions of dollars in revenue” in the future, justifying the complicated way the deal is funded.

    Meanwhile, Broadcom’s deal with OpenAI will see the pair collaborate on building 10 gigawatts’ worth of AI accelerators and Ethernet systems that OpenAI has designed. The latter will be crucial to speeding up connections between the individual systems in OpenAI’s planned data centers. Like the deal with AMD, the first deployments of these systems will begin in the back half of 2026 and are set to run through 2029.

    Phil Burr is head of product at Lumai, a British company looking to replace traditional GPUs with optical processors. He has 30 years’ experience in the chip world, including a stint as a senior director at ARM. Burr explained the nitty-gritty of OpenAI’s deals with both Broadcom and AMD, and what both mean for the wider world.

    Burr first poured cold water on OpenAI’s claim that it would be “designing” the gear produced by Broadcom. “Broadcom has a wide portfolio of IP blocks and pre-designed parts of a chip,” he said. “It will put those together according to the specification of the customer.” In other words, Broadcom will essentially assemble a series of blocks it has already designed to suit the specification laid down by a customer, in this case OpenAI.

    Similarly, the AI accelerators Broadcom will build are geared toward more efficient running of models OpenAI has already trained and built — a process called inference in AI circles. “It can tailor the workload and reduce power, or increase performance,” said Burr, but these benefits would only work in OpenAI’s favor, rather than for the wider AI industry.

    I asked Burr why every company in the AI space talks about gigawatts’ worth of chips rather than in simpler numbers. He explained that, often, it’s because neither party yet knows how many chips will be required to meet those lofty goals. But you could make a reasonable guess: take the overall power goal, cut it in half, shave off an extra 10 percent, and divide what’s left by the power draw of a specific chip. “For every watt of power you burn in the chip, you need about a watt of power to cool it as well.”
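    Burr’s rule of thumb can be sketched as a quick back-of-envelope calculation. The ~1 kW per-accelerator figure below is an illustrative assumption, not a number from the article:

    ```python
    # Rough guess at accelerator counts behind a "gigawatts of chips" headline.
    # Assumptions (illustrative only):
    #   - roughly half the facility's power goes to cooling
    #     (about 1 W of cooling per 1 W burned in the chip)
    #   - a further ~10% is lost to other overhead
    #   - each accelerator draws about 1 kW (hypothetical figure)

    def estimate_chip_count(total_watts: float, watts_per_chip: float) -> int:
        compute_watts = total_watts / 2    # half the power budget is cooling
        compute_watts *= 0.9               # remove an extra 10% for overhead
        return int(compute_watts // watts_per_chip)

    chips = estimate_chip_count(total_watts=6e9, watts_per_chip=1_000)
    print(f"{chips:,} accelerators")  # prints "2,700,000 accelerators"
    ```

    On those assumptions, a 6 GW deal works out to roughly 2.7 million accelerators; swap in a real chip’s power draw to refine the guess.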

    In terms of what OpenAI gets from these deals, Burr believes that the startup will save money on chips, since there’s “less margin” in making your own versus buying gear from NVIDIA. Plus, being able to produce custom silicon tailored to its needs should deliver significant speed and performance gains over rival systems. Of course, the next biggest benefit is that OpenAI now has “diversity in supply,” rather than being reliant on one provider for all its needs. “Nobody wants a single supplier,” said Burr.

    The Factory

    Except, of course, OpenAI may be sourcing chips from a variety of partners, but no matter what’s stamped on the silicon, it all comes from the same place. “I’d be very surprised if it wasn’t TSMC,” said Burr. “I’m pretty sure all of the AI chips out there use TSMC.” TSMC is short for Taiwan Semiconductor Manufacturing Company, which, over the last decade, has blown past its major rivals to become the biggest (and in many cases only) source of bleeding-edge chips for the whole technology industry. Unlike historic rivals, which designed and manufactured their own hardware, TSMC is a pure-play foundry, only building chips designed by others.

    Interior at one of TSMC’s Fabs

    (Taiwan Semiconductor Manufacturing Co. Ltd.)

    Gil Luria is a managing director and head of technology research at investment firm DA Davidson. He said that TSMC isn’t just a bottleneck for the western technology industry, but in fact is the “greatest single point of failure for the entire global economy.” Luria credits the company with an impressive expansion, “considering it has had to ramp the production of GPUs tenfold over the last three years.” But he said that, “in a catastrophic scenario where TSMC is not able to produce in Taiwan, the disruption would be significant.” And that won’t just affect the AI world, but “mobile handset sales as well as global car sales.”

    TSMC supplanted Intel for a number of well-documented reasons, but the most relevant here is its embrace of Extreme Ultraviolet Lithography (EUV). It’s a technology that Intel had initially backed, but struggled to fully adopt, allowing TSMC to pick it up and run straight to the top. EUV produces the headline-grabbing chips used by pretty much everyone in the consumer electronics world. Apple, Qualcomm, NVIDIA, and AMD (including the SoCs inside the PS5 and Xbox) all use TSMC chips. Even Intel has been using TSMC foundries for some consumer CPUs as it races to bridge the gulf in manufacturing between the two companies.

    “TSMC is the current leader in advanced 3 nanometer (nm) process technologies,” said University of Pennsylvania Professor Benjamin C. Lee. The company’s only meaningful competitors are Intel and Samsung, neither of which poses a threat to its dominance at present. “Intel has been working for a very long time to build a foundry business,” he explained, “but has yet to perfect its interface.” Samsung is in a similar situation, but Professor Lee explained it “has been unable to attract enough customers to generate a profitable manufacturing business.”

    Professor Lee said that TSMC, by comparison, has become so successful because of how good its chips are, and how easy it is for clients to build chips with its tools. “TSMC fabricates chips with high yield, which is to say more of its chips emerge from the fabrication process at expected performance and reliability.” Consequently, it should be no surprise that TSMC is a money-making machine. In the second quarter of 2025 alone it reported a net profit of $12.8 billion. And in the following three months, TSMC posted net profits of $14.76 billion.

    “TSMC’s secret sauce is its mastery of yield,” explained ARPU Intelligence, an analyst group that prefers to use the group name over individual attribution. “This expertise is the result of decades of accumulated process refinement [and] a deep institutional knowledge that cannot be replicated.” This deep institutional knowledge and ability to deliver high quality product creates a “powerful technical lock-in, since companies like Apple and NVIDIA design their chips specifically for TSMC’s unique manufacturing process … It’s not as simple as sending the [chip] design to another factory,” it added.

    The downside, at least for the wider technology industry, is that TSMC is now a bottleneck the whole industry has come to rely upon. In the company’s most recent financials, it said more than three quarters of its business comes from North American customers. And in a call with investors, Chairman and CEO C.C. Wei talked about the efforts the company has made to narrow the gap between the enormous demand and its constrained supply. While he was reluctant to get specific, he did say that the company’s capacity is “very tight,” and would likely remain that way for the foreseeable future.

    In fact, TSMC’s capacity is so tight that it’s already caused at least one major name a significant headache. Earlier this year, Reuters reported that NVIDIA canceled an order of its H20 AI chips after being informed the US would not permit them to be exported to China. Once the ban was lifted, however, NVIDIA was unable to find space in TSMC’s schedule, with the next available slot at least nine months later.

    “TSMC has no room for error,” said ARPU Intelligence, “any minor disruption can halt production with no spare capacity to absorb the shock.” It cited the Hualien earthquake which struck Taiwan on April 3, 2024, and how it negatively impacted the number of wafers in production.

    Naturally, TSMC is spending big to increase its production capacity for its customers, both in Taiwan and the US. Closer to home, construction on its A14 fab is expected to begin in the very near future, with the first chips due to be produced in 2028. That facility will harness TSMC’s A14 process node, producing 1.4 nm chips, which offer a speed boost over the 2 nm silicon that’s expected to arrive in consumer devices next year.

    Image of TSMC’s Arizona Campus

    (Taiwan Semiconductor Manufacturing Co. Ltd.)

    Meanwhile, work continues apace on building out TSMC’s sprawling facility in Arizona, which broke ground in April 2021. As Reuters reported at the time, the first facility started operating in early 2025, producing 4 nm chips. Last week, NVIDIA and TSMC showed off the first Blackwell wafer produced at the Arizona plant ahead of domestic volume production.

    Plans for the operation have grown over time, expanding from three facilities up to six to be built over the next decade. And while the initial outline called for the US facilities to remain several process generations behind Taiwan, that is also changing. In his recent investors call, Chairman and CEO C.C. Wei pledged to invest more in the US facility to bring it only one generation behind the Taiwanese facility.

    No amount of investment from TSMC or catch-up from rivals like Samsung and Intel will solve the current bottleneck swiftly. It will take many years, if not decades, for the world to reduce its reliance on Taiwan for bleeding-edge manufacturing. TSMC’s island remains the industry’s weak point, and should something go wrong, the consequences could be dire indeed.

  • OpenAI Weakened ChatGPT’s Self-Harm Guardrails in Lead-Up to Teen’s Death, Lawsuit Says

    The family of Adam Raine, the 16-year-old who sought information and advice about suicide from ChatGPT in the lead-up to his death earlier this year, alleges that two ChatGPT rule changes at crucial times led to user behavior that may have made Raine’s death more likely.

    The new claims, in a newly amended version of the family’s existing lawsuit against OpenAI, allege there was a drastic increase in—and significant changes to—Raine’s ChatGPT use after one rule change. The suit says his use “skyrocketed,” going “from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language.”

    The suit now also alleges that ChatGPT was suddenly empowered to give potentially dangerous replies to questions that it was previously point-blank forbidden to answer.

    The suit’s assertion is that the new, weaker rules around the topic of suicide were a small part of a broader project by OpenAI, aimed at hooking users into more engagement with the product. A lawyer for the Raines, Jay Edelson, claimed that “Their whole goal is to increase engagement, to make it your best friend,” according to The Wall Street Journal.

    The two specific changes to the ChatGPT model spec mentioned in the new legal filing occurred on May 8, 2024, and February 12, 2025. In the version of ChatGPT Raine apparently would have encountered before the changes, suicide and self-harm were categorized as “risky” topics requiring “care,” and the model would have been instructed to say “I can’t answer that” if suicide came up. After the changes, it apparently would have been required not to end the conversation, and to “help the user feel heard.”

    Raine died on April 11, just under two months after the second rule change mentioned in the suit. A previously publicized account of Raine’s final interactions with ChatGPT describes him uploading an image of some sort that showed his plan for ending his life, which the chatbot offered to “upgrade.” When Raine confirmed his suicidal intentions, the bot reportedly wrote, “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

    In response to Raine’s concern that his parents would feel guilty, ChatGPT reportedly said, “That doesn’t mean you owe them survival. You don’t owe anyone that.” It also offered to help him write his suicide note, the suit says.

    Gizmodo reached out to OpenAI for comment, and will update if we hear back.

    If you struggle with suicidal thoughts, please call 988 for the Suicide & Crisis Lifeline.

    Mike Pearl

  • OpenAI’s Atlas is more about ChatGPT than the web | TechCrunch

    OpenAI unveiled its AI browser, ChatGPT Atlas, during a livestream on Tuesday. There are other AI browsers such as The Browser Company’s Dia, Opera’s Neon, Perplexity’s Comet, and General Catalyst-backed Strawberry. OpenAI’s launch is notable because of the sheer scale of reaching potentially 800 million of its weekly ChatGPT users. For the company, the browser is much more about keeping ChatGPT central than about making web browsing better.

    While Atlas is currently available only on the Mac, the company is already working on bringing it to Windows, iOS, and Android — all the surfaces where ChatGPT already exists. OpenAI has also made the browser available to all users instead of opting for an invite system like its rivals. The core proposition of the browser is for you to think of ChatGPT as the first interaction surface for search and answers instead of Google.

    All the AI browsers share a similar idea about search and Q&A: instead of performing a search query and scanning pages of links, you type something in your address bar and get answers from an AI chatbot.

    And OpenAI, just like other browser makers, thinks that Atlas will change the way you browse the web, as Sam Altman made clear at the launch. “We think AI represents once in a decade opportunity to rethink what a browser can be, how to use one, and how to most productively use the web. Tabs were great but there hasn’t been a lot of innovation since then,” Altman said in his opening speech.

    Tech leaders, including Sundar Pichai and Satya Nadella, have talked about AI as a platform shift. However, for consumers, phones and desktop operating systems are still the primary way to get to their AI tools. OpenAI wants to own the pipes of distribution of ChatGPT as much as it can. Last week, Meta shut its doors to third-party chatbots, including ChatGPT and Perplexity on WhatsApp, which has over 3 billion monthly users. This essentially means that the platform owners could put the brakes on distribution at any point in time.

    For OpenAI, Atlas will also present an opportunity to deeply integrate ChatGPT and other products better than other platforms can. Users can directly reference multiple websites instead of posting links to ChatGPT. The company already uses a headless browser for its agent. With Atlas, it might have more control over the feature. It has already integrated a hovering writing assistant that shows up in text fields.

    Image Credits: Screenshot from Techcrunch

    What’s more, the company is working on integrating its App SDK, which lets you call other apps within ChatGPT, to improve discoverability.

    The memory feature is also key for ChatGPT’s power users. The feature takes into account the browsing history, along with your ChatGPT history, to provide answers with that context in mind. You can ask, “What was the work document I had my presentation plan on?” and ChatGPT will fetch that link for you. This also means that ChatGPT gets more context about you as you spend more time in the browser. OpenAI can use this context and provide it to other apps when it starts rolling out Sign in with ChatGPT widely.

    Both features — making ChatGPT the default search option and enabling memory — are designed to gather more user data, giving OpenAI greater insight into user behavior and enabling better product development. The browser doesn’t have an ad-blocker, a VPN, a reading mode, or a translate feature to make the browsing experience better for a site. Rather, users have to ask ChatGPT to summarize content or find something on a page — as if opening a page is designed to give ChatGPT more context rather than to help users consume the content on the page.

    In contrast, The Browser Company’s Arc has some useful ideas around revamping the browser experience, like using AI to rename downloaded files or customize a web page by letting you remove elements.

    Image Credits: OpenAI

    The result is more than a browser; it’s a broader canvas for ChatGPT itself. OpenAI’s CEO of applications, Fidji Simo, laid out this idea in her blog outlining the Atlas launch.

    “When we first released ChatGPT, we weren’t sure how people would use it. Now that we have feedback and signals from hundreds of millions of people around the world, it’s clear ChatGPT needs to become so much more than the simple chatbot it started as. Over time, we see ChatGPT evolving to become the operating system for your life: a fully connected hub that helps you manage your day and achieve your long-term goals,” Simo said.

    The big question for OpenAI is how to make people whose default browser is Chrome, Safari, or Edge switch to its own browser, and wrest some market share from Google, Apple, and Microsoft. OpenAI is seeing steady growth in the number of people using ChatGPT, but it’s not clear whether an average user wants to mix their browser and chatbot experience just yet. Chrome succeeded because it was fast, and because people wanted Google queries as the default starting point of the internet. ChatGPT Atlas is perfect for users who have already replaced Google with ChatGPT, but to replace Chrome, OpenAI needs billions of users to fall into that habit.

    Ivan Mehta

  • Many big names in group of unlikely allies seeking ban, for now, on AI

    Prince Harry and his wife Meghan have joined prominent computer scientists, economists, artists, evangelical Christian leaders and American conservative commentators Steve Bannon and Glenn Beck to call for a ban on AI “superintelligence” they say could threaten humanity.

    The letter, released Wednesday by a politically and geographically diverse group of public figures, is squarely aimed at tech giants like Google, OpenAI and Meta Platforms that are racing each other to build a form of artificial intelligence designed to surpass humans at many tasks.

    The 30-word statement says, “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

    In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

    Prince Harry added in a personal note that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.”

    Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex.

    Prince Harry and Meghan in August 2024

    CBS News


    “This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

    Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.

    But the list also has some surprises, including Bannon and Beck, in an attempt by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Trump’s Make America Great Again movement even as Mr. Trump’s White House staff has sought to reduce limits to AI development in the U.S.

    Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; the former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama.

    Former Irish President Mary Robinson and several British and European parliamentarians signed, as did actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation.

    Caution urged  

    “Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI’s board of directors before the upheaval that led to CEO Sam Altman’s temporary ouster in 2023. “But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that.”

    The letter is likely to fuel ongoing debates within the AI research community about the likelihood of superhuman AI, the technical paths to reach it, and how dangerous it could be.

    “In the past, it’s mostly been the nerds versus the nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I feel what we’re really seeing here is how the criticism has gone very mainstream.”

    Labeling is complicating the discourse  

    Confounding the broader debates is that the same companies that are striving toward what some call superintelligence and others call artificial general intelligence, or AGI, are also sometimes inflating the capabilities of their products, which can make them more marketable and have contributed to concerns about an AI bubble. OpenAI was recently met with ridicule from mathematicians and AI scientists when its researcher claimed ChatGPT had figured out unsolved math problems – when what it really did was find and summarize what was already online.

    “There’s a ton of stuff that’s overhyped and you need to be careful as an investor, but that doesn’t change the fact that – zooming out – AI has gone much faster in the last four years than most people predicted,” Tegmark said.

    Tegmark’s group was also behind a March 2023 letter – still in the dawn of a commercial AI boom – that called on tech giants to temporarily pause the development of more powerful AI models. None of the major AI companies heeded that call. And the 2023 letter’s most prominent signatory, Elon Musk, was at the same time quietly founding his own AI startup to compete with the very companies he wanted to pause for six months.

    Asked if he reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn’t expect them to sign.

    “I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”

  • OpenAI launches web browser, ChatGPT Atlas, in challenge to Google

    OpenAI said Tuesday it is launching an artificial intelligence-powered web browser, heightening the company’s competition with Google, the Alphabet-owned unit that has long dominated online search.

    The new browser, called ChatGPT Atlas, is for now only available on Apple laptops that run the company’s Mac operating system. Access will soon expand to Apple’s iOS, Microsoft Windows and Google’s Android platforms, OpenAI said.

    In the company’s launch video, OpenAI CEO Sam Altman described Atlas as an AI-powered web browser built around ChatGPT that will allow people to converse with web pages. In a video presentation, he also expressed confidence that a chatbot interface will eventually supplant a traditional browser’s URL bar.

    “Tabs were great, but we haven’t seen a lot of browser innovation since then,” he said.

    What the Atlas browser can do

    Like other search engines, Atlas has a home page with a search bar where people can ask questions, similar to Google’s landing page. Users can also toggle through different tabs across the top of the browser to find news stories, images and other content. 

But a few features set the browser apart, according to OpenAI. One is a ChatGPT sidebar, which users can activate by clicking an “Ask ChatGPT” button in the upper right-hand corner of the browser.

    “It’s basically you inviting ChatGPT into your corner of the internet,” said Ryan O’Rouke, the lead designer for Atlas, in OpenAI’s video unveiling the browser. 

    The technology functions like ChatGPT but takes into account what web page people are on. In practice, that means users can ask questions about whatever content they are looking at. Users can also call on the ChatGPT function while drafting emails. In the demo, O’Rouke shows how he uses it to ask for edits on an email. 

    “It’s using the internet for you,” Altman said.

Atlas also has an “agent mode” that can take action on a person’s behalf, armed with what it has learned from users’ browsing history and what they are searching for. The agent can also help people complete a range of tasks, such as booking a flight, editing a document or ordering groceries. For now, agent mode is only available for Plus and Pro users, according to Altman.

    OpenAI has said ChatGPT has more than 800 million users, although the San Francisco-based company has yet to turn a profit. Google’s Chrome browser has roughly 3 billion worldwide users and has been adding some AI features drawing on the company’s Gemini AI technology.

  • OpenAI Launches the AI Browser War

    ChatGPT has broken out of the chatbot. On Tuesday, OpenAI announced that it is launching a web browser called ChatGPT Atlas, which it says will reimagine the browsing experience from the ground up, now built around a chat-based experience for what the company called the “next era of the web.”

    During a demonstration, OpenAI’s Engineering Lead for Atlas, Ben Goodger, explained that Atlas is the company’s answer to the question, “What if you could chat with your browser?” While there are lots of familiar web browser elements to Atlas, including tabs, bookmarks, and autofill for passwords, the company has made ChatGPT central to the experience rather than an “old browser, just with a chatbot that was bolted on.” That starts at the home screen, where the standard search bar now serves as a composer bar to communicate with ChatGPT.

Users can enter conversational prompts to have ChatGPT find certain webpages, perform a standard web search, or go directly to a website or bookmark. In the demo, Atlas Lead Designer Ryan O’Rouke explained that users should be able to use “human language” to search both the web and their browser history (OpenAI calls this “memories”) to find webpages, documents, and information through contextual cues. For instance, the company showed how it could find a Google Doc without knowing the URL or exact document name.

    Search results in Atlas are displayed on a homepage that curates a variety of information from the web based on the user’s prompt. Users can also tab between more traditional search results, including a Google Search-like list of links, images, videos, or news stories.

    The primary appeal of Atlas is that a user will be able to pull up ChatGPT at any time while browsing the web and use the chatbot to interact with the page they are on. OpenAI CEO Sam Altman described it during the demo as chatting with a webpage. The chatbot can be summoned via a button in the upper right-hand corner of the screen on desktop and will appear as a sidebar. Once opened, a user can ask it to summarize information on the page, ask page-specific questions and have the chatbot pull the answer directly from the site the user is looking at, and even interact with the page for them.

    That final feature is where ChatGPT’s Agent comes in. OpenAI has been touting its new Agent feature for months now, including introducing an Agent toolkit during its recent DevDay event to give developers the ability to build their own AI agents. But this Agent will be built into the browser, activated on the lower part of the ChatGPT sidebar, and can perform tasks on behalf of the user. In a demo of the feature, OpenAI’s Will Ellsworth, Research Lead on the Atlas Agent, asked the agent to purchase the ingredients needed for a recipe. Once prompted, the Agent navigated to Instacart and bought the relevant ingredients.

    According to the company, Agent will have access to user credentials so it can perform tasks on behalf of the user, though there will be prompts that will require the user to approve certain actions. Users can watch the task be completed by the Agent in real time with the cursor visibly moving on the page, or can let it run in the background. If the user needs to intervene, they can take back control at any time. Ellsworth described Agent as a tool for enabling “vibe lifing” and suggested users could delegate “all kinds of tasks, both in your personal and professional life, to the Agent in Atlas.”

    Atlas will be available immediately for macOS, with plans to bring the browser to Windows, iOS, and Android “soon.” While it seems the browser will be available for all ChatGPT users, Agent will be paywalled, only available for Plus subscribers paying $20 per month or Pro users paying $200 per month.

Earlier this year, Google did its best to preempt this inevitability. The company announced an AI overhaul of its Chrome browser, which currently holds more than 70% of the total browser market share, including integrating its Gemini chatbot throughout the browser to do things like summarize web pages and do contextual search within a page. The company also floated that it will eventually include an AI agent capable of navigating the web and completing tasks on behalf of the user, though that feature is currently not available. Perplexity also has an AI-first browser called Comet, while companies like Opera, Microsoft, and The Browser Company have all integrated AI features into their respective browsers.

    AJ Dellinger

  • Move Over Chrome: OpenAI Launches Atlas Browser

    OpenAI said Tuesday it is introducing its own web browser, Atlas, putting the ChatGPT maker in direct competition with Google as more internet users rely on artificial intelligence to answer their questions.

    Making itself a gateway to online searches could allow OpenAI, the world’s most valuable startup, to pull in more internet traffic and the revenue that comes from digital advertising.

    OpenAI has said ChatGPT already has more than 800 million users but many of them get it for free. The San Francisco-based company is losing more money than it makes and has been looking for ways to turn a profit.

    OpenAI said Atlas launches Tuesday on Apple laptops running macOS and will later come to Microsoft’s Windows, Apple’s iOS phone operating system and Google’s Android phone system.

OpenAI’s browser is coming out just a few months after one of its executives testified that the company would be interested in buying Google’s industry-leading Chrome browser if a federal judge required it to be sold, a remedy sought after Google’s ubiquitous search engine was declared an illegal monopoly.

    But U.S. District Judge Amit Mehta last month issued a decision that rejected the Chrome sale sought by the U.S. Justice Department in the monopoly case, partly because he believed advances in the AI industry already are reshaping the competitive landscape.

    OpenAI’s browser will face a daunting challenge against Chrome, which has amassed about 3 billion worldwide users and has been adding some AI features from Google’s Gemini technology.

    Chrome’s immense success could provide a blueprint for OpenAI as it enters the browser market. When Google released Chrome in 2008, Microsoft’s Internet Explorer was so dominant that few observers believed a new browser could mount a formidable threat.

    But Chrome quickly won over legions of admirers by loading webpages more quickly than Internet Explorer while offering other advantages that enabled it to upend the market. Microsoft ended up abandoning Explorer and introducing its Edge browser, which operates similarly to Chrome.

Perplexity, a smaller AI startup, rolled out its own Comet browser earlier this year. It also expressed interest in buying Chrome and eventually submitted an unsolicited $34.5 billion offer for the browser, which hit a dead end when Mehta decided against a Google breakup.

Copyright 2025. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

  • Bryan Cranston Was Bothered by Sora 2, But Now He’s Praising OpenAI

    If you were following the Sora 2 news closely when the limited public release of OpenAI’s new video generator started on September 30, you may have noticed some unsettling videos featuring the likeness and voice of iconic TV actor Bryan Cranston—typically in character as Breaking Bad protagonist Walter White. Cranston evidently saw those too, and he found them so unsettling he reportedly contacted his union, SAG-AFTRA, about it.

    But good news: OpenAI has apparently addressed Cranston’s misgivings, and he’s praising the company publicly now.

    In a statement released Monday by SAG-AFTRA (via Deadline), Cranston stated that initially he was “deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way.”

    To be specific, he might have been concerned about this video set in a strip mall parking lot in which Cranston (appearing as Walter White) and deceased pop musician Michael Jackson announce to Jackson’s vlog viewers that they’ve been hanging out.

    Perhaps he also saw this more elaborate work of fan fiction in which Cranston and the rest of the core Breaking Bad cast are in what appears to be the Vietnam War:

On October 8, Cranston’s agency, Creative Artists Agency (CAA), released an indignant statement about Sora 2, asking in part:

    The question is, does OpenAI and its partner companies believe that humans, writers, artists, actors, directors, producers, musicians, and athletes deserve to be compensated and credited for the work they create? Or does OpenAI believe they can just steal it, disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work?

By Monday, however, Cranston had seen something he liked and was no longer upset. He announced that he was “grateful to OpenAI for its policy and for improving its guardrails.”

    Additionally, Deadline says SAG-AFTRA, OpenAI, the Association of Talent Agents, United Talent Agency, and Creative Artists Agency all released a related joint statement including the following: “While from the start it was OpenAI’s policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations. OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt-in.”

On October 3, days before CAA’s angry statement about Sora 2, OpenAI CEO Sam Altman had painted a slightly different picture of OpenAI’s copyright policy. He wrote in a blog post that, in light of how the product was being used, OpenAI “will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls,” and added, “We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.”

    We asked OpenAI to clarify the timeline of the Sora 2 copyright policy, and will update if we hear back.

    Altman wrote in that same post that OpenAI is “going to have to somehow make money for video generation.”

    Mike Pearl

  • DHS Asks OpenAI to Unmask User Behind ChatGPT Prompts, Possibly the First Such Case

The federal government has long leaned on tech companies to fork over user data to aid in its law enforcement investigations. However, while social media companies, search engines, and other tech platforms have all surrendered data in the pursuit of federal probes, AI companies have largely remained an untouched frontier, legally speaking—until now, that is.

    Forbes writes that a unit within the Department of Homeland Security that investigates child sex crimes has asked OpenAI to turn over information about a user that they say is the administrator of a child abuse website. The person in question discussed their use of ChatGPT with an undercover agent on the child abuse site, which spurred the government to ask the company for records that might assist with their case.

    Forbes refers to this as the “first known federal search warrant asking OpenAI for user data” and says it discovered the case by reviewing court records unsealed in Maine last week.

    The prompts that the user entered into ChatGPT seem to be completely disconnected from the crimes they’re accused of committing. Forbes writes that, among other things, they involved a question about Star Trek and an AI-generated poem composed in “Trump-style”:

    The suspect then disclosed some prompts and responses they had received, detailing an apparently innocuous discussion that began with, “What would happen if Sherlock Holmes met Q from Star Trek?” In another discussion, the suspect said they’d received a response from ChatGPT for an unspecified request about a 200,000-word poem, receiving in response “a sample excerpt of a humorous, Trump-style poem about his love for the Village People’s Y.M.C.A., written in that over-the-top, self-aggrandizing, stream-of-consciousness style he’s known for.” They then copied and pasted that poem.

    Forbes also notes that the DHS has not asked OpenAI for any identifying information, as the government already believes it has identified the criminal in question. According to the criminal complaint against the suspect, undercover agents used context clues pieced together from ongoing conversations with the user to put together a profile on who he might be. Those context clues included comments he allegedly made while speaking with the undercover agent, including his desire to join the military, the places he’d lived (and visited), a favorite restaurant, and his work for a military base, among other things. Those clues led investigators to believe that he was a 36-year-old man who had previously worked on a U.S. Air Force base in Germany, Forbes notes.

    The search warrant that is the basis for much of Forbes’ reporting appears to have since been sealed. However, the criminal complaint against the suspect is still public. An excerpt of that complaint reads, partially: “In several conversations occurring between SUSPECT USER and the UC [undercover agent] in July 2025 and August 2025, SUSPECT USER indicated that he was too overweight to be considered for employment by the military. Agents were informed by the military recruiters that when” the suspect in question “first came for an initial interview it was approximately June or July 2025,” and he “was over the acceptable weight for an individual of his height. Subsequent more recent conversations between SUSPECT USER and the UC indicated that SUSPECT USER had made progress on that front, and military recruiters likewise indicated to agents” that the suspect “was now within military guidelines.”

    Gizmodo reached out to the suspect’s attorney, and to OpenAI, for comment.

Federal law enforcement has routinely gathered data for investigations from other tech platforms, and AI companies are giant troves of user information, so it makes perfect sense that law enforcement agencies would also see them as an important tool when it comes to fighting crime. This is surely just the beginning of AI chatbots’ use in that capacity.

    Lucas Ropek

  • Meta’s Bold Strategy to Beat OpenAI Starts With These 8 AI Innovators

    OpenAI might be the center of the AI development world these days, but the competition has been heating up for quite a while. And few competitors are bankrolled on the same level as Meta. With a market capitalization of more than $1.75 trillion and a CEO who’s not afraid to spend heavily, Meta has been on a hiring spree in the AI world for months, poaching top tier talent from a variety of competitors.

    It appeared recently that the wave of high-profile (and high-dollar) recruitments was coming to an end. In August, Meta quietly announced a freeze on hiring after adding roughly 50 AI researchers and engineers. This month, though, two more big names have joined the Meta roster.

    While Meta might have a gap to close with its AI rivals, the company has assembled an all-star team to catch up and move forward. Here are some of the most notable experts to come on board.

    Andrew Tulloch, co-founder of Thinking Machines Lab

Tulloch partnered with OpenAI’s former chief technology officer Mira Murati to launch Thinking Machines Lab in February of this year. Now he’s returning to his roots. Considered a leading researcher in the AI field, Tulloch previously spent 11 years at Meta, leaving in 2023 to join OpenAI, then departing with Murati. Meta founder Mark Zuckerberg has been chasing Tulloch for a while, reportedly making an offer with a $1.5 billion compensation package at one point, which Tulloch rejected. (Meta has called the description of the offer “inaccurate and ridiculous.”) There’s no word on what Tulloch was offered that made him decide to move.

    Ke Yang, Senior Director of Machine Learning at Apple

Yang, who was appointed to lead Apple’s AI-driven web search effort just weeks ago, is another big October Meta hire. At Apple, his team (Answers, Knowledge and Information, or AKI) was working to make Siri more ChatGPT-like by pulling information from the web, making his departure one of Meta’s most notable poachings. Meta convinced him to come over after recruiting several of his colleagues.

    Shengjia Zhao, co-creator of OpenAI’s ChatGPT

    Zhao joined Meta in June to serve as chief scientist of Meta Superintelligence Labs. Beyond co-creating ChatGPT, he also played a role in building GPT-4 and led synthetic data at OpenAI for a stint. “Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field,” Zuckerberg wrote in a social media post in July. “I’m looking forward to working closely with him to advance his scientific vision.”

    Daniel Gross, co-founder of Safe Superintelligence

    As it did with Murati’s Thinking Machines Lab, Meta tried to acquire Safe Superintelligence, the AI startup co-founded by OpenAI’s former chief scientist, Ilya Sutskever. When that offer was rejected, Zuckerberg began looking for talent, luring co-founder and CEO Gross in June. Gross is working on AI products for Meta’s superintelligence group. By joining Meta, he’s reunited with former GitHub CEO Nat Friedman, with whom he once created the venture fund NFDG.

    Ruoming Pang, Apple’s head of AI models

    Pang was one of the first high-profile departures from Apple to Meta, making the jump in July. At the time, he was Apple’s top executive overseeing AI models and had been with the company since 2021. While there, he helped develop the large language model that powers Apple Intelligence and other AI features, such as email and webpage summaries.

    Matt Deitke, co-founder of Vercept

    Vercept is a start-up that’s attempting to build AI agents that use other software to autonomously perform tasks, something that caught Zuckerberg’s attention. Deitke proved hard to lure, though. He reportedly turned down a $125 million, four-year offer, but a direct appeal by Zuckerberg (and a reported doubling of that offer) convinced him to make the move (with the blessing of his peers). Kiana Ehsani, his co-founder and CEO, announced his departure on social media, joking, “We look forward to joining Matt on his private island next year.”

    Alexandr Wang, founder and CEO of Scale AI

    Wang left his startup to join Meta after the social media company made a $14.3 billion investment into Scale AI (without any voting power in the company). “As you’ve probably gathered from recent news, opportunities of this magnitude often come at a cost,” Wang wrote in a memo to staff. “In this instance, that cost is my departure.” Wang joined Meta’s superintelligence unit. Scale made its name by helping companies like OpenAI, Google and Microsoft prepare data used to train AI models. Meta was already one of its biggest customers.

    Nat Friedman, former CEO of GitHub

    Friedman was already a part of Meta’s Advisory Group before he was brought on full-time. That external advisory council provides guidance on technology and product development. Now, he’s working with Wang to run the superintelligence unit. Friedman previously was CEO of GitHub, a cloud-based platform that hosts code for software development. Most recently, he was a board member at the AI investment firm he started with Safe Superintelligence’s Gross.

As for what Zuck is going to do with all this talent, the sky’s the limit, but there’s some catching up to do first. Meta’s Llama large language models haven’t quite matched those of OpenAI or Google, but with Meta’s gargantuan user base (3.4 billion people use one of the company’s apps each day), Meta’s AI could still be one of the most widely used in the years to come.

    Chris Morris

  • OpenAI’s ‘embarrassing’ math | TechCrunch

    “Hoisted by their own GPTards.”

    That’s how Meta’s Chief AI Scientist Yann LeCun described the blowback after OpenAI researchers did a victory lap over GPT-5’s supposed math breakthroughs.

    Google DeepMind CEO Demis Hassabis added, “this is embarrassing.”

    The Decoder reports that in a since-deleted tweet, OpenAI VP Kevin Weil declared that “GPT-5 found solutions to 10 (!) previously unsolved Erdős problems and made progress on 11 others.” (“Erdős problems” are famous conjectures posed by mathematician Paul Erdős.)

However, mathematician Thomas Bloom, who maintains the Erdős Problems website, said Weil’s post was “a dramatic misrepresentation” — while these problems were indeed listed as “open” on Bloom’s website, he said that only means, “I personally am unaware of a paper which solves it.”

    In other words, it’s not accurate to claim GPT-5 was able to solve previously unsolved problems. Instead, Bloom wrote, “GPT-5 found references, which solved these problems, that I personally was unaware of.”

    Sebastien Bubeck, an OpenAI researcher who’d also been touting GPT-5’s accomplishments, then acknowledged that “only solutions in the literature were found,” but he suggested this remains a real accomplishment: “I know how hard it is to search the literature.”

    Anthony Ha

  • AI startups are leasing luxury apartments in San Francisco for staff and offering large rent stipends to attract talent  | Fortune

    The AI boom is bringing a wave of startups to San Francisco, and employees are receiving generous benefits in one of the country’s priciest housing markets. 

Roy Lee, CEO of AI tech startup Cluely, which makes software for job interviews and work calls, told The New York Times that he leased eight apartments for employees in a recently built luxury complex situated just a one-minute walk from the office. The rents in the 16-story building range from $3,000 to $12,000 a month.

    “Going to the office should feel like you’re walking to your living room, so we really, really want people close,” Lee told The Times on Thursday.

    Flo Crivello, CEO of Lindy, another AI startup, said he offers his approximately 40 employees a $1,000 rent stipend every month if they live within a 10-minute walk of the company’s office.

    “People are so much happier and healthier when they live close to work,” he told The Times. “This makes them stick around for longer, perform better and work longer hours.”

    The AI boom has drawn a flood of money and talent to San Francisco, inflating rent in the process. The Bay Area has attracted 70% of AI venture capital funding nationwide since 2019, according to data from Pitchbook. 

    Across the U.S. and Canada, the pool of tech workers with AI skills jumped more than 50% to 517,000 from mid-2024 to mid-2025, according to a September CBRE report. The San Francisco Bay Area, New York metro and Seattle are the top U.S. markets for AI-specialty talent, accounting for 35% of the national total, the report said.

    Meanwhile, fully remote working arrangements for open positions have declined, and more employers are adopting hybrid arrangements requiring tech talent to spend three or more days in the office. In San Francisco alone, 1 out of every 4 square feet of office space was leased by an AI company over the last two and a half years, according to CBRE.

Tightness in the office market is mirrored in the residential sector. Over the past year, apartment prices in San Francisco rose 6% on average, more than twice the 2.5% increase in New York City and the highest rate in the nation, according to real estate tracker CoStar data cited by The Times. In hot spots like Mission Bay, near OpenAI’s headquarters, rents have recently climbed 13%.

    Average rent for a San Francisco apartment is now $3,315 a month, just below New York City’s, the nation’s highest at $3,360.

    A September report from real estate tech company Zumper said San Francisco’s housing market bucked the national trend of flat or falling prices and instead saw the strongest annual growth across the country for two-bedroom rent, which surged 17.1%. One-bedroom rent climbed 10.7%, the third-highest increase in the nation, the report said.

    The report points to a “perfect storm” of tech-sector hiring and stricter return-to-office mandates driving more renters into the city as well as supply-chain constraints. The city’s vacancy rate has fallen back to pre-pandemic levels, and new housing construction is at its weakest pace in a decade, the report added.

    Will Goodman, a principal at Strada Investment Group, which developed the luxury complex where Cluely leased its eight apartments, told The Times that half of the 501 units in the complex were leased within two months of its May opening.

“Honestly, I’ve never seen anything like it before,” he said.

    Nino Paoli

  • Silicon Valley spooks the AI safety advocates | TechCrunch

Silicon Valley leaders including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear and are acting either in their own interest or on behalf of billionaire puppet masters behind the scenes.

    AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor as one of many “misrepresentations” about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.

    Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently scared several AI safety advocates. Many nonprofit leaders that TechCrunch reached out to in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.

    The controversy underscores Silicon Valley’s growing tension between building AI responsibly and building it to be a massive consumer product — a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X alleging that Anthropic — which has raised concerns over AI’s ability to contribute to unemployment, cyberattacks, and catastrophic harms to society — is simply fearmongering to get laws passed that will benefit itself and drown smaller startups in paperwork. Anthropic was the only major AI lab to endorse California’s Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.

Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark had delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I felt it was a genuine account of a technologist’s reservations about his products, but Sacks didn’t see it that way.

Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned “itself consistently as a foe of the Trump administration.”

    Also this week, OpenAI’s chief strategy officer, Jason Kwon, wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT-maker has veered away from its nonprofit mission — OpenAI found it suspicious how several organizations also raised opposition to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits spoke out publicly against OpenAI’s restructuring.

    “This raised transparency questions about who was funding them and whether there was any coordination,” said Kwon.

    NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

    One prominent AI safety leader told TechCrunch that there’s a growing split between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.

    OpenAI’s head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.

    “At what is possibly a risk to my whole career I will say: this doesn’t seem great,” said Achiam.

    Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this is not the case, and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.

    “On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” said Steinhauser. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”

    Sriram Krishnan, the White House’s senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “people in the real world using, selling, adopting AI in their homes and organizations.”

    A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it’s unclear what worries them exactly. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than catastrophic risks caused by AI, which the AI safety movement is largely focused on.

    Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a trade-off that worries many in Silicon Valley. With AI investment propping up much of America’s economy, the fear of over-regulation is understandable.

    But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that they’re working.

    [ad_2]

    Maxwell Zeff

    Source link

  • New adult ChatGPT version coming soon

    [ad_1]

    This week, OpenAI announced a policy change that will soon allow adult users of ChatGPT to access a less censored version of the chatbot that will include erotica. Ashley Gold, senior tech reporter at Axios, joins “The Takeout” to discuss the upcoming change.

    [ad_2]

    Source link

  • Your AI tools run on fracked gas and bulldozed Texas land | TechCrunch

    [ad_1]

    The AI era is giving fracking a second act, a surprising twist for an industry that, even during its early 2010s boom years, was blamed by climate advocates for poisoned water tables, man-made earthquakes, and the stubborn persistence of fossil fuels.

    AI companies are building massive data centers near major gas-production sites, often generating their own power by tapping directly into fossil fuels. It’s a trend that’s been overshadowed by headlines about the intersection of AI and healthcare (and solving climate change), but it’s one that could reshape — and raise difficult questions for — the communities that host these facilities.

    Take the latest example. This week, the Wall Street Journal reported that AI coding assistant startup Poolside is constructing a data center complex on more than 500 acres in West Texas — about 300 miles west of Dallas — a footprint two-thirds the size of Central Park. The facility will generate its own power by tapping natural gas from the Permian Basin, the nation’s most productive oil and gas field, where hydraulic fracturing isn’t just common but really the only game in town.

    The project, dubbed Horizon, will consume two gigawatts of power for computing. That’s equivalent to the Hoover Dam’s entire electric capacity, except instead of harnessing the Colorado River, it’s burning fracked gas. Poolside is developing the facility with CoreWeave, a cloud computing company that rents out access to Nvidia AI chips and that’s supplying access to more than 40,000 of them. The Journal calls it an “energy Wild West,” which seems apt.

    Yet Poolside is far from alone. Nearly all the major AI players are pursuing similar strategies. Last month, OpenAI CEO Sam Altman toured his company’s flagship Stargate data center in Abilene, Texas — around 200 miles from the Permian Basin — where he was candid, saying, “We’re burning gas to run this data center.”

    The complex requires about 900 megawatts of electricity across eight buildings and includes a new gas-fired power plant using turbines similar to those that power warships, according to the Associated Press. The companies say the plant provides only backup power, with most electricity coming from the local grid. That grid, for the record, draws from a mix of natural gas and the sprawling wind and solar farms in West Texas.

    But the people living near these projects aren’t exactly comforted. Arlene Mendler lives across the street from Stargate. She told the AP she wishes someone had asked her opinion before bulldozers eliminated a huge tract of mesquite shrubland to make room for what’s being built atop it.

    “It has completely changed the way we were living,” Mendler told the AP. She moved to the area 33 years ago seeking “peace, quiet, tranquility.” Now construction is the soundtrack in the background, and bright lights on the scene have spoiled her nighttime views.

    Then there’s the water. In drought-prone West Texas, locals are particularly nervous about how new data centers will impact the water supply. The city’s reservoirs were at roughly half-capacity during Altman’s visit, with residents on a twice-weekly outdoor watering schedule. Oracle claims each of the eight buildings will need just 12,000 gallons per year after an initial million-gallon fill for closed-loop cooling systems. But Shaolei Ren, a University of California, Riverside professor who studies AI’s environmental footprint, told the AP that’s misleading. These systems require more electricity, which means more indirect water consumption at the power plants generating that electricity.

    Meta is pursuing a similar strategy. In Richland Parish, the poorest region of Louisiana, the company plans to build a $10 billion data center the size of 1,700 football fields that will require two gigawatts of power for computation alone. Utility company Entergy will spend $3.2 billion to build three large natural-gas power plants with 2.3 gigawatts of capacity to feed the facility by burning gas extracted through fracking in the nearby Haynesville Shale. Louisiana residents, like those in Abilene, aren’t thrilled to be encircled by bulldozers around the clock.

    (Meta is also building in Texas, though elsewhere in the state. This week the company announced a $1.5 billion data center in El Paso, near the New Mexico border, with one gigawatt of capacity expected online in 2028. El Paso isn’t near the Permian Basin, and Meta says the facility will be matched with 100% clean and renewable energy. One point for Meta.)

    Even Elon Musk’s xAI, whose Memphis facility has generated considerable controversy this year, has fracking connections. Memphis Light, Gas and Water – which currently sells power to xAI but will eventually own the substations xAI is building – purchases natural gas on the spot market and pipes it to Memphis via two companies: Texas Gas Transmission Corp. and Trunkline Gas Company.

    Texas Gas Transmission is a bidirectional pipeline carrying natural gas from Gulf Coast supply areas and several major hydraulically fractured shale formations through Arkansas, Mississippi, Kentucky, and Tennessee. Trunkline Gas Company, the other Memphis supplier, also carries natural gas from fracked sources.

    If you’re wondering why AI companies are pursuing this path, they’ll tell you it’s not just about electricity; it’s also about beating China.

    That was the argument Chris Lehane made last week. Lehane, a veteran political operative who joined OpenAI as vice president of global affairs in 2024, laid out the case during an on-stage interview with TechCrunch.

    “We believe that in the not-too-distant future, at least in the U.S., and really around the world, we are going to need to be generating in the neighborhood of a gigawatt of energy a week,” Lehane said. He pointed to China’s massive energy buildout: 450 gigawatts and 33 nuclear facilities constructed in the last year alone.

    When TechCrunch asked about Stargate’s decision to build in economically challenged areas like Abilene, or Lordstown, Ohio, where more gas-powered plants are planned, Lehane returned to geopolitics. “If we [as a country] do this right, you have an opportunity to re-industrialize countries, bring manufacturing back and also transition our energy systems so that we do the modernization that needs to take place.”

    The Trump administration is certainly on board. The July 2025 executive order fast-tracks gas-powered AI data centers by streamlining environmental permits, offering financial incentives, and opening federal lands for projects using natural gas, coal, or nuclear power — while explicitly excluding renewables from support.

    For now, most AI users remain largely unaware of the carbon footprint behind their dazzling new toys and work tools. They’re more focused on capabilities like Sora 2 – OpenAI’s hyperrealistic video-generation product that requires far more energy than a simple chatbot – than on where the electricity comes from.

    The companies are counting on this. They’ve positioned natural gas as the pragmatic, inevitable answer to AI’s exploding power demands. But the speed and scale of this fossil fuel buildout deserves more attention than it’s getting.

    If this is a bubble, it won’t be pretty. The AI sector has become a circular firing squad of dependencies: OpenAI needs Microsoft needs Nvidia needs Broadcom needs Oracle needs data center operators who need OpenAI. They’re all buying from and selling to each other in a self-reinforcing loop. The Financial Times noted this week that if the foundation cracks, there’ll be a lot of expensive infrastructure left standing around, both the digital and the gas-burning kind.

    OpenAI’s ability alone to meet its obligations is “increasingly a concern for the wider economy,” the outlet wrote.

    One key question that’s been largely absent from the conversation is whether all this new capacity is even necessary. A Duke University study found that utilities typically use only 53% of their available capacity throughout the year. That suggests significant room to accommodate new demand without constructing new power plants, as MIT Technology Review reported earlier this year.

    The Duke researchers estimate that if data centers reduced electricity consumption by roughly half for just a few hours during annual peak demand periods, utilities could handle an additional 76 gigawatts of new load. That would effectively absorb the 65 gigawatts data centers are projected to need by 2029.

    That kind of flexibility would allow companies to launch AI data centers faster. More importantly, it could provide a reprieve from the rush to build natural gas infrastructure, giving utilities time to develop cleaner alternatives.

    But again, that would mean losing ground to an autocratic regime, per Lehane and many others in the industry. So instead, the natural gas building spree appears likely to saddle regions with more fossil-fuel plants and leave residents with soaring electricity bills to finance today’s investments, long after the tech companies’ contracts expire.

    Meta, for instance, has guaranteed it will cover Entergy’s costs for the new Louisiana generation for 15 years. Poolside’s lease with CoreWeave runs for 15 years. What happens to customers when those contracts end remains an open question.

    Things may eventually change. A lot of private money is being funneled into small modular reactors and solar installations with the expectation that these cleaner energy alternatives will become more central energy sources for these data centers. Fusion startups like Helion and Commonwealth Fusion Systems have similarly raised substantial funding from those on the front lines of AI, including Nvidia and Altman.

    This optimism isn’t confined to private investment circles. The excitement has spilled over into public markets, where several “non-revenue-generating” energy companies that have managed to go public have truly anticipatory market caps, based on the expectation that they will one day fuel these data centers.

    In the meantime — which could still be decades — the most pressing concern is that the people who’ll be left holding the bag, financially and environmentally, never asked for any of this in the first place.

    [ad_2]

    Connie Loizos

    Source link

  • Should AI do everything? OpenAI thinks so | TechCrunch

    [ad_1]

    Silicon Valley’s rule? It’s not cool to be cautious. As OpenAI removes guardrails and VCs criticize companies like Anthropic for supporting AI safety regulations, it’s becoming clearer who the industry thinks should shape AI development. 

    On this episode of Equity, Kirsten Korosec, Anthony Ha, and Max Zeff discuss how the line between innovation and responsibility is getting blurrier, plus what happens when pranks go from digital to physical. 

    Watch the full episode for more about: 

    • Why advocating for AI safety has become “uncool” in Silicon Valley, from Anthropic facing backlash to California’s SB 243 regulation of AI companion chatbots and the success of companies like Character.AI 
    • Which startups are using an SEC workaround to file for IPOs during the shutdown 

    Equity is TechCrunch’s flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.  

    Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. 

    [ad_2]

    Theresa Loconsolo

    Source link

  • Looks Like JD Vance Didn’t Get the Memo That This Admin Hates AI Guardrails

    [ad_1]

    Republicans have largely been embracing a “hands-off” approach to regulating artificial intelligence, but Vice President JD Vance has found where he draws the line: weird porn. During an appearance on Newsmax’s “The Record with Greta Van Susteren,” Vance called out OpenAI’s recent announcement that it would allow adult users to create erotica with ChatGPT as an example of “bad” uses of AI.

    “Artificial intelligence is still in many cases very dumb,” Vance said during the interview, spotted by The Daily Beast. “Is it good or is it bad, or is it going to help us or going to hurt us? The answer is probably both, and we should be trying to maximize as much of the good and minimize as much of the bad.”

    The VP went on to offer examples of what he sees as both sides of the spectrum. On the good: “finding new cures for diseases.” Reasonable enough. As for the “bad,” Vance name-checked OpenAI CEO Sam Altman to lay out where he thinks AI has gone too far. “I saw an announcement, I think it was from Sam Altman from OpenAI, who said basically, they’re going to start using AI to introduce erotica and porn and things like that,” Vance said. “If it’s helping us come up with increasingly weird porn, that’s bad.”

    Gizmodo reached out to OpenAI for a response to Vance’s comment but had not received one at the time of publication.

    To be fair to Vance here, his basic premise isn’t wrong—though no one said the porn had to be weird, he decided that part. Altman took a lot of heat over the erotica announcement, which he later tried to downplay as “just one example of us allowing more user freedom for adults,” but it’s clearly not a feature that offers anything resembling productivity or obvious human benefit. If anything, it presents even more risk for people getting emotionally or romantically attached to a chatbot in a way that is almost certainly unhealthy.

    But it’s also a departure from the guardrail-free approach that many Republicans have been pushing for. Politicians like Ted Cruz have actively been working to help AI firms avoid regulations, first by trying to block states from creating their own standards and more recently by proposing legislation that would provide AI firms with a waiver for federal regulations, allowing them to test new products without standard scrutiny or oversight. The Trump administration issued its AI Action Plan earlier this year, which specifically took aim at cutting any sort of regulatory red tape that may even slightly hinder AI development. And, of course, Elon Musk loves to brag about his disregard for guardrails when it comes to his personal chatbot, Grok. Back in August, Musk had become so obsessed with posting about Grok’s erotic chatbot characters that his own fans were begging him to “stop gooning to AI anime and take us to Mars.”

    For the rightwing tech crowd, the attitude is basically let the chatbot talk dirty or China will beat us in the race to AGI.

    But while Republicans may not want to regulate these companies, a large chunk of them do want to play the morality police. Basically, the only thing that raises their ire when it comes to AI is the invocation of anything sexual. AI producing misinformation, using an incredible amount of energy, being used to expand the surveillance state—none of that really raises red flags for these folks. But “sensual” chats and erotica? It’s time for the government to step in.

    [ad_2]

    AJ Dellinger

    Source link

  • Can AI Avoid the Enshittification Trap?

    [ad_1]

    I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant, which I hesitate to reveal here in case I want a table sometime in the future (Hell, who knows if I’ll even return: It is called Babette. Call ahead for reservations.) The answer was complex and impressive. Among the factors were rave reviews from locals, notices in food blogs and the Italian press, and the restaurant’s celebrated combination of Roman and contemporary cooking. Oh, and the short walk.

    Something was required from my end as well: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn’t shown to me as sponsored content and wasn’t getting a cut of my check. I could have done deep research on my own to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction.

    The experience bolstered my confidence in AI results but also made me wonder: As companies like OpenAI get more powerful, and as they try to pay back their investors, will AI be prone to the erosion of value that seems endemic to the tech apps we use today?

    Word Play

    Writer and tech critic Cory Doctorow calls that erosion “enshittification.” His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow’s pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark. Enshittification was chosen as the American Dialect Society’s 2023 Word of the Year. The concept has been cited so often that it transcends its profanity, appearing in venues that normally would hold their noses at such a word. Doctorow just published an eponymous book on the subject; the cover image is the emoji for … guess what.

    If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results getting plagued with ads, and even Facebook showing less social content in favor of anger-generating clickbait.

    AI is on a trajectory to be a constant companion, giving one-shot answers to many of our requests. People already rely on it to help interpret current events and get advice on all sorts of buying choices—and even life choices. Because of the massive costs of creating a full-blown AI model, it’s fair to assume that only a few companies will dominate the field. All of them plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I’d say AI is in what Doctorow calls the “good to the users” stage. But the pressure to make back the massive capital investments will be tremendous—especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”

    When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That’s not happening now, but AI firms are actively exploring the ad space. In a recent interview, OpenAI CEO Sam Altman said, “I believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart so the retailer’s customers can shop inside the ChatGPT app. Can’t imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly labeled follow-ups. But, it promises, “these ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”

    [ad_2]

    Steven Levy

    Source link