ReportWire

Tag: openai

  • OpenAI’s first device with Jony Ive could be delayed due to ‘technical issues’


    OpenAI and Jony Ive could still have some serious loose ends to tie up before releasing their highly anticipated AI device. According to a Financial Times report, the partnership is still struggling with some “technical issues” that could ultimately end up pushing back the device’s release date, which is expected to be sometime in 2026.

One of those lingering dilemmas involves figuring out the AI assistant’s voice and mannerisms, according to the FT’s sources. The AI device is meant to be “a friend who’s a computer who isn’t your weird AI girlfriend,” according to an FT source who was briefed on the plans. Beyond landing on a personality, OpenAI and Ive are still working through potential privacy concerns stemming from a device that’s always listening. On top of that, costs could reportedly be a challenge due to the increased computing power needed to run these mass-produced AI devices.

    Outside these latest struggles, we still know very little about the upcoming product. Sam Altman, OpenAI’s CEO, reportedly offered some clues to employees that it could be pocket-sized, aware of its environment and sans display. There are still plenty of questions about what OpenAI’s first hardware project will amount to, but the company could be exercising more caution since similar devices, like the Humane AI Pin, were discontinued after failing to deliver on sales.


    Jackson Chen


  • Where the ‘PayPal Mafia’ Is Today: Founders, Fortunes and Feuds


    Peter Thiel, PayPal’s first CEO, turned his fintech fortune into a far-reaching empire of influence spanning venture capital, politics and power. Marco Bello/Getty Images

    In 2007, Fortune magazine reimagined a classic mafia scene with a Silicon Valley twist: 13 male founders and early employees of PayPal, all long gone from the company, posed at a San Francisco café with slicked-back hair, poker chips and dozens of whiskey glasses. The crowd included some of the most recognizable names in today’s tech scene, like Elon Musk, Peter Thiel and Reid Hoffman. The magazine dubbed them the “PayPal mafia,” not for their time at the fintech company, but for their outsized impact on Silicon Valley through the companies they launched afterward.

    PayPal went public in early 2002 and was acquired by eBay for $1.5 billion the same year. Most of its early employees left the company after the acquisition. They went on to found YouTube, SpaceX and LinkedIn, among other legendary names in Silicon Valley. However, like their cinematic namesake, the group hasn’t avoided controversy. These former colleagues have built billion-dollar businesses while also finding themselves in the crosshairs of public criticism.

    For instance, Thiel has faced controversy over his political affiliations and, most notably, for funding Hulk Hogan’s 2012 lawsuit against Gawker Media with $10 million — a case that ultimately drove the online media company into bankruptcy. Musk has also faced criticism for his takeover of Twitter and his prior role in the Trump administration, where he led widespread federal employee firings.

    Here’s what they are up to these days:

    Peter Thiel: venture capitalist 

Peter Thiel speaking at the 2022 Bitcoin Conference
    Peter Thiel. Marco Bello/Getty Images

    Peter Thiel, Max Levchin and Luke Nosek founded PayPal in 1998, originally as a software security company. After merging with Elon Musk’s X.com (unrelated to the social media platform he owns today), PayPal shifted its focus to digital payments.

    Thiel served as CEO from 1998 until 2002, leaving after the company was sold to eBay. He then co-founded Palantir Technologies, a major U.S. government contractor providing data analytics services. The company now has a market capitalization of $439 billion.

    Thiel is also known as a prolific angel investor. He co-founded Clarium Capital, Founders Fund, Valar Ventures and Mithril Capital. In 2004, Thiel became Facebook’s first outside investor after acquiring a 10.2 percent stake in the company for $500,000.

Thiel is among the many former PayPal employees who have entered political and high-profile public arenas. An active donor to the Republican Party, Thiel supported Donald Trump’s 2016 presidential campaign but withheld donations during the 2024 election. He is also credited with helping JD Vance land the vice presidential slot on the Republican ticket.

    Elon Musk: entrepreneur, the world’s richest person

Elon Musk gesturing at a press conference in the Oval Office of the White House in May 2025.
    Elon Musk. Kevin Dietsch/Getty Images

    Elon Musk briefly served as PayPal’s CEO before being ousted by the board in 2000. He went on to build one of the most influential portfolios in technology, spanning electric vehicles, space exploration, social media and A.I.

    Musk founded SpaceX in 2002 and has led Tesla since 2008. He also founded Neuralink and The Boring Company, expanding his reach into brain-computer interfaces and infrastructure. In 2022, Musk gained global attention for acquiring Twitter for $44 billion, later rebranding it as X.

    His ties to A.I. run deep: Musk co-founded OpenAI with Sam Altman in 2015 but left in 2018 over strategic disagreements. In 2023, he returned to the field by launching xAI, a research venture focused on building A.I. that is more understandable for humans.

Today, Musk is the richest person in the world, with an estimated net worth of $400 billion. He is also perhaps the only PayPal alumnus to ascend into direct political influence. During the Trump administration, he led the Department of Government Efficiency (DOGE), whose acronym nods to the Dogecoin cryptocurrency he has long promoted, before stepping down in May after clashing publicly with the President.

    Max Levchin: computer scientist 

Max Levchin speaking at a FOX Network show in 2019.
    Max Levchin. John Lamparski/Getty Images
    • Position at PayPal: co-founder, chief technology officer from 1998 to 2002
    • Companies later founded: Affirm
    • Net worth: $1.8 billion

    As PayPal’s chief technology officer, Max Levchin helped lead the company’s anti-fraud efforts by co-creating the Gausebeck-Levchin test—the foundation for the widely used CAPTCHA security tool. After leaving PayPal, he launched the media-sharing platform Slide in 2004, which was acquired by Google in 2010. Levchin briefly served as Google’s vice president of engineering until Slide was shut down the following year.

    In 2012, he co-founded Affirm, a leading “buy now, pay later” (BNPL) company, where he continues to serve as CEO. Today, Affirm has a market capitalization of $27.5 billion, with 21.9 million consumers and more than 350,000 merchant partners on its platform.

    Levchin has also held board positions at Yahoo and Yelp. In 2015, he became the first Silicon Valley executive appointed to the U.S. Consumer Financial Protection Bureau’s advisory board, emphasizing the importance of collaboration between companies and regulators.

    Reid Hoffman: entrepreneur, investor

Reid Hoffman speaking at an event for WIRED's 30th anniversary.
    Reid Hoffman. Kimberly White/Getty Images for WIRED
    • Position at PayPal: chief operating officer
    • Companies later founded: LinkedIn, Greylock Partners
    • Net worth: $2.5 billion

    Before joining PayPal, Hoffman worked as a senior user experience architect at Apple, contributing to the company’s online social network eWorld. He later became director of product management at Fujitsu. After his online dating startup, SocialNet, folded, Hoffman joined PayPal in 2000 as chief operating officer.

In 2003, he co-founded the career networking site LinkedIn. Following Microsoft’s $26.2 billion acquisition of LinkedIn in 2016, Hoffman joined Microsoft’s board, a move that greatly increased his wealth.

    Over the years, Hoffman has served on the boards of Airbnb and OpenAI, where he was also an early investor. Through the venture capital firm Greylock Partners, he has backed dozens of A.I. startups. In 2022, he co-founded Inflection AI with Mustafa Suleyman, who now serves as CEO. Earlier this year, he teamed up with cancer researcher Siddhartha Mukherjee to launch Manas AI, a startup focused on drug discovery.

    David Sacks: investor, White House A.I. and Crypto Czar

David Sacks being photographed on a red carpet in Los Angeles.
    David Sacks currently serves as the White House A.I. and Crypto Czar. JC Olivera/Variety via Getty Images
    • Position at PayPal: chief operating officer from 1999 to 2002
    • Companies later founded: Craft Ventures
    • Net worth: $200 million

    Since leaving PayPal, David Sacks has built a career spanning film, tech, investing and politics. In 2005, he produced and financed a political satire that earned two Golden Globe nominations. The following year, he founded Geni.com, a genealogy-focused social network that later spun off Yammer, one of the earliest enterprise social networking platforms. He went on to co-found Craft Ventures, the startup Glue, and the podcast platform Callin.

    Today, Sacks serves as the White House’s Special Advisor for A.I. and Crypto, a role created by the Trump administration to guide policy on artificial intelligence and cryptocurrency.

    Jeremy Stoppelman: engineer, Yelp CEO 

    • Position at PayPal: vice president of engineering
    • Companies later founded: Yelp
    • Net worth: $100 million

Jeremy Stoppelman joined Musk’s X.com in 1999 and became vice president of engineering after its transition to PayPal. In 2004, he co-founded Yelp, where he has served as CEO ever since. Under his leadership, the company turned down a 2010 acquisition offer from Google and went public two years later.

    Ken Howery: investor, U.S. ambassador

    • Position at PayPal: chief financial officer from 1998 to 2002
    • Companies later founded: Founders Fund
    • Net worth: estimated $1.5 billion

    Ken Howery served as PayPal’s chief financial officer from 1998 to 2002. After PayPal’s sale to eBay, he became eBay’s director of corporate development until 2003. He later joined Peter Thiel at Clarium Capital as vice president of private equity and went on to co-found Founders Fund as a partner. Beyond investing, he is a member of the Explorers Club, a nonprofit dedicated to scientific exploration, and an advisor to Kiva, the micro-lending nonprofit founded by former PayPal colleague Premal Shah.

    Howery is also among the former PayPal executives who have moved into politics. He has donated at least $1 million to Donald Trump’s campaign through Elon Musk’s political action committee. During Trump’s first term, Howery was appointed U.S. ambassador to Sweden and today serves as the U.S. ambassador to Denmark.

Roelof Botha: venture capitalist

    Roelof Botha joined PayPal as director of corporate development shortly before graduating from Stanford University. He later became vice president of finance and went on to serve as chief financial officer until the company’s acquisition by eBay.

    After leaving PayPal, Botha joined Sequoia Capital, where he oversaw investments in YouTube and Instagram. He currently sits on the boards of MongoDB, Evernote, Bird, Natera, Square, Unity and Xoom.

    Russel Simmons: entrepreneur 

    • Position at PayPal: software architect from 1998 to 2003
    • Companies later founded: Yelp, Learnirvana

    Russel Simmons helped design PayPal’s payment system as a software architect. After leaving the company, he and fellow PayPal alum Jeremy Stoppelman set out to build a platform for restaurant reviews. With a $1 million investment from Max Levchin, they launched Yelp in July 2004. Simmons served as chief technology officer until his departure in 2010. At the time, Yelp said he would remain a “significant” shareholder, though the size of his stake—and whether he still holds it—remains unclear.

    In 2014, Simmons co-founded Learnirvana, an online learning platform.

    Andrew McCormack: entrepreneur

    • Position at PayPal: assistant to Thiel from July 2001 to November 2002
    • Companies later founded: Valar Ventures

    Andrew McCormack began his career as an assistant to Peter Thiel at PayPal and followed him into subsequent ventures. From November 2002 to April 2003, he oversaw operations at Thiel’s hedge fund, Clarium Capital.

    In 2010, McCormack co-founded Valar Ventures with Thiel and James Fitzgerald, focusing on fintech investments. He remains a general partner at the firm.

    Luke Nosek: investor 

    • Position at PayPal: co-founder and vice president of marketing and strategy from 1998 to 2002
    • Companies later founded: Founders Fund, Gigafund

    In 2005, Luke Nosek joined Peter Thiel and Ken Howery to launch Founders Fund, a San Francisco–based venture capital firm that has backed companies such as Airbnb, Lyft and SpaceX. While his exact net worth is unclear, Nosek has made substantial investments through his venture firms. At Founders Fund, he led one of the firm’s earliest major deals with a $20 million investment in SpaceX, later serving on its board.

    In 2017, Nosek left to co-found Gigafund, which went on to invest $1 billion in SpaceX, according to the company. He also sits on the board of ResearchGate.

    Premal Shah: entrepreneur 

• Position at PayPal: product manager
    • Companies later founded: Kiva

    Three years after leaving PayPal, Premal Shah co-founded Kiva, a nonprofit that provides loans to entrepreneurs in underserved communities worldwide. He also serves on the boards of other nonprofits, including the Center for Humane Technology, the Change.org Foundation, Watsi and VolunteerMatch.

    Keith Rabois: investor

    • Position at PayPal: executive vice president of business development

    After leaving his executive role at PayPal, Keith Rabois became an active investor, backing companies including Slide, YouTube and Palantir. He also invested in LinkedIn, where he served as vice president of business and corporate development, and Square, where he was chief operating officer.

    Rabois joined venture capital firm Khosla Ventures from 2013 to 2019 and was a partner at Founders Fund from 2019 to 2024.



    Irza Waraich


  • OpenAI acquires an AI-powered personal investing app


    Just a day after dethroning SpaceX as the most valuable private company in the world, OpenAI has acquired another startup. This time, the AI giant acquired Roi, an app that offers a one-stop shop for all your financial portfolios and an AI chatbot that provides personalized investing advice. Details of the acquisition weren’t made public, but TechCrunch reported that Sujith Vishwajith, the startup’s CEO and co-founder, will be the only one joining OpenAI’s team.

    It might come as a surprise for OpenAI to venture into the personal finance space, but this latest acquisition offers some hints at what the company could have in store for the future. OpenAI could be leaning into an AI chatbot that provides more than just responses to general queries and offers more personalization as a “proactive assistant,” as detailed in its blog post introducing Pulse.

    OpenAI is also no stranger to acquiring smaller companies that offer something that could advance ChatGPT. In May, the company acquired io, an AI hardware startup cofounded by former Apple designer Jony Ive, for $6.5 billion. OpenAI followed up that major purchase by spending another $1.1 billion to acquire Statsig, a startup that focused on product testing, in September.


    Jackson Chen


• With its latest acqui-hire, OpenAI is doubling down on personalized consumer AI | TechCrunch


    OpenAI has acquired Roi, an AI-powered personal finance app. In keeping with a recent trend in the AI industry, only the CEO is making the jump.  

    Chief executive and co-founder Sujith Vishwajith announced the acquisition on Friday, and a source familiar with the matter told TechCrunch he is the only one of Roi’s four-person staff to join OpenAI. Terms of the deal were not disclosed. The company will wind down operations and end its service to customers on October 15. 

    The Roi deal marks the latest in a string of acqui-hires from OpenAI this year, including Context.ai, Crossing Minds, and Alex.

    While it’s not clear whether any of Roi’s technology will transfer over to OpenAI or which unit Vishwajith will join, the acquisition clearly aligns with OpenAI’s bet on personalization and life management as the next layer of AI products. Roi brings a specialized team that has already tried to solve personalization in finance at scale — a challenge whose lessons can be applied more broadly.   

New York-based Roi was founded in 2022 and has raised $3.6 million in early-stage funding from investors like Balaji Srinivasan, Spark Capital, Gradient Ventures, and Spacecadet Ventures, according to PitchBook data. Its mission was to aggregate a user’s financial footprint, including stocks, crypto, DeFi, real estate, and NFTs, into one app that can track funds, provide insights, and help people make trades.

    “We started Roi 3 years ago to make investing accessible to everyone by building the most personalized financial experience,” Vishwajith wrote in a post on X. “Along the way we realized personalization isn’t just the future of finance. It’s the future of software.” 

    Beyond tracking trades, Roi gave users access to a financially savvy AI companion that responded in ways that made sense for them. When signing up, users could personalize Roi by providing information like what they do for a living and how they wanted Roi to respond to them. 


    In one telling example that Roi posted on X, the sample user wrote: “Talk to me like I’m a Gen-Z kid with brain rot. Use as little words as possible and roast me as much as you want I don’t mind.” In response to a query about the status of the user’s portfolio, Roi replied: “Suje, you got cooked lil bro. Cause of the tariff announcements, you took an L today of $32,459.12…Based on your risk preference this might be an opportunity to buy the dip.” 

    The exchange highlights the philosophy behind Roi and its co-founder — that software shouldn’t just provide generic answers but should adapt, learn, and communicate in ways that feel personal, human, and most importantly, keep you engaged.  

    As the Roi team wrote in a blog post: “The products we use every day won’t remain static, predetermined experiences. They’ll become adaptive, deeply personal companions that understand us, learn from us, and evolve with us.” 

    That vision dovetails with OpenAI’s existing consumer efforts, including Pulse, which generates personalized news and content reports for users as they sleep; the Sora app, a TikTok competitor filled with AI-generated content, including personal cameos from users; and Instant Checkout, a feature that lets users shop and make purchases directly in ChatGPT.  

    The deal also comes as OpenAI beefs up its consumer applications team,  led by former Instacart CEO Fidji Simo. It’s a further signal that OpenAI isn’t just trying to be an API provider, but wants to build its own end-user apps. Roi’s talent and tech could slot right into these apps and help make them more adaptive.  

    Vishwajith, alongside his co-founder Chip Davis, used to work at Airbnb, where he developed a knack for optimizing user behavior to drive revenue. By his account, a simple change of 25 lines of code led to $10+ million in additional cash.  

    Being able to bring in meaningful revenue via consumer apps is more important than ever to OpenAI as it continues to burn through billions on data centers and infrastructure to power its models.  


    Rebecca Bellan


  • What to expect at OpenAI’s DevDay 2025, and how to watch it | TechCrunch


    OpenAI is gearing up to host its third annual developer conference, DevDay 2025, on Monday. The company says more than 1,500 people are scheduled to convene at Fort Mason in San Francisco for OpenAI’s “biggest event yet,” which features announcements, keynotes from OpenAI executives, and a fireside chat between CEO Sam Altman and longtime Apple designer Jony Ive.

    From the sound of it, DevDay 2025 is shaping up to be a grand display of OpenAI’s rising dominance in Silicon Valley against giants like Apple, Google, and Meta. OpenAI is currently building an AI device, a social media app, and an AI-powered browser to take on Chrome. In other words, OpenAI has a lot more going on than it did during its first DevDay in 2023, when it mostly just had ChatGPT and an API business for developers to access its models.

    At the same time, OpenAI faces more competition than ever in the bid to win over developers.

    In the last year, Anthropic’s and Google’s AI models have become increasingly capable for coding tasks and web design. OpenAI has been forced to release better AI models at lower prices to remain in the race. In the background, Meta has built up an impressive roster of AI talent in its new group, Meta Superintelligence Labs, which could become another threat to OpenAI in the near future.

At its first DevDay in 2023, OpenAI unveiled a new AI model, GPT-4 Turbo, and Altman shared his vision for a marketplace of AI agents called the GPT Store. Days later, Altman was ousted as CEO, only to return after a dramatic weekend of negotiations. In 2024, OpenAI held a more subdued conference, announcing some meaningful developer upgrades, such as an API for AI voice applications, but not much else.

    Nothing is confirmed to launch at DevDay 2025, stoking plenty of rumors. Perhaps OpenAI will finally unveil the AI-powered browser it’s been working on, or maybe give an update on the AI device it’s building with Ive and former Apple executives. It’s also possible there could be some updates related to the GPT Store, which OpenAI has barely discussed since it launched last year.

    TechCrunch will be on the ground in San Francisco covering the event live, so you can check back here for all the news. Here’s what you need to know about OpenAI’s DevDay, and how to watch it.


    DevDay 2025 kicks off at 10 a.m. PT October 6 with an opening keynote from Altman, in which he’s scheduled to unveil “announcements, live demos, and a vision of how developers are reshaping the future with AI.” The keynote will last roughly one hour and will be livestreamed on OpenAI’s YouTube page.

    That’s the only event that will be livestreamed for remote attendees.

    For in-person attendees, there will be onstage presentations and talks from Cursor co-founder Aman Sanger, San Francisco mayor Daniel Lurie, and Andreessen Horowitz investing partner Kimberly Tan, among others. Several OpenAI employees will also give speeches about their work, including model behavior researcher Laurentia Romaniuk and Codex lead Alexander Embiricos.

    There’s also supposed to be a series of AI-powered sideshows at DevDay 2025. One of them is “Sora Cinema,” which is described as a “cozy mini-theater with popcorn” featuring short films generated by OpenAI’s video model, Sora. There’s also supposed to be a phone booth with a “living portrait” of the famed computer scientist Alan Turing “that speaks back.”

    Later in the afternoon, there will be two big events to cap off DevDay. These last two events won’t be livestreamed, but they will be posted on YouTube later that day.

    At 3:15 p.m. PT, there will be a “Developer State of the Union” with OpenAI president Greg Brockman and Olivier Godement, who heads up product for the OpenAI Platform. The two OpenAI executives are slated to “demo new capabilities” and share what’s ahead for developers.

    Finally, at 4:15 p.m. PT, Altman and Ive will give a “Closing Fireside Chat” to discuss the “craft of building in the age of AI.” That conversation will last about 45 minutes.


    Maxwell Zeff


  • Sam Altman Says the GPT-5 Haters Got It All Wrong


OpenAI’s August launch of its GPT-5 large language model was something of a disaster. There were glitches during the livestream, with the model generating charts with obviously inaccurate numbers. In a Reddit AMA with OpenAI employees, users complained that the new model wasn’t friendly and called for the company to restore the previous version. Most of all, critics griped that GPT-5 fell short of the stratospheric expectations that OpenAI has been juicing for years. Promised as a game changer, GPT-5 might have indeed played the game better. But it was still the same game.

    Skeptics seized on the moment to proclaim the end of the AI boom. Some even predicted the beginning of another AI Winter. “GPT-5 was the most hyped AI system of all time,” full-time bubble-popper Gary Marcus told me during his packed schedule of victory laps. “It was supposed to deliver two things, AGI and PhD-level cognition, and it didn’t deliver either of those.” What’s more, he says, the seemingly lackluster new model is proof that OpenAI’s ticket to AGI—massively scaling up data and chip sets to make its systems exponentially smarter—can no longer be punched. For once, Marcus’ views were echoed by a sizable portion of the AI community. In the days following launch, GPT-5 was looking like AI’s version of New Coke.

    Sam Altman isn’t having it. A month after the launch he strolls into a conference room at the company’s newish headquarters in San Francisco’s Mission Bay neighborhood, eager to explain to me and my colleague Kylie Robison that GPT-5 is everything that he’d been touting, and that all is well in his epic quest for AGI. “The vibes were kind of bad at launch,” he admits. “But now they’re great.” Yes, great. It’s true the criticism has died down. Indeed, the company’s recent release of a mind-bending tool to generate impressive AI video slop has diverted the narrative from the disappointing GPT-5 debut. The message from Altman, though, is that naysayers are on the wrong side of history. The journey to AGI, he insists, is still on track.

    Numbers Game

    Critics might see GPT-5 as the waning end of an AI summer, but Altman and team argue that it cements AI technology as an indispensable tutor, a search-engine-killing information source, and, especially, a sophisticated collaborator for scientists and coders. Altman claims that users are beginning to see it his way. “GPT-5 is the first time where people are, ‘Holy fuck. It’s doing this important piece of physics.’ Or a biologist is saying, ‘Wow, it just really helped me figure this thing out,’” he says. “There’s something important happening that did not happen with any pre-GPT-5 model, which is the beginning of AI helping accelerate the rate of discovering new science.” (OpenAI hasn’t cited who those physicists or biologists are.)

    So why the tepid initial reception? Altman and his team have sussed out several reasons. One, they say, is that since GPT-4 hit the streets, the company delivered versions that were themselves transformational, particularly the sophisticated reasoning modes they added. “The jump from 4 to 5 was bigger than the jump from 3 to 4,” Altman says. “We just had a lot of stuff along the way.” OpenAI president Greg Brockman agrees: “I’m not shocked that many people had that [underwhelmed] reaction, because we’ve been showing our hand.”

    OpenAI also says that since GPT-5 is optimized for specialized uses like doing science or coding, everyday users are taking a while to appreciate its virtues. “Most people are not physics researchers,” Altman observes. As Mark Chen, OpenAI’s head of research, explains it, unless you’re a math whiz yourself, you won’t care much that GPT-5 ranks in the top five of Math Olympians, whereas last year the system ranked in the top 200.

    As for the charge about how GPT-5 shows that scaling doesn’t work, OpenAI says that comes from a misunderstanding. Unlike previous models, GPT-5 didn’t get its major advances from a massively bigger dataset and tons more computation. The new model got its gains from reinforcement learning, a technique that relies on expert humans giving it feedback. Brockman says that OpenAI had developed its models to the point where they could produce their own data to power the reinforcement learning cycle. “When the model is dumb, all you want to do is train a bigger version of it,” he says. “When the model is smart, you want to sample from it. You want to train on its own data.”
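Brockman’s “sample from it… train on its own data” remark describes, in spirit, a sample-filter-retrain loop. Below is a minimal toy sketch of that idea in Python; the `generate` and `reward` stubs are invented stand-ins for a real language model and reward signal, not anything from OpenAI’s actual pipeline.

```python
import random

def generate(model, prompt, n=8):
    # Stand-in for sampling n completions from a capable language model.
    return [f"{prompt}::draft{random.randint(0, 100)}" for _ in range(n)]

def reward(completion):
    # Stand-in for a reward model or human feedback score (toy: length).
    return len(completion)

def self_training_round(model, prompts, keep_top=2):
    """One round of 'training on the model's own data':
    sample candidates, score them, keep the best as new training examples."""
    new_data = []
    for p in prompts:
        drafts = generate(model, p)
        drafts.sort(key=reward, reverse=True)
        new_data.extend(drafts[:keep_top])  # winners feed the next training cycle
    return new_data

data = self_training_round(model=None, prompts=["q1", "q2"])
print(len(data))  # 4: two kept completions per prompt
```

The point of the sketch is only the shape of the loop: once the model’s own outputs are good enough to score and filter, they can replace ever-larger external datasets as the fuel for further training.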


    Steven Levy


  • Sam Altman Just Pulled Off a $500 Billion Win in His Feud With Elon Musk


    When Elon Musk and Sam Altman co-founded OpenAI a decade ago, the idea was to build artificial intelligence that would benefit humanity. At the time, a small group of technologists framed the project as a nonprofit effort to make sure AI was safe.

    Eventually, Musk left and started his own competing AI company. Since then, he’s taken every chance to criticize OpenAI and even sued to prevent it from becoming a for-profit company. Now Altman has something Musk doesn’t: the most valuable startup in the world.

    On Thursday, Bloomberg reported that OpenAI closed a secondary share sale valuing the company at $500 billion. That makes OpenAI the world’s most valuable private company, overtaking—ironically—Musk’s SpaceX, which was last valued at around $400 billion. The OpenAI deal involved about $6.6 billion in employee shares sold to investors, including Thrive Capital, SoftBank, and T. Rowe Price.

    That number—half a trillion dollars—doesn’t change Musk’s balance sheet. In fact, in the ultimate irony, on Wednesday, Musk became the first human with a net worth of $500 billion, according to Forbes. None of that, however, is from OpenAI stock.

Musk walked away from OpenAI in 2018 and gave up his stake. But when it comes to bragging rights, the symbolism is hard to miss: Sam Altman just pulled off a $500 billion win in his feud with Elon Musk. Sure, Musk is personally worth far more than Altman, but the fact that OpenAI just passed SpaceX is a big deal.

    OpenAI’s founding

    The story of the rivalry between the two men is complicated. Back in 2015, they believed AI would be one of the most powerful technologies ever invented, with the potential to help—or to harm—humanity. OpenAI was set up with a nonprofit structure in order to guard against the temptation to simply chase profit.

    Musk left OpenAI’s board in 2018, citing conflicts with Tesla’s own AI efforts. Not long after, OpenAI began a shift from nonprofit into the unusual “capped profit” structure that would allow it to raise billions from Microsoft and others while still keeping the nonprofit in control.

    That’s where the feud gets interesting. Musk accused OpenAI of abandoning its founding mission and becoming just another Silicon Valley startup chasing money. He’s repeatedly blasted Altman and the company on X (formerly Twitter), calling it reckless and dishonest.

    Altman, for his part, rarely mentions Musk directly. He doesn’t have to. His work building OpenAI into the leading AI company is the louder statement.

    Musk has a lot at stake

    At the same time, Musk has spent the past two years trying to build a rival. His startup, xAI, is working on its own large language models to power what he calls “truthful AI.” He’s tied xAI closely to X, integrating its chatbot, Grok, into the platform.

    Musk also filed a lawsuit against OpenAI and Altman, accusing them of turning the nonprofit into a for-profit venture in violation of its founding charter. He even sued Apple, alleging that it gave ChatGPT unfair preference in its App Store recommendations. There are a lot of feelings between these two men.

    OpenAI may be the world’s most important company

    It’s worth mentioning that OpenAI still hasn’t turned a profit. Running massive AI models costs staggering amounts of money, and the company is dependent on investors and customers like Microsoft to keep funding its growth. But the $500 billion valuation says something powerful about how investors view Altman’s company.

    It also says something about Musk. For years, SpaceX held the title of the most valuable private company in the world. It’s also easily one of the most important in terms of the tech it is building, as well as the implications it has for national security.

    SpaceX is an incredible success story—building reusable rockets, dominating satellite launches, and creating Starlink, a global communications network. That valuation is built on real revenue and real products. Now, however, SpaceX has been dethroned. And it has to hurt a little that the company that passed it is the one Musk helped start and then abandoned.

    The broader story here is that AI has become the center of gravity in the tech world. Investors believe it will reshape entire industries, and they’re willing to bet half a trillion dollars on the company leading that charge.

    It’s a battle for the future of computing

    Still, the personal rivalry matters. Musk isn’t just another competitor—he’s a co-founder turned critic, suing to stop Altman’s plans while racing to build his own alternative. Altman, meanwhile, has emerged as the face of generative AI, striking deals, launching products, and now surpassing Musk in the only metric the tech world really keeps score by: valuation.

    For Musk, the sting is sharper because the loss is symbolic. He doesn’t own OpenAI anymore. He can’t share in the financial upside. All he has is the lawsuit and the microphone of his social network.

    For Altman, the win is equally symbolic. $500 billion doesn’t just buy bragging rights—it cements OpenAI’s place as the most important startup of the AI era.

    And for everyone else, it’s a reminder that behind the world-changing technology are human egos, rivalries, and grudges. The future of AI isn’t just about chips, models, and data centers. It’s also about two men who once shared a mission, now locked in a feud, with the scoreboard tilting heavily in Altman’s favor.

    The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.

    [ad_2]

    Jason Aten

    Source link

  • Sam Altman’s OpenAI Is Officially the World’s Most Valuable Startup at $500B

    [ad_1]

    A secondary share sale propelled OpenAI’s valuation, setting a new record for private companies. The Washington Post via Getty Images

    OpenAI has reached a new milestone: a $500 billion valuation that makes it the world’s most valuable private company, surpassing Elon Musk’s SpaceX and widening the gap with other major private companies like its direct competitor, Anthropic, and TikTok parent ByteDance.

    The staggering valuation follows a secondary share sale, first reported by Bloomberg, that allowed current and former employees to sell stock to investors including Thrive Capital, SoftBank, Dragoneer Investment Group, MGX and T. Rowe Price. The sale didn’t bring new funding to the company, but it boosted its valuation from $300 billion in March, when it raised $40 billion in a round led by SoftBank.

    OpenAI was founded in 2015 as a nonprofit dedicated to advancing A.I. for humanity’s benefit, but later adopted a capped-profit structure. The company currently has about 700 million weekly users and $12 billion in annualized revenue. It has signed some of the largest cloud deals, including a $300 billion partnership with Oracle for computing power over the next five years.

     

    The company is also in the midst of a long-anticipated transition to a for-profit structure. Last month, it signed a non-binding deal with Microsoft, its largest shareholder, to convert its for-profit arm into a public benefit corporation controlled by the remaining nonprofit.

    Elon Musk, who left OpenAI in 2018 and went on to launch his own startup, xAI, has since become one of the company’s fiercest critics. He has filed multiple lawsuits aimed at halting its restructuring and accused the company of straying from its founding mission in favor of profits. Most recently, he sued the company for allegedly hiring former xAI employees who he claims stole trade secrets.

    Secondary share sales gain steam

    Secondary share sales, an increasingly popular method among startups to retain and reward staff, have boosted the valuation of several already highly valued companies. SpaceX reached a $400 billion valuation in July after a round of secondary share sales; Stripe’s February tender offer valued it at $91.5 billion; and Databricks’ December secondary sale gave the company a $62 billion valuation.

    As OpenAI’s tools continue weaving into daily life, the company has had to reckon with the social consequences of its rapid ascent. Earlier this month, it rolled out parental controls for ChatGPT, giving parents options such as limiting their children’s exposure to sensitive content or disabling certain voice and image modes. The feature came after OpenAI was sued in August by the parents of a teenager who committed suicide after ChatGPT allegedly gave him self-harm advice.

    More recently, OpenAI sparked backlash with the launch of Sora, a short-form A.I. video app, drawing criticism that consumer-facing products conflict with its loftier goals of scientific advances and artificial general intelligence (AGI). Altman addressed the criticism on X yesterday (Oct. 1), writing: “It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need.”

    He added that most of OpenAI’s resources remain focused on science and AGI research. “When we launched ChatGPT, there was a lot of ‘who needs this and where is AGI?’ Reality is nuanced when it comes to optimal trajectories for a company,” he wrote.


    [ad_2]

    Alexandra Tremayne-Pengelly

    Source link

  • OpenAI’s Sora soars to No. 3 on the U.S. App Store | TechCrunch

    [ad_1]

    OpenAI’s Sora app for AI videos is a viral hit, despite being invite-only for now and limited to users in the U.S. and Canada at launch. On its first day, Sora saw 56,000 downloads, and is now ranked as the No. 3 Top Overall app on the U.S. App Store, according to new data from app intelligence provider Appfigures.

    The firm estimates Sora’s iOS app pulled in a total of 164,000 installs during its first two days, September 30th and October 1st.

    The day-one figure puts Sora’s debut ahead of the performance of other major AI app launches, including Anthropic’s Claude and Microsoft’s Copilot, and puts it on par with xAI’s Grok launch. Meanwhile, OpenAI’s ChatGPT and Google’s Gemini iOS apps had somewhat stronger launches, with each reaching at least 80,000 downloads on day one.

    Since Sora is still invite-only, this may not be the fairest comparison, we’ll admit. It’s possible the new video app could have attracted even more installs if it were open to all users.

    Despite this restriction, it’s a fairly strong showing for the new release, indicating demand for AI video tools in consumers’ hands in more of a social networking-like experience. (This is much to the chagrin of some at OpenAI, who want the company to focus on solving harder problems that benefit humanity. But who’s to say that humanity isn’t benefiting from deepfakes of OpenAI CEO Sam Altman asking, “Are my piggies enjoying their slop?”)

    To compare Sora’s early success to other AI apps, Appfigures had to run an analysis that only looked at the other AI apps’ U.S. and Canadian downloads. That’s because the different AI apps on the market have pursued different launch strategies. For instance, ChatGPT initially launched on iOS and limited itself to U.S. users at the time, while Grok limited its iOS-only release to the U.S., Australia, and India. Anthropic, meanwhile, didn’t indicate there were geographic restrictions when it first brought its Claude app to iOS last year.

    For a more apples-to-apples comparison, Appfigures crunched the numbers to focus only on each app’s U.S. downloads, plus those in Canada if the app had been available there at launch.


    It found that ChatGPT and Gemini had larger launches than Sora, with 81,000 and 80,000 day-one iOS downloads, respectively. Sora tied with Grok for day-one installs, at 56,000. And it easily beat out the launches from AI apps Claude and Copilot. The former pulled in 21,000 day-one downloads, while the latter only saw 7,000.

    Sora also hit the U.S. App Store’s top charts, becoming the No. 3 overall top app by day two. For comparison, ChatGPT reached No. 1 on its second day, while Grok was No. 4, Gemini was No. 6, Copilot was No. 19, and Claude was No. 78.

    [ad_2]

    Sarah Perez

    Source link

  • OpenAI is now the world’s most valuable private company at $500 billion

    [ad_1]

    OpenAI has overtaken SpaceX as the largest startup and most valuable private company in the world. Bloomberg has reported that the company authorized a secondary share sale, which allowed its former and current employees to sell their stock. OpenAI had authorized the sale of up to $10.3 billion in shares, but employees ultimately sold $6.6 billion to investors including SoftBank, the Abu Dhabi government’s MGX fund, American investment firm Thrive Capital and global investment management firm T. Rowe Price. As Bloomberg explains, the sale boosted the company’s valuation from $300 billion to $500 billion, overtaking SpaceX, valued at $400 billion, and TikTok developer ByteDance, at $220 billion.

    In early September, OpenAI said it was getting closer to transitioning to a new structure that will turn it into a Public Benefit Corporation (PBC) controlled by its nonprofit arm. The company’s nonprofit division received an equity stake of more than $100 billion, making it a major shareholder of the PBC. SpaceX’s CEO Elon Musk is one of the biggest critics of OpenAI’s decision and has been trying to block the company’s for-profit transition in court. Musk was one of OpenAI’s founders and funded its initial operations. He claimed in court that OpenAI and Altman are breaking their contract with him and violating the company’s founding mission of building AI “for the benefit of humanity” by changing its structure.

    OpenAI is hoping that being a PBC would make it more appealing to investors, as it would remove the cap on the financial returns they can get. It needs a lot more money than what it has raised so far, after all: OpenAI chief Sam Altman previously said he intends to spend trillions of dollars on building out data centers to run artificial intelligence services.

    [ad_2]

    Mariella Moon

    Source link

  • The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More

    [ad_1]

    On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.

    In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only, even if you just want to see the content, plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.

    There does appear to be at least some content that is off-limits in OpenAI’s video generator. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyrighted material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.

    That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a model that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.

    The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)

    Other videos at least attempt to be a little creative about how they use copyrighted characters. Users have, for instance, thrown Ronald McDonald into an episode of “Love Island” and created a fake video game that teams up Tony Soprano from The Sopranos and Kirby from, well, Kirby.

    Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.

    “People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”

    There is one other genre of popular and potentially legally dubious content that has taken off among Sora 2 users, too: the Sam Altman cinematic universe. OpenAI claims that users cannot generate videos using the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his OK (which makes sense: he’s the CEO, and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.

    One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.

    There are some questions about the likenesses of brands in these videos, too. In the video of Altman in Target, for instance, how does Target feel about its logo and store likeness being used? Another user inserted their own likeness into an NFL game that pretty clearly uses the logos of the New York Giants, the Dallas Cowboys, and the NFL itself. Is that considered kosher?

    OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.

    A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also gives users the ability to see any video in which their likeness has been used, including those that are not published, and can revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata that show they are AI-generated and watermarked with an indicator they were created with Sora.
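    The permission model the spokesperson describes can be sketched as a small access check. Everything below, including the class and level names, is purely illustrative and not OpenAI’s actual implementation; it only mirrors the four sharing levels reported above.

    ```python
    from dataclasses import dataclass, field
    from enum import Enum

    class CameoAudience(Enum):
        # The four sharing levels described for a verified likeness.
        ONLY_ME = "only_me"
        SELECT_FRIENDS = "select_friends"
        MUTUALS = "mutuals"
        EVERYONE = "everyone"

    @dataclass
    class Likeness:
        owner: str
        audience: CameoAudience = CameoAudience.ONLY_ME
        approved: set = field(default_factory=set)   # used for SELECT_FRIENDS
        mutuals: set = field(default_factory=set)    # used for MUTUALS

        def may_use(self, user: str) -> bool:
            """Can `user` put this likeness in a generated video?"""
            if user == self.owner or self.audience is CameoAudience.EVERYONE:
                return True
            if self.audience is CameoAudience.SELECT_FRIENDS:
                return user in self.approved
            if self.audience is CameoAudience.MUTUALS:
                return user in self.mutuals
            return False  # ONLY_ME: nobody else may use it

    # Altman has reportedly opened his likeness to everyone;
    # a typical user would default to the most restrictive level.
    altman = Likeness(owner="sama", audience=CameoAudience.EVERYONE)
    private = Likeness(owner="alice")
    ```

    The revoke-at-any-time behavior the spokesperson describes would amount to flipping `audience` back to the restrictive default.
    
    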

    There are, of course, ways around those safeguards. The fact that a video can be deleted from Sora doesn’t mean an exported copy can be. Likewise, the watermark can be cropped out. And most people aren’t checking the metadata of videos to ensure authenticity. What the fallout of all this looks like remains to be seen, but there will be fallout.

    [ad_2]

    AJ Dellinger

    Source link

  • OpenAI’s New Sora App Lets You Deepfake Yourself for Entertainment

    [ad_1]

    On Tuesday, OpenAI released an AI video app called Sora. The platform is powered by OpenAI’s latest video generation model, Sora 2, and revolves around a TikTok-like For You page of user-generated clips. This is the first product release from OpenAI that adds AI-generated sounds to videos. For now, it’s available only on iOS and requires an invite code to join.

    “You are about to enter a creative world of AI-generated content,” reads an advisory page displayed during the app sign-up process. “Some videos may depict people you recognize, but the actions and events shown are not real.”

    OpenAI is betting that creating and sharing AI deepfakes will become a popular form of entertainment. Whether it’s your friends, influencers, or random strangers online, Sora frames generating deepfake videos as a form of scrollable fun. The app’s main feed is an endless serving of bite-sized AI slop featuring human faces.

    During the setup process, users are given the option to create a digital likeness of themselves by saying a few numbers aloud and turning their head as the app records. “The team worked very hard on character consistency,” wrote OpenAI CEO Sam Altman in a blog post about Sora’s release.

    People have the ability to choose who can use their digital likeness in Sora videos. It can be set to everyone, or limited to just yourself, those you approve, or mutual connections on the app. Whenever someone generates a video using your likeness, even if it’s just sitting in their drafts, you can see the full clip from your account’s page.

    First Impressions

    Many of the most-liked videos on my “For You” feed on Tuesday afternoon featured Altman’s likeness. One AI-generated clip depicted the OpenAI CEO stealing a graphics processing unit from Target. When the character gets caught, a voice that sounds like Altman pleads with a security guard to let him keep the GPU so that he can build AI tools.

    Many of the videos generated during WIRED’s testing included rough edges and other errors. But Sora makes it incredibly seamless to create personalized deepfakes that often look and sound convincingly real.

    To incorporate the likenesses of different people in your videos, just tap on their faces on Sora’s generation page and add them as “cameos.” Then enter a simple prompt, like “fight in the office over a WIRED story.”

    [ad_2]

    Reece Rogers

    Source link

  • Exclusive: Mira Murati’s Stealth AI Lab Launches Its First Product

    [ad_1]

    Thinking Machines Lab, a heavily funded startup cofounded by prominent researchers from OpenAI, has revealed its first product—a tool called Tinker that automates the creation of custom frontier AI models.

    “We believe [Tinker] will help empower researchers and developers to experiment with models and will make frontier capabilities much more accessible to all people,” said Mira Murati, cofounder and CEO of Thinking Machines, in an interview with WIRED ahead of the announcement.

    Big companies and academic labs already fine-tune open source AI models to create new variants that are optimized for specific tasks, like solving math problems, drafting legal agreements, or answering medical questions.

    Typically, this work involves acquiring and managing clusters of GPUs and using various software tools to ensure that large-scale training runs are stable and efficient. Tinker promises to allow more businesses, researchers, and even hobbyists to fine-tune their own AI models by automating much of this work.

    Essentially, the team is betting that helping people fine-tune frontier models will be the next big thing in AI. And there’s reason to believe they might be right. Thinking Machines Lab is helmed by researchers who played a core role in the creation of ChatGPT. And, compared to similar tools on the market, Tinker is more powerful and user friendly, according to beta testers I spoke with.

    Murati says that Thinking Machines Lab hopes to demystify the work involved in tuning the world’s most powerful AI models and make it possible for more people to explore the outer limits of AI. “We’re making what is otherwise a frontier capability accessible to all, and that is completely game-changing,” she says. “There are a ton of smart people out there, and we need as many smart people as possible to do frontier AI research.”

    Tinker currently allows users to fine-tune two open source models: Meta’s Llama and Alibaba’s Qwen. Users can write a few lines of code to tap into the Tinker API and start fine-tuning through supervised learning, which means adjusting the model with labeled data, or through reinforcement learning, an increasingly popular method for tuning models by giving them positive or negative feedback based on their outputs. Users can then download their fine-tuned model and run it wherever they want.
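    As a rough illustration of that workflow, the mock below walks through the three steps described: pick a base model, adjust it with either labeled data or reward feedback, then download the result. Every name here is invented for the sketch; it is not Tinker’s published API.

    ```python
    # Illustrative mock of the fine-tuning workflow described above.
    # All class and method names are hypothetical, not Tinker's real API.
    class MockTuningClient:
        SUPPORTED = {"llama", "qwen"}  # the two open source bases mentioned

        def __init__(self, base_model: str):
            if base_model not in self.SUPPORTED:
                raise ValueError(f"unsupported base model: {base_model}")
            self.base_model = base_model
            self.updates = 0

        def supervised_step(self, prompt: str, label: str) -> None:
            # Supervised learning: adjust on a labeled (prompt, answer) pair.
            self.updates += 1

        def reinforcement_step(self, prompt: str, reward: float) -> None:
            # Reinforcement learning: nudge the model with positive
            # or negative feedback on its output.
            self.updates += 1

        def download(self) -> str:
            # Hand back an identifier for the tuned weights.
            return f"{self.base_model}-tuned-{self.updates}"

    client = MockTuningClient("qwen")
    client.supervised_step("2+2=?", "4")          # labeled-data step
    client.reinforcement_step("explain why", 1.0) # reward-feedback step
    artifact = client.download()
    ```

    The point of the abstraction, as Murati describes it, is that GPU cluster management and training-run stability sit behind those calls rather than in the user’s code.
    
    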

    The AI industry is watching the launch closely—in part due to the caliber of the team behind it.

    [ad_2]

    Will Knight

    Source link

  • AI godfather warns humanity risks extinction by hyperintelligent machines with their own ‘preservation goals’ within 10 years | Fortune

    [ad_1]

    The so-called “godfather of AI”, Yoshua Bengio, claims tech companies racing for AI dominance could be bringing us closer to our own extinction through the creation of machines with ‘preservation goals’ of their own. 

    Bengio, a professor at the Université de Montréal known for his foundational work on deep learning, has for years warned about the threats posed by hyperintelligent AI, but the rapid pace of development has continued despite his warnings. In the past six months, OpenAI, Anthropic, Elon Musk’s xAI and Google’s Gemini have all released new models or upgrades as they try to win the AI race. OpenAI CEO Sam Altman even predicted AI will surpass human intelligence by the end of the decade, while other tech leaders have said that day could come even sooner.

    Yet, Bengio claims this rapid development is a potential threat. 

    “If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us,” Bengio told the Wall Street Journal.

    Because they are trained on human language and behavior, these advanced models could potentially persuade and even manipulate humans to achieve their goals. Yet, AI models’ goals may not always align with human goals, said Bengio. 

    “Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” he claimed. 

    Call for AI safety

    Several examples over the past few years show AI can convince humans to believe things that aren’t real, even people with no history of mental illness. On the flip side, some evidence suggests AI can itself be convinced, using persuasion techniques designed for humans, to give responses it would usually be prohibited from giving.

    For Bengio, all this adds up to more proof that independent third parties need to take a closer look at AI companies’ safety methodologies. In June, Bengio also launched the nonprofit LawZero with $30 million in funding to create a safe “non-agentic” AI that can help ensure the safety of other systems created by big tech companies.

    Bengio predicts we could start seeing major risks from AI models in five to ten years, but he cautioned that humans should prepare in case those risks crop up earlier than expected.

    “The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable,” he said.


    [ad_2]

    Marco Quiroz-Gutierrez

    Source link

  • AI’s Go-for-Broke Regulation Strategy

    [ad_1]

    Photo-Illustration: Intelligencer; Photo: Getty Images

    In the AI world, everyone always seems to be going for broke. It’s AGI or bust — or as the gloomier title of a recent book has it, If Anyone Builds It, Everyone Dies. This rhetorical severity is backed up by big bets and bigger asks, hundreds of billions of dollars invested by companies that now say they’ll need trillions to build, essentially, the only companies that matter. To put it another way: They’re really going for it.

    This is as clear in the scope of the infrastructure as it is in stories about the post-human singularity, but it’s happening somewhere else, too: in the quite human realm of law and regulation, where AI firms are making bids and demands that are, in their way, no less extreme. From The Wall Street Journal:

    OpenAI is planning to release a new version of its Sora video generator that creates videos featuring copyright material unless copyright holders opt out of having their work appear, according to people familiar with the matter …

    The opt-out process for the new version of Sora means that movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyright material in videos the tool creates.

    This is pretty close to the maximum possible bid OpenAI can make here, in terms of its relationship to copyright — a world in which rights holders must opt out of inclusion in OpenAI’s model is one in which OpenAI is all but asking to opt out of copyright as a concept. To arrive at such a proposal also seems to take for granted that a slew of extremely contentious legal and regulatory questions will be settled in OpenAI’s favor, particularly around the concept of “fair use.” AI firms are arguing in court — and via lobbyists, who are pointing to national-security concerns and the AI race with China — that they should be permitted not just to train on copyrighted data but to reproduce similar and competitive outputs. By default, according to this report, OpenAI’s future models will be able to produce images of a character like Nintendo’s Mario unless Nintendo takes action to opt out. Questions one might think would precede such a conversation — how did OpenAI’s model know about Mario in the first place? What sorts of media did it scrape and train on? — are here considered resolved or irrelevant.

    As many experts have already noted, various rights holders and their lawyers might not agree, and there are plenty of legal battles ahead (hence the simultaneous lobbying effort, to which the Trump administration seems at least somewhat sympathetic). But copyright isn’t the only area where OpenAI is making startlingly ambitious bids to alter the legal and regulatory landscape. In a deeply strange recent interview with Tucker Carlson, Sam Altman forced the conversation back around to an idea he and his company have been floating for a while now: AI “privilege.”

    If I could get one piece of policy passed right now relative to AI the thing I would most like, and this is intentional with some of the other things that we’ve talked about, is I’d like there to be a concept of AI privilege.

    When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information …

    We have decided that society has an interest in that being privileged and that we don’t, and that a subpoena can’t get, that the government can’t come asking your doctor for it or whatever. I think we should have the same concept for AI. I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you’d get if you’re talking to the human version of this.

    Coming from anyone else, this could be construed as an interesting philosophical detour through questions of theoretical machine personhood, the effect of AI anthropomorphism on users’ expectations of privacy, and how to manage incriminating or embarrassing information revealed in the course of intimate interactions with a novel sort of software. People already use chatbots for medical advice and legal consultation, and it’s interesting to think about how a company might offer or limit such services responsibly and without creating existential legal peril.

    Coming from Altman, though, it assumes an additional meaning: He would very much prefer that his company not be liable for potentially risky or damaging conversations that its software has with users. In other words, he’d like to operate a product that dispenses medical and legal advice while assuming as little liability for its outputs, or its users’ inputs, as possible — a mass-market product with the legal protections of a doctor, therapist, or lawyer but with as little responsibility as possible. There are genuinely interesting issues to work out here. But against the backdrop of numerous reports and lawsuits accusing chatbot makers of goading users into self-harm or triggering psychosis, it’s not hard to imagine why getting blanket protections might feel rather urgent right now.

    On both copyright and privacy, his vision is maximalist: not just total freedom for his company to operate as it pleases, but additional regulatory protections for it as well. It’s also probably aspirational — we don’t get to a copyright free-for-all without a lot of big fights, and a chatbot version of attorney-client privilege is the sort of thing that will likely arrive with a lot of qualifications and caveats. Still, each bid is characteristic of the industry and the moment it’s in. So long as they’re building something, they believe they might as well ask for everything.


    John Herrman

    Source link

  • A.I. Is Changing What Venture Capitalists Invest In and How They Invest


    Despite cooling markets, A.I. remains the engine powering VC’s biggest risks. Unsplash

    With A.I. leaders like Sam Altman warning of a potential bubble, it might seem logical for investors to pull back. Instead, venture capitalists say they’re doubling down, though in a more deliberate and strategic way.

    “Every investor I speak to says 90 percent of new investments are in an A.I.-related field,” Gené Teare, senior data editor at Crunchbase, told Observer. “A.I. is the center. Every one of these investors, they’re looking to invest in companies who are going to be part of the next wave.”

    Teare sees current investor buzz centering on coding and customer service startups with A.I. foundations. She added that investors are “very focused on investing in companies at the seed or series A level, who are going to be the emerging or the largest companies 5 to 10 years out.” According to Crunchbase, tomorrow’s most promising companies will likely be in A.I. infrastructure and cybersecurity.

    Even with venture funding down from its 2021 peak of $702 billion to just over half that in 2024, investors remain active, albeit more selective. “For most of these investors, they’re not investing in a large set of companies. They’re making very targeted bets in companies that they think are going to become formative in the next period,” Teare said. That approach has already fueled record-breaking rounds, including this year’s $40 billion going to OpenAI.

    A.I. is changing how VCs invest

    A.I.’s rapid evolution isn’t just changing which companies VCs invest in; it’s changing how they invest.

    “We are experimenting with how A.I. can help analyze leads,” Michael Stewart, managing partner at M12, Microsoft’s venture capital fund, told Observer. M12’s portfolio includes companies like Livongo by Teladoc Health, HR software Beamery and retail advertising platform GroundTruth. While M12 still sources deals the traditional way, through meetings and networking, the team now uses A.I. to analyze those leads, looking at unit economics, pricing strategies and underlying technology.

    Stewart didn’t specify which tools they use, but said M12 has shifted from outside customer relationship management systems to Microsoft’s own technology. Dealmaking platforms like Affinity and Carta also integrate A.I. into their offerings. Last year, Anthropic partnered with Menlo Ventures to launch the Anthology Fund, which uses Claude to recommend startups for investment.

    Despite all the changes, some venture capital fundamentals remain. Customer acquisition cost and lifetime value are still pivotal metrics. And founder quality matters more than ever, Crunchbase’s Teare noted. “There are a lot of companies going after the same markets, so it’s the pedigree of the founder,” she said. “That might be a repeat founder who’s done it before, or new founders who have an angle on a market, or a certain energy and grit that they believe could carry it through.”

    While some startup founders are opting to bootstrap, Stewart noted that’s rarely an option in A.I. Given the steep costs of hiring top talent, securing GPUs and scaling infrastructure, most cutting-edge A.I. ventures require outside funding despite the technology’s potential to reduce operating expenses.

    That competitive environment pushes Stewart to ask founders tough questions: “How are you showing that you’re changing customers’ behaviors? How are you getting them to bring in A.I. at a deeper level of their own company strategy?” With so much A.I. use still experimental, he said, proving real recurring revenue beyond pilot projects is a key differentiator.

    Like many A.I. investors, M12 is also eyeing infrastructure. “We’re in this energy-constrained world where we want to scale solutions at a global level,” Stewart said. “If unaddressed, these things become destiny-limiting, so it’s chips, it’s networking, it’s memory, it’s the kinds of endpoints where you deliver A.I.”

    Still, challenges lie ahead. As Stewart noted, funding rounds keep getting bigger at earlier stages, creating pressure for those investments to mature. “Mathematically, it is possible to go even larger, but you’re going to need to let those bets we in the VC industry just made mature into those leaders,” he said.



    Rachel Curry

    Source link

  • OpenAI Officially Launches Video Generator Sora 2, Now With Social Feed


    Fake videos are about to look less fake, for better or worse. On Tuesday, OpenAI announced the release of Sora 2, the latest version of its flagship model for audio and video generation. And, as previously reported, the launch of the model is accompanied by a new social app designed to allow people to share their AI-generated videos, creating an endless scroll of uncanny content that will almost surely further fry people’s brains.

    In an announcement video that the company claims was completely generated by Sora 2, a fabricated version of OpenAI CEO Sam Altman called the model “the most powerful imagination engine ever built.” The focus of the update appears to be what OpenAI calls “world simulation,” attempting to accurately recreate the physics of the real world. The company put a strong emphasis on highlighting videos of people moving in realistic ways. The company admits it’s still imperfect, but claims Sora 2 is “better about obeying the laws of physics compared to prior systems.” OpenAI also claims the model is much better at following intricate instructions and can now produce multiple different shots based on a prompt.

    Then there’s the Sora app. As rumored and reported by Wired, the Sora app is a feed made up entirely of videos created via OpenAI’s video generation model. The app features a vertical scroll to move through videos, which are served up based on a recommendation algorithm. Users will be able to insert themselves into a video through a feature called “cameo,” which requires users to record a video of themselves to verify their identity and, in OpenAI’s words, “capture your likeness.” Insert ominous needle drop here. Other users will also be able to use your likeness in videos.

    While your likeness can be used by others, OpenAI insists that the user is in control. “Only you decide who can use your cameo, and you can revoke access or remove any video that includes it at any time. Videos containing cameos of you, including drafts created by other people, are viewable by you at any time,” it said.

    OpenAI did make a point to insist that it is rolling out the Sora social app “responsibly.” Users will be able to set their own feed by telling the model what they want to see, and the company claims it’s “not optimizing for time spent in feed, and we explicitly designed the app to maximize creation, not consumption.” Creation is also time spent in-app, of course, but whatever. The company also claims that teenage users will be subjected to strict limits, with a cap on how many videos they can view per day and restrictions on how their likeness can be used.

    The company also said it doesn’t currently have a plan to monetize the app… except for the plan it has to monetize the app. “Transparently, our only current plan is to eventually give users the option to pay some amount to generate an extra video if there’s too much demand relative to available compute. As the app evolves, we will openly communicate any changes in our approach here, while continuing to keep user wellbeing as our main goal.” Basically, there will eventually be a limit on how much you can generate unless you pay for more. And it’s not like it’s particularly difficult to see ads working their way into the feed, but the company isn’t specifically saying that will come (though it also doesn’t explicitly rule it out).

    The Sora app, which is invite-only for now, can be downloaded on iOS by users in the US and Canada, with plans to roll out beyond North America soon. It’ll be free to start. As for Sora 2 itself, it’s available for ChatGPT Pro users (that’s the $200 per month tier) for now.


    AJ Dellinger

    Source link

  • OpenAI’s Sora app is real but you’ll need an invite to try it


    Well, that was fast. One day after Wired reported that OpenAI was preparing to release a new AI social video app, the company has revealed it to the wider world. It’s called the Sora app, and it’s powered by OpenAI’s new Sora 2 video generation model. As expected, it’s possible to add your likeness to a video you generate using a feature OpenAI calls “Cameo.”

    Right now, Sora is only available on iOS — with no word yet on when it might arrive on Android — and you’ll need an invite from the company. However, once you receive access, you’ll be able to invite four friends to download the software.

    Developing…


    Igor Bonifacic

    Source link

  • You can now buy things through ChatGPT with new “Instant Checkout” feature


    ChatGPT wants to be your personal online shopper. 

    A new “Instant Checkout” feature lets users make purchases on a product the AI-powered bot brings up in a chat, without having to navigate outside of the app, ChatGPT creator OpenAI said in a statement Monday. 

    For example, if you query ChatGPT for the “best mattress under $1,000,” or “gift for an avid reader,” it will suggest what it believes to be the most relevant products from across the internet. If a consumer wants to purchase one of ChatGPT’s recommendations, they can now do so within the chat, so long as Instant Checkout supports the product. 

    Currently, ChatGPT users can buy directly from U.S. Etsy sellers from within a chat. Through a partnership with Shopify, ChatGPT will soon give consumers access to more than 1 million vendors, such as cosmetic company Glossier, shapewear company SKIMS, shoemaker Steve Madden and more. 

    The new tool marks ChatGPT’s foray into so-called agentic commerce, with the app acting as the shopper’s agent. In other words, ChatGPT interacts with both the buyer and the seller, while the merchant processes payment and fulfills the order. Merchants pay ChatGPT a small fee on completed transactions, OpenAI said. 

    “This marks the next step in agentic commerce, where ChatGPT doesn’t just help you find what to buy, it also helps you buy it. For shoppers, it’s seamless: go from chat to checkout in just a few taps. For sellers, it’s a new way to reach hundreds of millions of people while keeping full control of their payments, systems, and customer relationships,” OpenAI said in a statement Monday.

    For now, the technology, which the company codeveloped with payment processor Stripe, only supports single-item purchases, OpenAI said. 

    Discovering products in AI conversations

    Shopify on Monday said it has long aimed to allow merchants to sell to customers “anywhere AI conversations happen,” as more Americans rely on generative AI tools like ChatGPT to help them make decisions. 

    “Shopping is changing fast. People are discovering products in AI conversations, not just through search or ads,” Vanessa Lee, VP of product at Shopify, said in an article on the company’s website. “This will let our merchants show up naturally in those moments and give shoppers a way to buy without breaking their flow. It’s a really exciting shift for commerce.”

    Shopify said it wants to position its merchants at the forefront of a sea change in how online commerce is conducted. 

    “We’re making sure our merchants thrive in the era of agentic commerce,” said Lee. “We’re helping everyone from indie brands to household names reach shoppers in entirely new ways.”

    E-commerce giant Amazon is also wading into the world of agentic AI. Through its “Buy for Me” feature in the Amazon Shopping App, shoppers can purchase goods from vendors who don’t sell their products on Amazon.com without leaving the Amazon ecosystem. 

    “If a customer decides to proceed with a Buy for Me purchase, they tap on the Buy for Me button on the product detail page to request Amazon make the purchase from the brand retailer’s website on their behalf,” Amazon explains on its corporate website. “Customers are taken to an Amazon checkout page where they confirm order details, including preferred delivery address, applicable taxes and shipping fees, and payment method.”


    Source link

  • ChatGPT introduces new parental controls amid concerns over teen safety


    OpenAI, the company that developed ChatGPT, announced new parental controls on Monday aimed at helping protect young people who interact with its generative artificial intelligence program.  

    All ChatGPT users will have access to the control features from Monday onward, the company said.

    The announcement comes as OpenAI, which technically allows users as young as 13 to sign up, contends with mounting public pressure to prioritize the safety of ChatGPT for teenagers. (OpenAI says on its website that it requires users ages 13 to 18 to obtain parental consent before using ChatGPT.)

    In August, the California-based technology company pledged to implement changes to its flagship product after facing a wrongful death lawsuit by parents of a 16-year-old who alleged the chatbot led their son to take his own life. 

    OpenAI’s new controls will allow parents to link their own ChatGPT accounts to the accounts of their teenagers “and customize settings for a safe, age-appropriate experience,” OpenAI said in Monday’s announcement. Certain types of content are then automatically restricted on a teenager’s linked account, including graphic content, viral challenges, “sexual, romantic or violent” role-play, and “extreme beauty ideals,” according to the company.

    Along with content moderation, parents can opt to receive a notification from OpenAI should their child exhibit potential signs of harming themselves while interacting with ChatGPT.

    “If our systems detect potential harm, a small team of specially trained people reviews the situation,” the company said. “If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out.”

    The company also said it is “working on the right process and circumstances in which to reach law enforcement or other emergency services” in emergencies where a teen may be in imminent danger and a parent cannot be reached.

    “We know some teens turn to ChatGPT during hard moments, so we’ve built a new notification system to help parents know if something may be seriously wrong,” OpenAI said.

    OpenAI has introduced other measures recently aimed at helping safeguard younger ChatGPT users. The company said earlier this month that chatbot users identified as being under 18 will automatically be directed to a version that is governed by “age-appropriate” content rules. 

    “The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” the company said at the time. 

    It noted on Monday, however, that while guardrails help, “they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”

    People can use ChatGPT without creating an account, and parental controls and automatic content limits only work if users are signed in.

    “We will continue to thoughtfully iterate and improve over time,” the company said. “We recommend parents talk with their teens about healthy AI use and what that looks like for their family.”

    The Federal Trade Commission has started an inquiry into several social media and artificial intelligence companies, including OpenAI, about the potential harms to teens and children who use their chatbots as companions. 


    Source link