Former OpenAI cofounder Ilya Sutskever has no immediate plans for his AI startup Safe Superintelligence (SSI) to release a product, but he has plenty of capital: $3 billion, to be exact. During a rare appearance on podcaster Dwarkesh Patel’s show, Sutskever explained the thinking behind his research-heavy strategy, and why he wants to stay out of the “rat race” of the current AI market.
“It’s very nice to not be affected by the day-to-day market competition,” Sutskever said on the podcast.
Sutskever is widely regarded as one of the definitive voices in AI. He was one of the original founders of OpenAI, prior to which he helped create AlexNet, an image-recognition AI model that formed the basis for much of the deep learning work being done in the industry today.
In May 2024, Sutskever said that he would be leaving OpenAI. One month later, he announced his new company, Safe Superintelligence Inc. Instead of following the business model of other frontier AI labs like OpenAI and Anthropic, which release new products in order to fund their massively expensive research, SSI claims to be entirely focused on building a world-changingly powerful artificial intelligence, far more capable than today’s models. At the time, Sutskever said that his company would build super-intelligent AI “in a straight shot, with one focus, one goal, and one product.”
“Our singular focus means no distraction by management overhead or product cycles,” the company wrote on its website, “and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
On Patel’s podcast, Sutskever explained that competitors in the AI market have to participate in “the rat race,” which forces business leaders to make difficult trade-offs in order to balance commercial success with safety considerations.
By not joining this race and not needing to worry about releasing new products, Sutskever told Patel, his company will be able to make the $3 billion it has raised go much further than his commercially minded competitors can stretch their own funding. Those companies have to set aside much of their funds to constantly design, run, and maintain their AI models for customers, Sutskever said, while SSI can focus all of its resources on research.
But Sutskever isn’t entirely married to his straight-shot plan. He acknowledged that if the timeline to building a super-intelligent AI system is longer than anticipated, his company may be forced to release a product.
Sutskever also opined that if he felt it would be useful for the world to see powerful AI in action, he could release a product sooner than he anticipates. He shared a prediction: As AI becomes more powerful, people will change their behavior. If giving the world a glimpse of powerful (but not yet superintelligent) AI inspires the public to advocate for greater safety standards, something he claimed to be heavily in favor of, that could be a compelling reason to release a product.
Mira Murati, Ilya Sutskever, Greg Brockman and Andrej Karpathy (clockwise, starting at top left).
Since ChatGPT took the world by storm in late 2022, OpenAI’s revenue and market value have skyrocketed. But internally, the company hasn’t necessarily had the smoothest ride. The A.I. giant, valued at $150 billion, lost a slew of top executives this year. On Wednesday (Sept. 25) alone, three leaders announced their departures: chief technology officer Mira Murati, chief research officer Bob McGrew, and VP of research Barret Zoph. They join a larger group of former OpenAI employees who have left for rival A.I. developers and startups. As of now, CEO Sam Altman is one of only two active remaining members of the company’s original 11-person founding team.
OpenAI hasn’t just lost employees—it has also rehired some familiar faces. In May, OpenAI welcomed back Kyle Kosic, who worked at the company between 2021 and 2023 on its technical staff. Kosic left last year to join Elon Musk’s xAI. Several other outgoing OpenAI employees have taken similar routes and gone on to work for competing A.I. companies, showing just how competitive the industry is at the moment.
Here’s a look at some of the top leaders OpenAI has lost in 2024 thus far:
Andrej Karpathy, research scientist
Andrej Karpathy has left OpenAI not once but twice. One of OpenAI’s 11 founders, Karpathy helped build the company’s team on computer vision, generative modeling and reinforcement learning. He first departed in 2017 to lead Tesla’s Autopilot effort. Returning to OpenAI in 2023, Karpathy left once again in February this year to focus on “personal projects.” He subsequently established Eureka Labs, an A.I. education startup.
Ilya Sutskever, chief scientist and co-head of the superalignment team
A renowned machine learning researcher, Ilya Sutskever helped co-found OpenAI nearly a decade ago and served as the company’s chief scientist. He was also notably a member of the four-person board that temporarily ousted Altman last year before reinstating him. Sutskever, who was subsequently removed from the board, later said he regretted his involvement in the brief ouster. In May, he announced his departure from OpenAI and said he was leaving for a venture that is “very personally meaningful.”
Just days after Sutskever left, OpenAI executive Jan Leike announced his resignation as well. Sutskever and Leike co-ran the company’s safety team, which has since been disbanded. Leike said he decided to leave in part due to disagreements with OpenAI leadership “about the company’s core priorities,” citing a lack of focus on safety processes around developing AGI. Leike has since taken up a new role as head of alignment science at Anthropic, an OpenAI rival founded by former OpenAI employees Dario Amodei and Daniela Amodei.
John Schulman, head of alignment science
John Schulman, another OpenAI co-founder, made significant contributions to the creation of ChatGPT. After Leike’s departure, Schulman became head of OpenAI’s alignment science efforts and was appointed to its new safety committee in May. That’s why Schulman’s decision in August to step away from the company came as a surprise—especially when he revealed that he would be joining Anthropic. “This choice stems from my desire to deepen my focus on A.I. alignment and to start a new chapter of my career where I can return to hands-on technical work,” said Schulman on X, where he also clarified that his decision to step away from OpenAI wasn’t connected to a lack of support for alignment research.
Peter Deng, vice president of consumer product
Peter Deng, a top OpenAI product executive, also decided to step away from the company earlier this year. Having first joined OpenAI last year, he ended his tenure as vice president of product in July, according to his LinkedIn. Deng, who also previously held product leader positions at companies like Uber (UBER) and Meta (META), has not publicly revealed his next steps.
Greg Brockman, president
Greg Brockman, often seen as Altman’s right-hand man, hasn’t technically left the company but is instead taking a sabbatical through the end of 2024. In August, he announced his time off and described it as the “first time to relax since co-founding OpenAI nine years ago.” Brockman started off as OpenAI’s chief technology officer before becoming the company’s president in 2022. He indicated that he plans to return to OpenAI, noting that “the mission is far from complete; we still have a safe AGI to build.”
Mira Murati, chief technology officer
Mira Murati, one of OpenAI’s most public-facing figures, resigned earlier this week after more than six years with the company. “I’m stepping away because I want to create the time and space to do my own exploration,” said Murati, who notably served as interim CEO during Altman’s brief ousting last year, on X. Adding that she will “still be rooting” for OpenAI, Murati said her primary focus currently is “doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.” Altman praised her leadership in a statement on X, describing Murati as instrumental to OpenAI’s “development from an unknown research lab to an important company.”
Bob McGrew, chief research officer
Shortly after Murati’s resignation, Bob McGrew, OpenAI’s chief research officer, also announced plans to leave the company. He simply said on X, “It is time for me to take a break.” Having previously worked at PayPal (PYPL) and Palantir, McGrew started off as a member of OpenAI’s technical staff and has been serving as OpenAI’s chief research officer since August.
Barret Zoph, vice president of research
Barret Zoph is the third executive who announced his resignation this week. Like his two colleagues, Zoph said it’s a “personal decision based on how I want to evolve the next phase of my career.” Zoph, a former research scientist at Google (GOOGL), joined OpenAI in 2022 and played a large role in overseeing OpenAI’s post-training team.
Murati, McGrew and Zoph made their decisions independently of each other, according to Altman, but decided to depart simultaneously “so that we can work together for a smooth handover to the next generation of leadership.” The CEO conceded that, while the abruptness of the leadership changes isn’t the most natural, “we are not a normal company.”
OpenAI CEO Sam Altman has previously discussed his desire to achieve human-level reasoning in A.I.
As part of OpenAI’s path towards artificial general intelligence (A.G.I.), a term for technology matching the intelligence of humans, the company is reportedly attempting to enable A.I. models to perform advanced reasoning. Such work is taking place under a secretive project code-named ‘Strawberry,’ as reported by Reuters, which noted that the project was previously known as Q* or Q Star. While its name may have changed, the project isn’t exactly new. Researchers and co-founders of OpenAI have previously raised concerns about the initiative, concerns that reportedly played a part in the brief ousting of Sam Altman as OpenAI’s CEO in November.
Strawberry uses a unique method of post-training A.I. models, a process that improves their performance after they have been trained on datasets, according to Reuters, which cited internal OpenAI documents and a person familiar with the project. With the help of “deep-research” datasets, the company aims to create models that display human-level reasoning. OpenAI is reportedly looking into how Strawberry can allow models to complete tasks over an extended period of time, search the web on their own and act on their findings, and perform the work of engineers. OpenAI did not respond to requests for comment from Observer.
Elon Musk and Ilya Sutskever raised concerns about Q*
Altman, who has previously reiterated OpenAI’s desire to create models able to reason, briefly lost control of his company last year when his board fired him, only to reinstate him four days later. Shortly before the ousting, several OpenAI employees had become concerned over breakthroughs presented by what was then known as Q*, a project spearheaded by Ilya Sutskever, OpenAI’s former chief scientist. Sutskever himself had reportedly begun to worry about the project’s technology, as did OpenAI employees working on A.I. safety at the time. After his reinstatement, Altman referred to news reports about Q* as an “unfortunate leak” in an interview with the Verge.
Elon Musk, another OpenAI co-founder, has also raised the alarm about Q* in the past. The billionaire, who severed ties with the company in 2018, referred to the project in a lawsuit filed against OpenAI and Altman that has since been dropped. While discussing OpenAI’s close partnership with Microsoft (MSFT), Musk’s suit claimed that the terms of the deal dictate that Microsoft only has rights to OpenAI’s pre-A.G.I. technology and that it is up to OpenAI’s board to determine when the company has achieved A.G.I.
Musk argued that OpenAI’s GPT-4 model constitutes A.G.I., which he believes “poses a grave threat to humanity,” according to the suit. Court filings stated that “OpenAI is currently developing a model known as Q* that has an even stronger claim to A.G.I.”
Recent internal meetings have suggested that OpenAI is making rapid progress toward the type of human-level reasoning that the Strawberry project is pursuing. In an OpenAI all-hands meeting held earlier this month, the company unveiled a five-tiered system to track its progress towards A.G.I., as reported by Bloomberg. While the company said it is currently on the first level, known as “chatbots,” it revealed that it has nearly reached the second level of “reasoners,” which involves technology that can display human-level problem-solving. The subsequent steps consist of A.I. systems acting as “agents” that can take actions, “innovators” that aid in invention and “organizations” that do the work of an organization.
OpenAI, the San Francisco-based A.I. powerhouse now valued at $80 billion, operates under a unique structure in which a nonprofit entity runs a capped-profit subsidiary where investors can buy equity. However, CEO Sam Altman may be looking to transition the organization into a fully for-profit one, The Information reported last month. The move would be unusual, as OpenAI has already reaped the benefits of positive publicity from being a nonprofit while also receiving the kind of significant investments that typically go into a for-profit company.
OpenAI was founded as a nonprofit research lab in 2015 by Altman, Elon Musk, and Ilya Sutskever, among others. Born out of concern that financial incentives could lead A.I. astray, OpenAI declared in a blog post published upon its founding, “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
In 2019, OpenAI introduced a capped-profit arm. The company describes its structure on its website as “a partnership between our original nonprofit and a new capped profit arm.” Having found that relying purely on donations made it difficult to stay competitive, OpenAI adopted this dual model, which allowed it to raise money for its capital-intensive research while staying true to its nonprofit mission.
However, in the fine print, OpenAI reveals that the cap on returns for investors is an astounding 100x. For context, the most prominent A.I. stock, Nvidia, has risen around 30 times in the last five years. OpenAI’s profit cap is so high that it might as well not exist.
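To put that ceiling in perspective with a purely hypothetical figure: under a 100x cap, a $10 million investment could still return up to $10 million × 100 = $1 billion before the limit kicks in, a multiple few ventures ever approach.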
At the center of the model transition is OpenAI’s board
OpenAI maintains that it is accountable to an independent nonprofit board, whose members own no equity in the company. However, observers began questioning who actually gets to call the shots at the company after its former board tried to fire Altman late last year. Microsoft (MSFT), the largest corporate investor behind OpenAI with a $13 billion stake, agreed to hire Altman within three days of his firing. Altman won his job back at OpenAI only days after, and surprisingly, Microsoft appeared to have encouraged it. This raises the question: in the fierce race for A.I. talent, why did Microsoft not try harder to keep Altman from rejoining its competitor, OpenAI?
“What we call OpenAI should be called Microsoft A.I. Microsoft controls OpenAI,” said NYU Professor Scott Galloway in an interview with Tech.Eu. (In March, Microsoft tapped Mustafa Suleyman, a co-founder of Google’s A.I. lab DeepMind, to lead a new unit called Microsoft A.I.) Microsoft holds a non-voting observer role on the board of OpenAI. On July 3, Apple, which in June announced a partnership with OpenAI, said its App Store chief Phil Schiller would receive a similar seat on the board.
It is unclear how OpenAI would transition to a for-profit model; the change would likely involve doing away with the nonprofit board that oversees the company. In response to a request for comment from Reuters, OpenAI said, “We remain focused on building A.I. that benefits everyone. The nonprofit is core to our mission and will continue to exist.”
OpenAI’s capped-profit model is rare, but its hybrid governance model has a long history of precedent. Food retailer Newman’s Own is a nonprofit that wholly owns for-profit distributor No Limit, which produces and sells all Newman’s Own products. In 2022, Patagonia’s founder donated 100 percent of the for-profit clothing brand’s voting shares to a nonprofit, making it another for-profit corporation owned by a nonprofit.
Sam Altman will have a key role in OpenAI’s new safety committee.
Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee that will be led by CEO Sam Altman and other board members to guide its safety recommendations going forward, as revealed by the startup in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.
The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparations regarding the long-term risks of A.I., was disbanded with members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, with some citing concerns on the company’s current trajectory.
“It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”
Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, it will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo notably was the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.
OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.
OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.
While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.
“We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”
Ilya Sutskever, OpenAI’s co-founder and chief scientist, announced he was leaving the company on Tuesday. OpenAI confirmed the departure in a press release. Sutskever’s official exit comes nearly six months after he helped lead an effort with other board members to fire CEO Sam Altman, a move that backfired days later.
“After almost a decade, I have made the decision to leave OpenAI,” said Sutskever via a tweet on Tuesday afternoon. “I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.”
“Ilya and OpenAI are going to part ways,” said Altman in a tweet shortly after. “This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”
Altman went on to say that Jakub Pachocki, a senior researcher on Sutskever’s team, would be replacing him as OpenAI’s chief scientist. Moving forward, Sutskever points to an undisclosed project that is very “meaningful” to him; it’s unclear at this time what that project is.
Jan Leike, another OpenAI executive who worked with Sutskever on safeguarding future AI, also resigned on Tuesday, according to The Information. Leike and Sutskever led OpenAI’s superalignment team, charged with the grandiose task of making sure the company’s super-powerful AI does not turn against humans.
For the last six months, Sutskever’s status has been unclear at OpenAI. When Altman returned to the company in late November 2023, he said of Sutskever: “we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” Sutskever was the only member of OpenAI left in limbo at the time, neither fired nor rehired.
Since then, Altman has refused to answer questions about Sutskever’s status at the company in multiple interviews. We barely heard from Sutskever himself during this period: his departure announcement was his first tweet in over five months, and OpenAI’s chief scientist was missing from major announcements such as Sora and this week’s GPT-4 Omni.
Earlier this year, founding OpenAI member Andrej Karpathy left the company. In that case as well, Karpathy did not provide a particular reason for his exit, and later said he would work on personal projects.
Sutskever posted a photo with OpenAI leaders Altman, Mira Murati, Greg Brockman, and Jakub Pachocki shortly after announcing his exit. Several of those featured in the photo posted kind messages about Sutskever’s tenure at OpenAI, praising the renowned scientist for his contributions to the artificial intelligence world.