ReportWire

Tag: Jan Leike

  • OpenAI’s Leadership Exodus: 9 Key Execs Who Left the A.I. Giant This Year

    Mira Murati, Ilya Sutskever, Greg Brockman and Andrej Karpathy (clockwise, starting at top left). Photos by Slaven Vlasic/Getty Images, JACK GUEZ/AFP via Getty Images, Anna Moneymaker/Getty Images and Michael Macor/The San Francisco Chronicle via Getty Images

Since ChatGPT took the world by storm in late 2022, OpenAI’s revenue and market value have skyrocketed. But internally, the company hasn’t had the smoothest ride. The A.I. giant, valued at $150 billion, has lost a slew of top executives this year. On Wednesday (Sept. 25) alone, three leaders announced their departures: chief technology officer Mira Murati, chief research officer Bob McGrew and VP of research Barret Zoph. They join a larger group of former OpenAI employees who have left for rival A.I. developers and startups. As of now, CEO Sam Altman is one of only two active remaining members of the company’s original 11-person founding team.

    OpenAI hasn’t just lost employees—it has also rehired some familiar faces. In May, OpenAI welcomed back Kyle Kosic, who worked at the company between 2021 and 2023 on its technical staff. Kosic left last year to join Elon Musk’s xAI. Several other outgoing OpenAI employees have taken similar routes and gone on to work for competing A.I. companies, showing just how competitive the industry is at the moment.

    Here’s a look at some of the top leaders OpenAI has lost in 2024 thus far:

    Andrej Karpathy, research scientist

Andrej Karpathy has left OpenAI not once but twice. One of OpenAI’s 11 founders, Karpathy helped build the company’s teams working on computer vision, generative modeling and reinforcement learning. He first departed in 2017 to lead Tesla’s Autopilot effort. After returning to OpenAI in 2023, Karpathy left once again in February this year to focus on “personal projects.” He subsequently established Eureka Labs, an A.I. education startup.

Ilya Sutskever, chief scientist and co-head of the Superalignment team

    A renowned machine learning researcher, Ilya Sutskever helped co-found OpenAI nearly a decade ago and served as the company’s chief scientist. He was also notably a member of the four-person board that temporarily ousted Altman last year before reinstating him. Sutskever, who was subsequently removed from the board, later said he regretted his involvement in the brief ouster. In May, he announced his departure from OpenAI and said he was leaving for a venture that is “very personally meaningful.”

    This project was revealed to be Safe Superintelligence, a startup focused on developing a safe form of artificial general intelligence (AGI), a type of A.I. that can think and learn on par with humans. Earlier this month, the company was valued at $5 billion after raising $1 billion from investors, including Andreessen Horowitz and Sequoia Capital.

Jan Leike, co-head of the Superalignment team

    Just days after Sutskever left, OpenAI executive Jan Leike announced his resignation as well. Sutskever and Leike co-ran the company’s safety team, which has since been disbanded. Leike said he decided to leave in part due to disagreements with OpenAI leadership “about the company’s core priorities,” citing a lack of focus on safety processes around developing AGI. Leike has since taken up a new role as head of alignment science at Anthropic, an OpenAI rival founded by former OpenAI employees Dario Amodei and Daniela Amodei.

    John Schulman, head of alignment science

    John Schulman, another OpenAI co-founder, made significant contributions to the creation of ChatGPT. After Leike’s departure, Schulman became head of OpenAI’s alignment science efforts and was appointed to its new safety committee in May. That’s why Schulman’s decision in August to step away from the company came as a surprise—especially when he revealed that he would be joining Anthropic. “This choice stems from my desire to deepen my focus on A.I. alignment and to start a new chapter of my career where I can return to hands-on technical work,” said Schulman on X, where he also clarified that his decision to step away from OpenAI wasn’t connected to a lack of support for alignment research.

    Peter Deng, vice president of consumer product

    Peter Deng, a top OpenAI product executive, also decided to step away from the company earlier this year. Having first joined OpenAI last year, he ended his tenure as vice president of product in July, according to his LinkedIn. Deng, who also previously held product leader positions at companies like Uber (UBER) and Meta (META), has not publicly revealed his next steps.

    Greg Brockman, president

    Greg Brockman, often seen as Altman’s right-hand man, hasn’t technically left the company but is instead taking a sabbatical through the end of 2024. In August, he announced his time off and described it as the “first time to relax since co-founding OpenAI nine years ago.” Brockman started off as OpenAI’s chief technology officer before becoming the company’s president in 2022. He indicated that he plans to return to OpenAI, noting that “the mission is far from complete; we still have a safe AGI to build.”

    Mira Murati, chief technology officer

    Mira Murati, one of OpenAI’s most public-facing figures, resigned earlier this week after more than six years with the company. “I’m stepping away because I want to create the time and space to do my own exploration,” said Murati, who notably served as interim CEO during Altman’s brief ousting last year, on X. Adding that she will “still be rooting” for OpenAI, Murati said her primary focus currently is “doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.” Altman praised her leadership in a statement on X, describing Murati as instrumental to OpenAI’s “development from an unknown research lab to an important company.”

    Bob McGrew, chief research officer

Shortly after Murati’s resignation, Bob McGrew, OpenAI’s chief research officer, also announced plans to leave the company. He simply said on X, “It is time for me to take a break.” Having previously worked at PayPal (PYPL) and Palantir, McGrew started as a member of OpenAI’s technical staff and has served as its chief research officer since August.

    Barret Zoph, vice president of research

Barret Zoph is the third executive to announce his resignation this week. Like his two colleagues, Zoph said it’s a “personal decision based on how I want to evolve the next phase of my career.” Zoph, a former research scientist at Google (GOOGL), joined OpenAI in 2022 and played a large role in overseeing OpenAI’s post-training team.

Murati, McGrew and Zoph made their decisions independently of each other, according to Altman, but chose to depart simultaneously “so that we can work together for a smooth handover to the next generation of leadership.” The CEO conceded that such abrupt leadership changes aren’t natural, but added, “we are not a normal company.”

    Alexandra Tremayne-Pengelly

  • Inside OpenAI’s 9-Person Safety Committee Led by All-Powerful Sam Altman

    Sam Altman will have a key role in OpenAI’s new safety committee. Justin Sullivan/Getty Images

Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee, led by CEO Sam Altman and other board members, to guide its safety recommendations going forward, the startup revealed in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.

The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparing for the long-term risks of A.I., was disbanded, with its members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, with some citing concerns about the company’s current trajectory.

    The “Superalignment” team was led by Ilya Sutskever, OpenAI’s co-founder and former chief scientist who played a lead role in the unsuccessful ousting of Altman last November. Sutskever announced his resignation on May 14, ending his almost decade-long tenure at the company. Jan Leike, who co-ran the Superalignment team alongside Sutskever, left the startup shortly afterwards and in an X post claimed that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He recently joined Anthropic, a rival A.I. startup founded by former OpenAI employees Dario and Daniela Amodei.

    “It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”

    Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, it will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo notably was the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.

Meanwhile, former board members Helen Toner and Tasha McCauley recently called for increased A.I. regulation in an Economist article that described Altman as having “undermined the board’s oversight of key decisions and internal safety protocols.”

    The new committee is filled with OpenAI insiders

    OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.

    OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.

    While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.

    “We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”

    Alexandra Tremayne-Pengelly

  • Ilya Sutskever Quits OpenAI

Ilya Sutskever, OpenAI’s co-founder and chief scientist, announced he was leaving the company on Tuesday, and OpenAI confirmed the departure in a press release. Sutskever’s official exit comes nearly six months after he helped lead an effort with other board members to fire CEO Sam Altman; the move backfired days later.

    “After almost a decade, I have made the decision to leave OpenAI,” said Sutskever via a tweet on Tuesday afternoon. “I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.”

    “Ilya and OpenAI are going to part ways,” said Altman in a tweet shortly after. “This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

Altman went on to say that Jakub Pachocki, a senior researcher on Sutskever’s team, would replace him as OpenAI’s chief scientist. As for Sutskever, he pointed only to an undisclosed project that is “very personally meaningful” to him; it’s unclear at this time what that project is.

    Jan Leike, another OpenAI executive who worked with Sutskever on safeguarding future AI, also resigned on Tuesday, according to The Information. Leike and Sutskever led OpenAI’s superalignment team, charged with the grandiose task of making sure the company’s super-powerful AI does not turn against humans.

For the last six months, Sutskever’s status at OpenAI has been unclear. When Altman returned to the company in late November 2023, he said of Sutskever: “we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” Sutskever was the only member of OpenAI’s leadership left in limbo at the time, neither fired nor rehired.

Since then, Altman has refused to answer questions about Sutskever’s status at the company in multiple interviews, and we barely heard from Sutskever himself during this period. His resignation announcement was his first tweet in over five months, and OpenAI’s chief scientist was notably absent from major announcements such as Sora and this week’s GPT-4o.

Earlier this year, founding OpenAI member Andrej Karpathy left the company. Karpathy likewise gave no specific reason for his exit, later saying he would focus on personal projects.

Sutskever posted a photo with OpenAI leaders Altman, Mira Murati, Greg Brockman and Jakub Pachocki shortly after announcing his exit. Several of those featured in the photo posted kind messages about Sutskever’s tenure at OpenAI, praising the renowned scientist for his contributions to the artificial intelligence world.

    Maxwell Zeff
