ReportWire

Tag: Jakub Pachocki

  • Inside OpenAI’s 9-Person Safety Committee Led by All-Powerful Sam Altman

    Sam Altman will have a key role in OpenAI’s new safety committee. Justin Sullivan/Getty Images

Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee, led by CEO Sam Altman and other board members, to guide its safety recommendations going forward, the startup announced in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.

The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparations regarding the long-term risks of A.I., was disbanded, with its members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, with some citing concerns about the company’s current trajectory.

    The “Superalignment” team was led by Ilya Sutskever, OpenAI’s co-founder and former chief scientist who played a lead role in the unsuccessful ousting of Altman last November. Sutskever announced his resignation on May 14, ending his almost decade-long tenure at the company. Jan Leike, who co-ran the Superalignment team alongside Sutskever, left the startup shortly afterwards and in an X post claimed that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He recently joined Anthropic, a rival A.I. startup founded by former OpenAI employees Dario and Daniela Amodei.

    “It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”

Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, the committee will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo was notably the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.

Meanwhile, former board members Helen Toner and Tasha McCauley recently called for increased A.I. regulation in an Economist article that described Altman as having “undermined the board’s oversight of key decisions and internal safety protocols.”

    The new committee is filled with OpenAI insiders

    OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.

    OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.

    While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.

    “We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”

    Alexandra Tremayne-Pengelly

  • Ilya Sutskever Quits OpenAI

Ilya Sutskever, OpenAI’s co-founder and chief scientist, announced he was leaving the company on Tuesday. OpenAI confirmed the departure in a press release. Sutskever’s official exit comes nearly six months after he helped lead an effort with other board members to fire CEO Sam Altman, a move that backfired days later.

    “After almost a decade, I have made the decision to leave OpenAI,” said Sutskever via a tweet on Tuesday afternoon. “I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.”

    “Ilya and OpenAI are going to part ways,” said Altman in a tweet shortly after. “This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

Altman went on to say that Jakub Pachocki, a senior researcher on Sutskever’s team, would be replacing him as OpenAI’s chief scientist. Sutskever pointed to an undisclosed project he called personally “meaningful” as his next step; it’s unclear at this time what that project is.

    Jan Leike, another OpenAI executive who worked with Sutskever on safeguarding future AI, also resigned on Tuesday, according to The Information. Leike and Sutskever led OpenAI’s superalignment team, charged with the grandiose task of making sure the company’s super-powerful AI does not turn against humans.

For the last six months, Sutskever’s status at OpenAI has been unclear. When Altman returned to the company in late November 2023, he said of Sutskever: “we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” Sutskever was the only member of OpenAI left in limbo at the time—neither fired nor rehired.

Since then, Altman has refused to answer questions about Sutskever’s status at the company in multiple interviews. We barely heard from Sutskever himself during this time period. His resignation announcement was his first tweet in over five months, and OpenAI’s chief scientist was absent from major announcements such as Sora and this week’s GPT-4o.

Earlier this year, founding OpenAI member Andrej Karpathy left the company. In that case as well, Karpathy did not give a specific reason for his exit, though he later said he would work on personal projects.

Sutskever posted a photo with OpenAI leaders Altman, Mira Murati, Greg Brockman, and Jakub Pachocki shortly after announcing his exit. Several people featured in the photo posted kind messages about Sutskever’s tenure at OpenAI, praising the renowned scientist for his contributions to the artificial intelligence world.

    Maxwell Zeff