ReportWire

Tag: Daniela Amodei

  • OpenAI’s Leadership Exodus: 9 Key Execs Who Left the A.I. Giant This Year

    Mira Murati, Ilya Sutskever, Greg Brockman and Andrej Karpathy (clockwise, starting at top left). Photos by Slaven Vlasic/Getty Images, JACK GUEZ/AFP via Getty Images, Anna Moneymaker/Getty Images and Michael Macor/The San Francisco Chronicle via Getty Images

Since ChatGPT took the world by storm in late 2022, OpenAI’s revenue and market value have skyrocketed. But internally, the company hasn’t necessarily had the smoothest ride. The A.I. giant, valued at $150 billion, lost a slew of top executives this year. On Wednesday (Sept. 25) alone, three leaders announced their departures: chief technology officer Mira Murati, chief research officer Bob McGrew and VP of research Barret Zoph. They join a larger group of former OpenAI employees who have left for rival A.I. developers and startups. As of now, CEO Sam Altman is one of only two remaining active members of the company’s original 11-person founding team.

    OpenAI hasn’t just lost employees—it has also rehired some familiar faces. In May, OpenAI welcomed back Kyle Kosic, who worked at the company between 2021 and 2023 on its technical staff. Kosic left last year to join Elon Musk’s xAI. Several other outgoing OpenAI employees have taken similar routes and gone on to work for competing A.I. companies, showing just how competitive the industry is at the moment.

    Here’s a look at some of the top leaders OpenAI has lost in 2024 thus far:

    Andrej Karpathy, research scientist

Andrej Karpathy has left OpenAI not once but twice. One of OpenAI’s 11 founders, Karpathy helped build the company’s teams working on computer vision, generative modeling and reinforcement learning. He first departed in 2017 to lead Tesla’s Autopilot effort. After returning to OpenAI in 2023, Karpathy left once again in February this year to focus on “personal projects.” He subsequently established Eureka Labs, an A.I. education startup.

Ilya Sutskever, chief scientist and co-head of the Superalignment team

A renowned machine learning researcher, Ilya Sutskever co-founded OpenAI nearly a decade ago and served as the company’s chief scientist. He was also notably a member of the four-person board that temporarily ousted Altman last year before reinstating him. Sutskever, who was subsequently removed from the board, later said he regretted his involvement in the brief ouster. In May, he announced his departure from OpenAI and said he was leaving for a venture that is “very personally meaningful.”

    This project was revealed to be Safe Superintelligence, a startup focused on developing a safe form of artificial general intelligence (AGI), a type of A.I. that can think and learn on par with humans. Earlier this month, the company was valued at $5 billion after raising $1 billion from investors, including Andreessen Horowitz and Sequoia Capital.

Jan Leike, co-head of the Superalignment team

    Just days after Sutskever left, OpenAI executive Jan Leike announced his resignation as well. Sutskever and Leike co-ran the company’s safety team, which has since been disbanded. Leike said he decided to leave in part due to disagreements with OpenAI leadership “about the company’s core priorities,” citing a lack of focus on safety processes around developing AGI. Leike has since taken up a new role as head of alignment science at Anthropic, an OpenAI rival founded by former OpenAI employees Dario Amodei and Daniela Amodei.

    John Schulman, head of alignment science

    John Schulman, another OpenAI co-founder, made significant contributions to the creation of ChatGPT. After Leike’s departure, Schulman became head of OpenAI’s alignment science efforts and was appointed to its new safety committee in May. That’s why Schulman’s decision in August to step away from the company came as a surprise—especially when he revealed that he would be joining Anthropic. “This choice stems from my desire to deepen my focus on A.I. alignment and to start a new chapter of my career where I can return to hands-on technical work,” said Schulman on X, where he also clarified that his decision to step away from OpenAI wasn’t connected to a lack of support for alignment research.

    Peter Deng, vice president of consumer product

    Peter Deng, a top OpenAI product executive, also decided to step away from the company earlier this year. Having first joined OpenAI last year, he ended his tenure as vice president of product in July, according to his LinkedIn. Deng, who also previously held product leader positions at companies like Uber (UBER) and Meta (META), has not publicly revealed his next steps.

    Greg Brockman, president

    Greg Brockman, often seen as Altman’s right-hand man, hasn’t technically left the company but is instead taking a sabbatical through the end of 2024. In August, he announced his time off and described it as the “first time to relax since co-founding OpenAI nine years ago.” Brockman started off as OpenAI’s chief technology officer before becoming the company’s president in 2022. He indicated that he plans to return to OpenAI, noting that “the mission is far from complete; we still have a safe AGI to build.”

    Mira Murati, chief technology officer

    Mira Murati, one of OpenAI’s most public-facing figures, resigned earlier this week after more than six years with the company. “I’m stepping away because I want to create the time and space to do my own exploration,” said Murati, who notably served as interim CEO during Altman’s brief ousting last year, on X. Adding that she will “still be rooting” for OpenAI, Murati said her primary focus currently is “doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.” Altman praised her leadership in a statement on X, describing Murati as instrumental to OpenAI’s “development from an unknown research lab to an important company.”

    Bob McGrew, chief research officer

Shortly after Murati’s resignation, Bob McGrew, OpenAI’s chief research officer, also announced plans to leave the company. He simply said on X, “It is time for me to take a break.” Having previously worked at PayPal (PYPL) and Palantir, McGrew started as a member of OpenAI’s technical staff and had served as the company’s chief research officer since August.

    Barret Zoph, vice president of research

    Barret Zoph is the third executive who announced his resignation this week. Like his two colleagues, Zoph said it’s a “personal decision based on how I want to evolve the next phase of my career.” Zoph, a former research scientist at Google (GOOGL), joined OpenAI in 2022 and played a large role in overseeing OpenAI’s post-training team.

Murati, McGrew and Zoph made their decisions independently of each other, according to Altman, but chose to depart simultaneously “so that we can work together for a smooth handover to the next generation of leadership.” The CEO conceded that the abruptness of the leadership changes isn’t natural, but noted, “we are not a normal company.”

    Alexandra Tremayne-Pengelly


  • Inside OpenAI’s 9-Person Safety Committee Led by All-Powerful Sam Altman

    Sam Altman will have a key role in OpenAI’s new safety committee. Justin Sullivan/Getty Images

Following the dissolution of an OpenAI team focused on artificial intelligence safety, the company has formed a new safety and security committee, led by CEO Sam Altman and other board members, to guide its safety recommendations going forward, the startup revealed in a blog post yesterday (May 28). The announcement also noted that OpenAI has begun training a new A.I. model to succeed GPT-4, the one currently powering its ChatGPT chatbot.

The committee’s formation comes shortly after OpenAI’s “Superalignment” team, which worked on preparing for the long-term risks of A.I., was disbanded, with members dispersed across different areas of the company. Key employees overseeing the safety team left OpenAI earlier this month, with some citing concerns about the company’s current trajectory.

    The “Superalignment” team was led by Ilya Sutskever, OpenAI’s co-founder and former chief scientist who played a lead role in the unsuccessful ousting of Altman last November. Sutskever announced his resignation on May 14, ending his almost decade-long tenure at the company. Jan Leike, who co-ran the Superalignment team alongside Sutskever, left the startup shortly afterwards and in an X post claimed that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He recently joined Anthropic, a rival A.I. startup founded by former OpenAI employees Dario and Daniela Amodei.

    “It’s pretty clear that there were these different camps within OpenAI that were leading to friction,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, told Observer. “It seems that the people who were not aligned with Sam Altman’s vision have off-ramped either forcibly or by their own volition, and what’s left now is that they’re all speaking with one voice and that voice is Sam Altman.”

    Members of the new safety and security committee will be responsible for advising OpenAI’s board on recommendations regarding company projects and operations. But with its CEO leading the group, “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” said Kreps. In addition to Altman, it will be headed by OpenAI chairman and former Salesforce co-CEO Bret Taylor alongside board members Nicole Seligman, a former Sony Entertainment executive, and Adam D’Angelo, a co-founder of Quora. D’Angelo notably was the only member of the original OpenAI board to stay on as a director after its failed firing of Altman.

Meanwhile, former board members Helen Toner and Tasha McCauley recently called for increased A.I. regulation in an Economist article that described Altman as having “undermined the board’s oversight of key decisions and internal safety protocols.”

    The new committee is filled with OpenAI insiders

    OpenAI’s technical and policy experts who have previously expressed their support for Altman will make up the rest of the committee. These include Jakub Pachocki, who recently filled Sutskever’s role as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team. Both researchers publicly resigned amid Altman’s brief removal last year and returned following his reinstatement. The committee is rounded out by Lilian Weng, John Schulman and Matt Knight, who respectively oversee the safety systems, alignment science and security teams at OpenAI and in November were among the more than 700 employees who signed a letter threatening to quit unless Altman was reinstated.

    OpenAI also revealed plans to consult cybersecurity officials like John Carlin, a former Justice Department official, and Rob Joyce, previously a cybersecurity director for the National Security Agency. “Happy to be able to support the important security and safety efforts of OpenAI!” said Joyce in an X post announcing the news. The company’s newly formed committee will spend the next 90 days developing processes and safeguards, which will be subsequently given to the board and shared in a public update describing adopted recommendations.

    While OpenAI didn’t provide a timeline for its new A.I. model, its blog post described it as one that will “bring us to the next level of capabilities” on its path to artificial general intelligence, or A.G.I., a term used for A.I. systems matching the capabilities of humans. Earlier this month, the company unveiled an updated version of ChatGPT based on a new A.I. model known as GPT-4o that showcased enhanced capabilities across audio, image and video.

    “We’ve seen in the last several months and last few days more indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” said Kreps, adding that the company “seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”

    Alexandra Tremayne-Pengelly


  • Anthropic’s Sibling Founders On Leaving OpenAI to Start a $15B Startup

    Anthropic Co-Founder & CEO Dario Amodei speaks onstage during TechCrunch Disrupt 2023 at Moscone Center on September 20, 2023 in San Francisco, California. Kimberly White/Getty Images for TechCrunch

The Bloomberg Tech Summit yesterday (May 9) opened with brother-and-sister technologists Dario and Daniela Amodei, former OpenAI executives who stepped away to found their own A.I. company, Anthropic, now valued at $15 billion. The entrepreneur duo is focused on “scaling up” Anthropic by building models and relationships to serve emerging markets.

    Dario and Daniela left OpenAI in late 2020 to start their own company, with the goal of building A.I. systems that are not just powerful and intelligent but are also aligned with human values. “We left OpenAI because of concerns around the direction,” Daniela Amodei, who serves as president of Anthropic, said during an onstage interview yesterday. “We wanted to be sure the tools were being used reliably and responsibly…We want to be the most responsible A.I. we can, always asking the question, ‘What could go wrong here?’”

    “Our focus is on scaling with more data, along with models, and creating the relationships necessary to scale up the company in a more enterprise direction,” said Dario Amodei, the company’s CEO.

Asked why users should trust them after last year’s debacle between OpenAI’s board and CEO Sam Altman, Dario said, “You shouldn’t. Look at all the companies out there. Who can you trust? It’s a very good question. We believe in doing what you say, and saying what you do. The broader societal question is, is A.I. so big that there needs to be some kind of democratic mandate on the technology?…We need to put positive pressure on this industry to always do the right thing for our users.”

    Asked how a brother-and-sister duo both ended up in the tech world, Dario and Daniela said it was a natural result of growing up in San Francisco. “Ever since the time we were kids, we always had a desire to make things better. It may sound corny, but it was a really deep thing with us,” Daniela said. “Growing up in San Francisco in the 1990s, we saw that things were happening but we didn’t yet have the language for what that was. We just saw a lot of well-dressed people going into swanky offices and we wondered, what are all these people doing? What are they working on? They were all young people with good jobs and that was attractive.”

“For me, in the 90s, my fascination was with theories of the early universe more than business,” said Dario. “But over time we began to realize that if you wanted to do science, or anything else, and be socially responsible, you had to be involved and, later on, join one of these A.I. companies.”

Dario noted that the barrier to entry for creating a new A.I. model is rising rapidly due to increasingly high costs. The current generation of A.I. models cost about $100 million to make, he said. “In the next few years it’s going to grow to the $100 billion range. And the models will look very different.

    “Plus, you have to start thinking about the larger ecosystem, carbon offsets for large data centers, so we are looking into that as well,” he added.

    Dan Holden


  • The Case for Investing in Responsible A.I.

Ford Foundation and Omidyar Network recognize Anthropic’s groundbreaking generative language A.I.—which incorporates and prioritizes humanity—as aligned with their missions to make investments that generate positive financial returns while benefiting society at large. Unsplash+

    Artificial intelligence (A.I.) is having a very real impact on our politics, our workforce and our world. Chatbots and other large language models, text-to-image programs and video generators are changing how we learn, challenging who we trust and intensifying debates over intellectual property and content ownership. Generative A.I. has the potential to supercharge solutions to some of society’s most pressing problems, from previously incurable diseases to our global climate crisis and more. But without clear intent and proper guardrails, A.I. has the capacity to do great harm. Rampant bias and disinformation threaten democracy; Big Tech’s dominance, if further consolidated, has the potential to crush innovation. Workers are rapidly displaced when they don’t have a voice in how technology is used on the job.  

As philanthropic leaders who manage both our grants and our capital for social good, we invest in generative A.I. that protects, promotes and prioritizes the public interest and the long-term benefit of humanity. With partners at the Nathan Cummings Foundation, we recently acquired shares in Anthropic, a leading generative A.I. company founded by two former OpenAI executives. Other investors in the company—which is recognized for its commitment to transparency, accountability and safety—include Amazon (AMZN) ($4 billion) and Google (GOOGL) ($2 billion).

    We understand both the promise and the peril of A.I. The funds we steward are themselves the product of profound technological transformation: the revolutionary horseless carriage at the beginning of the last century and an e-commerce platform made possible by the fledgling internet at the end. Innovation is coded in our DNA, and we feel a profound responsibility to do all we can to steer the next paradigm-shifting technology toward its highest ideals and away from its worst impulses. 

    Every harbinger of progress carries with it new risks—a Pandora’s box of intended and unintended consequences. Indeed, as French philosopher Paul Virilio famously observed, “The invention of the ship was also the invention of the shipwreck.” Today’s leaders would do well to heed Tim Cook’s charge to graduates in his 2019 Stanford commencement speech: “If you want credit for the good, take responsibility for the bad.”

We are doing exactly this. At the Ford Foundation, we invest in organizations that help companies scale responsibly by developing frameworks for ethical technology innovation. We’re backing public-interest venture capital that funds companies like Reality Defender, which works to detect deepfakes before they become a larger problem. And we’re betting big on the emerging field of public interest technology. From organizations like the Algorithmic Justice League, which recently pressed the IRS to stop forcing taxpayers to use facial recognition software to log into their accounts, ultimately ending the practice, to initiatives like the Disability and Tech Fund, which advances the leadership of people with disabilities in tech development, civil society is walking in lockstep with tech leaders to ensure that the public interest remains front and center.

    Similarly, Omidyar Network aims to build a more inclusive infrastructure that explicitly addresses the social impact of generative A.I., elevating diversity in A.I. development and governance and promoting innovation and competition to democratize and maximize generative A.I.’s promise. It’s why, for example, Omidyar Network funds Humane Intelligence, an organization that works with companies to ensure their products are developed and deployed safely and ethically. 

And now, Ford Foundation and Omidyar Network recognize Anthropic’s groundbreaking generative language A.I.—which incorporates and prioritizes humanity—as aligned with our own missions to make investments that generate positive financial returns while benefiting society at large. Anthropic is a Public Benefit Corporation with a charter and governance structure that mandates balancing social and financial interests, underscoring a responsibility to develop and maintain A.I. for human benefit. Founders Dario and Daniela Amodei started the company with trust and safety at its core, pioneering technology that guards against implicit bias.

Their pioneering chatbot, “Claude,” distinguishes itself from competitors with its adherence to “Constitutional A.I.,” Anthropic’s method of training a language model not just on human interaction but also on a set of ethical rules and normative principles. For instance, Claude’s constitution incorporates the UN’s Universal Declaration of Human Rights, as well as a democratically designed set of rules based on public input.

    Today, we see a unique opportunity for our colleagues in business and philanthropy to lay an early stake in a rapidly evolving field, putting the public interest front and center. According to Bloomberg, the generative A.I. market is poised to become a $1.3 trillion industry over the next decade. Investors who recognize this growing field as an opportunity to do well must also prioritize the public good and consider the full range of stakeholders who are implicated in the advent of this technology. 

    Ultimately, everyone with an interest in preserving democracy, strengthening the economy, and securing a more just and equal future for all has a responsibility to ensure that this emerging technology helps, rather than harms, people, communities and society in the years and generations to come.

    Roy Swan and Mike Kubzansky
