ReportWire

Tag: Omidyar Network

  • 10 Major Foundations Pledge $500M to Keep A.I. Focused on Humanity


    Michele Jawando serves as president of the Omidyar Network. Photo by Jerod Harris/Getty Images for Vox Media

    Some of the nation’s largest philanthropic players are banding together with one goal in mind: ensuring Silicon Valley isn’t the only force shaping how A.I. impacts society as the technology becomes increasingly embedded in areas like labor, education and art. The new initiative, called Humanity AI, will see ten foundations commit at least $500 million over the next five years to that mission.

    Humanity AI will be co-chaired by the Omidyar Network, a philanthropic venture established by eBay founder Pierre Omidyar that has committed nearly $2 billion over the past 21 years, and the 55-year-old MacArthur Foundation, which has awarded more than $8.27 billion to some 10,000 recipients since its establishment.

    “The message I want to resonate far and wide is this: A.I. is not destiny, it is design,” said Michele Jawando, president of the Omidyar Network, in a statement. “The decisions we make now about who builds A.I., who benefits from it, and whose values shape it will determine whether it amplifies human needs or erodes them.”

    Foundations joining the coalition must commit to making grants in at least one of Humanity AI’s five priority areas: equipping workers for an A.I.-driven economy; protecting artists from theft; addressing security risks in sectors such as climate and energy; promoting democracy; and supporting thoughtful integration of A.I. in education.

    A pooled fund of grants will be managed by Rockefeller Philanthropy Advisors, which expects to begin distributing funds early next year.

    The initiative’s wide-ranging goals are reflected in its diverse roster of members. The Mellon Foundation, for instance, is known for championing the arts and humanities; the Kapor Foundation focuses on making the tech ecosystem more equitable; and the Lumina Foundation works to boost U.S. economic prosperity through education. Other founding members include the Doris Duke Foundation, Ford Foundation, Siegel Family Endowment and David and Lucile Packard Foundation.

    Big Philanthropy takes on A.I.

    This isn’t the first time major U.S. foundations have teamed up to mitigate A.I.’s risks. In 2023, several of Humanity AI’s current members—including the Omidyar Network, MacArthur Foundation and Ford Foundation—launched a $200 million initiative aimed at funding A.I. projects that promote the public interest and responsible use.

    More recently, in July, a separate philanthropic coalition led by billionaires Bill Gates, Steve Ballmer and Charles Koch announced NextLadder Ventures, a $1 billion initiative to use emerging technologies to expand economic opportunity. That effort will prioritize providing A.I.-based tools to frontline workers and people facing job or housing instability.

    Humanity AI, meanwhile, hopes to grow its coalition in the coming months. “The stakes are too high to defer decisions to a handful of companies and leaders within them,” said John Palfrey, president of the MacArthur Foundation, in a statement. “Humanity AI seeks to shift that dynamic by resourcing technologists, researchers and advocates who are united by a shared vision of ensuring A.I. is a force for good, putting people and the planet first.”


    Alexandra Tremayne-Pengelly


  • The Case for Investing in Responsible A.I.


    Ford Foundation and Omidyar Network recognize Anthropic’s generative language A.I., which incorporates and prioritizes humanity, as aligned with their missions to make investments that generate positive financial returns while benefiting society at large. Unsplash+

    Artificial intelligence (A.I.) is having a very real impact on our politics, our workforce and our world. Chatbots and other large language models, text-to-image programs and video generators are changing how we learn, challenging who we trust and intensifying debates over intellectual property and content ownership. Generative A.I. has the potential to supercharge solutions to some of society’s most pressing problems, from previously incurable diseases to our global climate crisis and more. But without clear intent and proper guardrails, A.I. has the capacity to do great harm. Rampant bias and disinformation threaten democracy; Big Tech’s dominance, if further consolidated, has the potential to crush innovation. Workers are rapidly displaced when they don’t have a voice in how technology is used on the job.  

    As philanthropic leaders who manage both our grants and our capital for social good, we invest in generative A.I. that protects, promotes and prioritizes the public interest and the long-term benefit of humanity. With partners at the Nathan Cummings Foundation, we recently acquired shares in Anthropic, a leading generative A.I. company founded by two former OpenAI executives. Other investors in the company—which is recognized for its commitment to transparency, accountability and safety—include Amazon (AMZN) ($4 billion) and Google (GOOGL) ($2 billion).

    We understand both the promise and the peril of A.I. The funds we steward are themselves the product of profound technological transformation: the revolutionary horseless carriage at the beginning of the last century and an e-commerce platform made possible by the fledgling internet at the end. Innovation is coded in our DNA, and we feel a profound responsibility to do all we can to steer the next paradigm-shifting technology toward its highest ideals and away from its worst impulses. 

    Every harbinger of progress carries with it new risks—a Pandora’s box of intended and unintended consequences. Indeed, as French philosopher Paul Virilio famously observed, “The invention of the ship was also the invention of the shipwreck.” Today’s leaders would do well to heed Tim Cook’s charge to graduates in his 2019 Stanford commencement speech: “If you want credit for the good, take responsibility for the bad.”

    We are doing exactly this. At the Ford Foundation, we invest in organizations that help companies scale responsibly by developing frameworks for ethical technology innovation. We’re backing public-interest venture capital that funds companies like Reality Defender, which works to detect deepfakes before they become a larger problem. And we’re betting big on the emerging field of public interest technology. From organizations like the Algorithmic Justice League, which pressed the IRS to stop requiring taxpayers to use facial recognition software to access their accounts—a practice that has since ended—to initiatives like the Disability and Tech Fund, which advances the leadership of people with disabilities in tech development, civil society is walking in lockstep with tech leaders to ensure that the public interest remains front and center.

    Similarly, Omidyar Network aims to build a more inclusive infrastructure that explicitly addresses the social impact of generative A.I., elevating diversity in A.I. development and governance and promoting innovation and competition to democratize and maximize generative A.I.’s promise. It’s why, for example, Omidyar Network funds Humane Intelligence, an organization that works with companies to ensure their products are developed and deployed safely and ethically. 

    And now, Ford Foundation and Omidyar Network recognize Anthropic’s groundbreaking generative language A.I.—which incorporates and prioritizes humanity—as aligned with our own missions to make investments that generate positive financial returns while benefiting society at large. Anthropic is a Public Benefit Corporation with a charter and governance structure that mandates balancing social and financial interests, underscoring a responsibility to develop and maintain A.I. for human benefit. Founders Dario and Daniela Amodei started the company with trust and safety at its core, pioneering technology that guards against implicit bias.

    Their pioneering chatbot, “Claude,” distinguishes itself from competitors through its adherence to “Constitutional A.I.,” Anthropic’s method of training a language model not just on human interaction but also on adherence to ethical rules and normative principles. For instance, Claude’s training incorporates the UN’s Universal Declaration of Human Rights, as well as a democratically designed set of rules based on public input.

    Today, we see a unique opportunity for our colleagues in business and philanthropy to lay an early stake in a rapidly evolving field, putting the public interest front and center. According to Bloomberg, the generative A.I. market is poised to become a $1.3 trillion industry over the next decade. Investors who recognize this growing field as an opportunity to do well must also prioritize the public good and consider the full range of stakeholders who are implicated in the advent of this technology. 

    Ultimately, everyone with an interest in preserving democracy, strengthening the economy, and securing a more just and equal future for all has a responsibility to ensure that this emerging technology helps, rather than harms, people, communities and society in the years and generations to come.


    Roy Swan and Mike Kubzansky
