Tag: ai pros and cons

  • CEOs may not realize it, but they already know what to do about A.I.

    A.I. has arrived, and CEOs are asking what to do. The answer might surprise them: Do what you know best.

    It’s a safe bet that various forms of artificial intelligence, from algorithmic decision-support systems to machine learning applications, have already made their way into the front and back offices of most companies. Remarkably, generative A.I. is now demonstrating value in creative and imagination-driven tasks.

    We’ve seen this movie before. The Internet. Mobile. Social media. And now artificial intelligence. With each, companies have been confronted with a new technology that holds both great promise and considerable uncertainty, adopted seemingly overnight by consumers, students, professionals, and businesses.

    CEOs recognize the challenge. If they take a wait-and-see approach or simply clamp down on A.I. use, they risk missing a historic opportunity to supercharge their products, services, and operations. On the other hand, allowing the new technology to proliferate within their companies in uncoordinated, even haphazard, ways can lead not only to duplication and fragmentation, but to something much more serious: irresponsible uses of A.I., including the perpetuation of biases, amplification of misinformation, and inadvertent release of proprietary data.

    What to do? A.I. is evolving so rapidly that there is no definitive playbook. But most of today’s CEOs have learned valuable lessons from prior technology inflection points. We believe they are well-equipped to apply three basic lessons:

    Data governance must become data and A.I. governance

    Governance may sound to some like heavy-handed, top-down oversight. But this is not about choosing either centralization or decentralization. It’s about developing company-wide approaches and standards for critical enablers, from the technology architecture needed to support and scale A.I. workloads to the ways you ensure compliance with both regulation and your company’s core values. Without enterprise consistency, you won’t have a clear line of sight into your A.I. applications, and you won’t be able to integrate and scale them.

    You don’t have to start from scratch. Most companies have established data governance to ensure compliance with data privacy regulations, such as the EU’s GDPR. Now, data governance must become data and A.I. governance.

    A.I. applications and models throughout the company should be inventoried, mapped, and continuously monitored. Most urgently, enterprise standards for data quality should be defined and implemented, including data lineage and data provenance: where, when, and how the data was collected or synthesized, and who has the right to use it. Some A.I. systems may be “black boxes,” but the data sets selected to train and feed them are knowable and manageable–in particular for business applications.

    Employees don’t need to become data scientists–they need to become A.I.-literate

    History teaches us that when a technology becomes ubiquitous, virtually everyone’s job changes. Here’s an example: The first project of the Data & Trust Alliance–a consortium we co-chair that develops data and A.I. practices–targeted what some might consider unlikely parts of our companies: human resources and procurement.

    The Alliance developed algorithmic safety tools–safeguards to detect, mitigate and monitor bias in the algorithmic systems supplied by vendors for employment decisions.

    When the tools were introduced to HR and procurement professionals, they asked for education, not in how to become data scientists, but in how to be A.I.-literate HR and procurement professionals. We shared modules on how to evaluate the data used to train models, what types of bias testing to look for, how to assess model performance, and more.

    The lesson? Yes, we need data scientists and machine learning experts. But it’s time to enhance the data and A.I. literacy of our entire workforce.

    Set the right culture

    Many companies have adopted ethical A.I. principles, but we know that trust is earned by what we do, more than by what we say. We need to be transparent with consumers and employees about when they are interacting with an A.I. system. We need to ensure that our A.I. systems–especially for high-consequence applications–are explainable, remain under human control, and can withstand the highest levels of scrutiny, including the auditing required by new and proposed regulations. In short, we need to evolve our corporate cultures for the era of A.I.

    Another project by the Alliance was to create “new diligence” criteria to assess the value and risk inherent in targeting data- and A.I.-centric companies for investment or acquisition. The Alliance created Data Diligence and AI Diligence, but the greatest need was for Responsible Culture Diligence–ensuring that values, team composition, incentives, feedback loops, and decision rights support the new and unique requirements of A.I.-driven business.

    CEOs have been here before. For some companies, it took decades and a pandemic to fully realize that “digital transformation” implicated every part of the company and its relationships with all stakeholders. And what were the results of misreading the Internet, mobile, and social? Disrupted business models and loss of competitiveness, as well as unintended consequences for society.

    What will be the result of getting this one wrong? We could miss a once-in-a-generation opportunity to achieve radical breakthroughs, solve intractable problems, delight customers, empower employees, reduce waste and errors, and serve society. Far worse, we risk doing harm to our stakeholders and to future generations.

    A.I. is not solely–indeed, not most importantly–a technology challenge. It is the next driver of enterprise transformation. It’s up to the CEO, board, and the entire C-suite to lead that. And the time to do so is now.

    Kenneth I. Chenault and Samuel J. Palmisano are founders and co-chairs of the Data & Trust Alliance, a not-for-profit organization whose 25 cross-industry members develop and adopt responsible data and AI practices. Members include CVS Health, General Catalyst, GM, Humana, Mastercard, Meta, Nike, Pfizer, the Smithsonian Institution, UPS, and Walmart. Chenault is the chairman and managing director of General Catalyst and the former chairman and CEO of American Express. Palmisano is the former chairman and CEO of IBM.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

  • Silicon Valley is knowingly violating A.I. ethical principles. Society can’t respond if we let disagreements poison the debate

    With criticism of ChatGPT much in the news, we are also increasingly hearing about disagreements among thinkers who are critical of A.I. While debating about such an important issue is natural and expected, we can’t allow differences to paralyze our very ability to make progress on A.I. ethics at this pivotal time. Today, I fear that those who should be natural allies across the tech/business, policy, and academic communities are instead increasingly at each other’s throats. When the field of A.I. ethics appears divided, it becomes easier for vested interests to brush aside ethical considerations altogether.

    Such disagreements need to be understood in the context of how we reached the current moment of excitement around the rapid advances in large language models and other forms of generative A.I.

    OpenAI, the company behind ChatGPT, was initially set up as a non-profit amid much fanfare about a mission to solve the A.I. safety problem. However, as it became clear that OpenAI’s work on large language models was lucrative, OpenAI pivoted to a for-profit structure. It deployed ChatGPT and partnered with Microsoft–which has consistently sought to depict itself as the tech corporation most concerned about ethics.

    Both companies knew that ChatGPT violates, for example, the globally endorsed UNESCO AI ethical principles. OpenAI even refused to publicly release a previous version of GPT, citing worry about much the same kinds of potential for misuse we are now witnessing. But for OpenAI and Microsoft, the temptation to win the corporate race trumped ethical considerations. This has nurtured a degree of cynicism about relying on corporate self-governance or even governments to put in place necessary safeguards.

    We should not be too cynical about the leadership of these two companies, which are trapped between their fiduciary responsibility to shareholders and a genuine desire to do the right thing. They remain people of good intent, as are all those raising concerns about the trajectory of A.I.

    This tension is perhaps best exemplified in a recent tweet by U.S. Senator Chris Murphy (D-CT) and the response by the A.I. community. In discussing ChatGPT, Murphy tweeted: “Something is coming. We aren’t ready.” And that’s when the A.I. researchers and ethicists piled on. They proceeded to criticize the Senator for not understanding the technology, indulging in futuristic hype, and focusing attention on the wrong issues. Murphy hit back at one critic: “I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she’s smarter and people like her are smarter than the rest of us.”

    I am saddened by disputes such as these. The concerns that Murphy raised are valid, and we need political leaders who are engaged in developing legal safeguards. His critic, however, is not wrong in questioning whether we are focusing attention on the right issues.

    To help us understand the different priorities of the various critics and, hopefully, move beyond these potentially damaging divisions, I want to propose a taxonomy for the plethora of ethical concerns raised about the development of A.I. I see three main baskets: 

    The first basket has to do with social justice, fairness, and human rights. For example, it is now well understood that algorithms can exacerbate racial, gender, and other forms of bias when they are trained on data that embodies those biases.

    The second basket is existential: Some in the A.I. development community are concerned that they are creating a technology that might threaten human existence. A 2022 poll of A.I. experts found that half expect A.I. to grow exponentially smarter than humans by 2059, and recent advances have prompted some to bring their estimates forward.

    The third basket relates to concerns about placing A.I. models in decision-making roles. Two technologies have provided focal points for this discussion: self-driving vehicles and lethal autonomous weapons systems. However, similar concerns arise as A.I. software modules become increasingly embedded in control systems in every facet of human life.

    Cutting across all these baskets is the potential misuse of A.I., such as spreading disinformation for political and economic gain, and the two-century-old concern about technological unemployment. While the history of economic progress has primarily involved machines replacing physical labor, A.I. applications can replace intellectual labor.

    I am sympathetic to all these concerns, though I have tended to be a friendly skeptic towards the more futuristic worries in the second basket. As with the above example of Senator Murphy’s tweet, disagreements among A.I. critics are often rooted in the fear that existential arguments will distract from addressing pressing issues about social justice and control.

    Moving forward, individuals will need to judge for themselves who they believe to be genuinely invested in addressing the ethical concerns of A.I. However, we cannot allow healthy skepticism and debate to devolve into a witch hunt among would-be allies and partners.

    Those within the A.I. community need to remember that what brings us together is more important than differences in emphasis that set us apart.

    This moment is far too important.

    Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is Emeritus Chair of the Technology and Ethics study group at the Yale University Interdisciplinary Center for Bioethics.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
