ReportWire

Tag: Artificial Intelligence

  • Google-parent stock drops on fears it could lose search market share to AI-powered rivals | CNN Business

    (CNN) —

    Shares of Google-parent Alphabet fell more than 3% in early trading Monday after a report sparked concerns that its core search engine could lose market share to AI-powered rivals, including Microsoft’s Bing.

    Last month, Google employees learned that Samsung was weighing making Bing the default search engine on its devices instead of Google’s search engine, prompting a “panic” inside the company, according to a report from the New York Times, citing internal messages and documents. (CNN has not reviewed the material.)

    In an effort to address the heightened competition, Google is said to be developing a new AI-powered search engine called Project “Magi,” according to the Times. The company, which reportedly has about 160 people working on the project, aims to change the way results appear in Google Search and will include an AI chat tool available to answer questions. The project is expected to be unveiled to the public next month, according to the report.

    In a statement sent to CNN, Google spokesperson Lara Levin said the company has been using AI for years to “improve the quality of our results” and “offer entirely new ways to search,” including with a feature rolled out last year that lets users search by combining images and words.

    “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information,” Levin said. “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

    Samsung did not immediately respond to a request for comment.

    Google’s search engine has dominated the market for two decades. But the viral success of ChatGPT, which can generate compelling written responses to user prompts, appeared to put Google on defense for the first time in years.

    In March, Google began opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT and promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    At an event in February, a Google executive also said the company will bring “the magic of generative AI” directly into its core search product and use artificial intelligence to pave the way for the “next frontier of our information products.”

    Microsoft, meanwhile, has invested in and partnered with OpenAI, the company behind ChatGPT, to deploy similar technology in Bing and other productivity tools. Other tech companies, including Meta, Baidu and IBM, as well as a slew of startups, are racing to develop and deploy AI-powered tools.

    But tech companies face risks in embracing this technology, which is known to make mistakes and “hallucinate” responses. That’s particularly true when it comes to search engines, a product that many use to find accurate and reliable information.

    Google was called out after a demo of Bard provided an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Microsoft’s Bing AI demo was also called out for several errors, including an apparent failure to differentiate between types of vacuums; the tool even made up information about certain products.

    In an interview with 60 Minutes that aired on Sunday, Google and Alphabet CEO Sundar Pichai stressed the need for companies to “be responsible in each step along the way” as they build and release AI tools.

    For Google, he said, that means allowing time for “user feedback” and making sure the company “can develop more robust safety layers before we build, before we deploy more capable models.”

    He also expressed his belief that these AI tools will ultimately have broad impacts on businesses, professions and society.

    “This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology,” he said. “And so, we are just in early days.”

  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business

    (CNN) —

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.

  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business

    New York (CNN) —

    Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.

  • AI industry and researchers sign statement warning of ‘extinction’ risk | CNN Business

    Washington (CNN) —

    Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety.

    The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.

    The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves.

    Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur.

    The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence. In response, a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after “suddenly” realizing “that these things are getting smarter than us.”

    Dan Hendrycks, director of the Center for AI Safety, said in a tweet Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.

    Hendrycks likened Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

    “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

  • ‘Serious concerns’: Top companies raise alarm over Europe’s proposed AI law | CNN Business

    Dortmund, Germany (CNN) —

    Dozens of Europe’s top business leaders have pushed back on the European Union’s proposed legislation on artificial intelligence, warning that it could hurt the bloc’s competitiveness and spur an exodus of investment.

    In an open letter sent to EU lawmakers Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive AI rules.

    Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, founder of British chipmaker ARM.

    “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.

    They argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.

    Since the craze over generative AI began this year, technologists have warned of the potential dark side of systems that allow people to use machines to write college essays, take academic tests and build websites. Last month, hundreds of top experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The EU proposal applies a broad brush to such software “regardless of [its] use cases,” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.

    “Such regulation could lead to highly innovative companies moving their activities abroad” and investors withdrawing their capital from European AI, the group wrote.

    “The result would be a critical productivity gap between the two sides of the Atlantic.”

    The executives are calling for policymakers to revise the terms of the bill, which was agreed upon by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.

    “In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach,” the group wrote.

    The business leaders called for a regulatory board of experts to oversee these principles and ensure they can be continuously adapted to changes in the fast-moving technology.

    The group also urged lawmakers to work with their US counterparts, noting that regulatory proposals had also been made in the United States. EU lawmakers should try to “create a legally binding level playing field,” the executives wrote.

    If such action isn’t taken and Europe is constrained by regulatory demands, it could hurt the region’s international standing, the group suggested.

    “Like the invention of the Internet or the breakthrough of silicon chips, generative AI is the kind of technology that will be decisive for the performance capacity and therefore the significance of different regions,” it said.

    Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have also laid out plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has used high-profile trips around the world in recent weeks to call for coordinated international regulation of AI.

    The EU rules are the world’s “first ever attempt to enact” legally binding rules that apply to different areas of AI, according to the European Parliament.

    Negotiators of the AI Act hope to reach an agreement before the end of the year, and once the final rules are adopted by the European Parliament and EU member states, the act will become law.

    As they stand now, the rules would ban AI systems deemed to be harmful, including real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China.

    The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated and provide safeguards against the generation of illegal content.

    Engaging in prohibited AI practices could lead to hefty fines: up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    But penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for startups.

    Not everyone has pushed back on the legislation so far. Earlier this month, Digital Europe, a trade association that counts SAP (SAP) and Ericsson (ERIC) among its members, called the rules “a text we can work with.”

    “However, there remain some areas which can be improved to ensure Europe becomes a competitive hub for AI innovation,” the group said in a statement.

    Dragos Tudorache, a Romanian member of the European Parliament who led the bill’s drafting, said he was convinced that those who signed the new letter “have not read the text but have rather reacted on the stimulus of a few.”

    “The only concrete suggestions made are in fact what the [draft] text now contains: an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else,” he said in a statement.

    “It is a pity that the aggressive lobby of a few is capturing other serious companies in the net, which unfortunately undermines the undeniable lead that Europe has taken.”

    Brando Benifei, an Italian member of the European Parliament who also led the drafting of the legislation, told CNN “we will listen to all concerns and stakeholders when dealing with AI regulation, but we have a firm commitment to deliver clear and enforceable rules.”

    “Our work could positively affect the global conversation and direction when dealing with artificial intelligence and its impact on fundamental rights, without hindering the necessary pursuit of innovation,” he said.

  • OpenAI’s Sam Altman launches Worldcoin crypto project | CNN Business

    Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.

    The project’s core offering is its World ID, which the company describes as a “digital passport” to prove that its holder is a real human, not an AI bot. To get a World ID, a customer signs up to do an in-person iris scan using Worldcoin’s ‘orb’, a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies the person is a real human, it creates a World ID.
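
    How such a system might enforce one ID per person can be sketched in a few lines. The snippet below is an illustrative toy, not Worldcoin’s actual protocol (which relies on more sophisticated cryptography, including zero-knowledge proofs): it reduces a biometric template to an opaque hash so the raw scan need not be stored, and rejects duplicate enrollments.

        import hashlib

        registered_ids: set[str] = set()

        def enroll(iris_template: bytes) -> str | None:
            """Derive an ID from a biometric template and enforce one ID per person."""
            world_id = hashlib.sha256(iris_template).hexdigest()
            if world_id in registered_ids:
                return None  # this template has already claimed an ID
            registered_ids.add(world_id)
            return world_id

        print(enroll(b"alice-iris-template"))  # first sign-up: prints a hex ID
        print(enroll(b"alice-iris-template"))  # repeat attempt: prints None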

    The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

    The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

    WLD’s price rose in early trading on Monday. On the world’s largest exchange, Binance, it hit a peak of $5.29 and at 1000 GMT was at $2.49 from a starting price of $0.15, having seen $25.1 million of trading volume, according to Binance’s website.
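
    Some quick arithmetic on those quoted prices, as a rough illustration using only the figures above:

        start, peak, later = 0.15, 5.29, 2.49
        print(f"peak multiple: {peak / start:.1f}x")               # ~35.3x the starting price
        print(f"1000 GMT multiple: {later / start:.1f}x")          # ~16.6x the starting price
        print(f"pullback from peak: {(peak - later) / peak:.0%}")  # ~53% below the peak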

    Blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

    The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.

    Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

    “People will be supercharged by AI, which will have massive economic implications,” he said.

    One example Altman likes is universal basic income, or UBI, a social benefits program usually run by governments where every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help to combat income inequality. Since only real people can have World IDs, it could be used to reduce fraud when deploying UBI.

    Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

    “We think that we need to start experimenting with things so we can figure out what to do,” he said.

  • The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims | CNN Business

    Washington (CNN) —

    An AI policy think tank wants the US government to investigate OpenAI and its wildly popular GPT artificial intelligence product, claiming that algorithmic bias, privacy concerns and the technology’s tendency to produce sometimes inaccurate results may violate federal consumer protection law.

    The Federal Trade Commission should prohibit OpenAI from releasing future versions of GPT, the Center for AI and Digital Policy (CAIDP) said Thursday in an agency complaint, and establish new regulations for the rapidly growing AI sector.

    The complaint seeks to bring the full force of the FTC’s broad consumer protection powers to bear against what CAIDP portrayed as a Wild West of runaway experimentation in which consumers pay for the unintended consequences of AI development. And it could prove to be an early test of the US government’s appetite for directly regulating AI, as tech-skeptic officials such as FTC Chair Lina Khan have warned of the dangers of unchecked data use for commercial purposes and of novel ways that tech companies may try to entrench monopolies.

    The FTC declined to comment. OpenAI didn’t immediately respond to a request for comment.

    “We believe that the FTC should look closely at OpenAI and GPT-4,” said Marc Rotenberg, CAIDP’s president and a longtime consumer protection advocate on technology issues.

    The complaint attacks a range of risks associated with generative artificial intelligence, which has captured the world’s attention after OpenAI’s ChatGPT — powered by an earlier version of the GPT product — was first released to the public late last year. Everyday internet users have used ChatGPT to write poetry, create software and get answers to questions, all within seconds and with surprising sophistication. Microsoft and Google have both begun to integrate that same type of AI into their search products, with Microsoft’s Bing running on the GPT technology itself.

    But the race for dominance in a seemingly new field has also produced unsettling or simply flat-out incorrect results, such as confident claims that Feb. 12, 2023 came before Dec. 16, 2022. In industry parlance, these types of mistakes are known as “AI hallucinations” — and they should be considered legally enforceable violations, CAIDP argued in its complaint.

    “Many of the problems associated with GPT-4 are often described as ‘misinformation,’ ‘hallucinations,’ or ‘fabrications.’ But for the purpose of the FTC, these outputs should best be understood as ‘deception,’” the complaint said, referring to the FTC’s broad authority to prosecute unfair or deceptive business acts or practices.

    The complaint acknowledges that OpenAI has been upfront about many of the limitations of its algorithms. For example, the white paper linked to GPT’s latest release, GPT-4, explains that the model may “produce content that is nonsensical or untruthful in relation to certain sources.” OpenAI also makes similar disclosures about the possibility that tools like GPT can lead to broad-based discrimination against minorities or other vulnerable groups.

    But in addition to arguing that those outcomes themselves may be unfair or deceptive, CAIDP also alleges that OpenAI has violated the FTC’s AI guidelines by trying to offload responsibility for those risks onto its clients who use the technology.

    The complaint alleges that OpenAI’s terms require news publishers, banks, hospitals and other institutions that deploy GPT to include a disclaimer about the limitations of artificial intelligence. That does not insulate OpenAI from liability, according to the complaint.

    Citing a March FTC advisory on chatbots, CAIDP wrote: “Recently [the] FTC stated that ‘Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.’”

    Artificial intelligence also stands to have vast implications for consumer privacy and cybersecurity, said CAIDP, issues that sit squarely within the FTC’s jurisdiction but that the agency has not studied in connection with GPT’s inner workings.

  • Adobe is adding an AI-powered image generator to Photoshop | CNN Business

    New York (CNN) —

    Photoshop is about to look a little different.

    Adobe on Tuesday said it’s incorporating an AI-powered image generator into Photoshop, with the goal of “dramatically accelerating” how users edit their photos.

    The tool, called Firefly, allows users to add or delete elements from images with just a text prompt, according to Adobe. It can also match the lighting and style of the existing images automatically, the company said.

    It’s currently available in a new Photoshop beta app. The company plans to roll the product out to all Photoshop customers by the end of the year.

    Adobe’s move comes after a recent crop of AI tools have launched that can generate compelling written work and images in response to user prompts, with the potential to change how people work, create and communicate with each other.

    “[N]ow that we are entering a new era of AI, the advent of generative models presents a new opportunity to take our imaging capabilities to another level,” Pam Clark, vice president of Photoshop product management and product strategy, wrote in a blog post. “Over the last few months, we have integrated this exciting new technology into Photoshop in a major step toward a more natural, intuitive, and fun way to work.”

    Firefly was launched in March at the Adobe Summit as a web-only beta. It was trained on Adobe’s own collection of stock images, as well as publicly available assets. Adobe has called the tool one of its most successful beta launches ever, with more than 70 million images created in the first month.

    By relying on its own image collection and media available for public use, Adobe may be able to avoid the backlash that some other AI image generator tools have faced for using vast troves of online content as training data.

    In January, Getty Images sued Stability AI, the company behind popular AI art tool Stable Diffusion, alleging the tech company committed copyright infringement. Getty said Stability AI copied and processed millions of its images without obtaining the proper licensing.

    Stability filed a motion earlier this month to dismiss the suit.

  • Europe is leading the race to regulate AI. Here’s what you need to know | CNN Business

    London (CNN) —

    The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

    It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

    “We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

    Lawmakers have agreed on a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

    “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

    Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation of the technology.

    At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola (KO) CEO James Quincey — said AI had the potential to destroy humanity five to 10 years from now.

    Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

    Here are the key takeaways.

    Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

    The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

    Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China, which assign people a “health score” based on their behavior.

    The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

    These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.

    The Act also outlines transparency requirements for AI systems.

    For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.

    Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

    AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.

    Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

    Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. GDPR sets fines of up to €10 million ($10.8 million), or up to 2% of a firm’s global turnover.
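
    The “whichever is higher” structure of both caps is simple to state precisely. Below is a minimal sketch using a hypothetical turnover figure; the real calculations have more nuance, and GDPR also has a higher tier for more serious violations.

        def ai_act_max_fine(turnover_eur: float) -> float:
            """AI Act cap for prohibited practices: EUR 40M or 7% of global turnover, whichever is higher."""
            return max(40_000_000, 0.07 * turnover_eur)

        def gdpr_max_fine(turnover_eur: float) -> float:
            """GDPR cap in the tier cited above: EUR 10M or 2% of global turnover, whichever is higher."""
            return max(10_000_000, 0.02 * turnover_eur)

        turnover = 100_000_000_000  # hypothetical EUR 100B annual global turnover
        print(ai_act_max_fine(turnover))  # 7000000000.0 — EUR 7B under the AI Act
        print(gdpr_max_fine(turnover))    # 2000000000.0 — EUR 2B under GDPR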

    Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

    At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

    The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

    “The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

    The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

    Microsoft (MSFT) — which, together with Google, is at the forefront of AI development globally — welcomed progress on the Act but said it looked forward to “further refinement.”

    “We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

    IBM (IBM), meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

    The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

    “The law will expand in scope as the technology develops,” Muldoon said.

  • Alibaba unveils its ChatGPT-style service | CNN Business

    Hong Kong (CNN) —

    Alibaba showed off its answer to the ChatGPT craze on Tuesday, demonstrating new software that it plans to eventually roll out across all its platforms.

    The Chinese tech giant unveiled Tongyi Qianwen, a large language model that will be embedded in its Tmall Genie smart speakers and workplace messaging platform DingTalk. It was trained on vast troves of data in order to generate compelling responses to users’ prompts.

    The technology will initially be integrated into those two products and eventually added to all Alibaba (BABA) applications, from e-commerce to mapping services, according to the company.

    Group CEO Daniel Zhang, who also oversees Alibaba’s cloud division, presented the new AI-powered service at a conference in Beijing, where the company demonstrated how it will allow users to transcribe meeting notes, craft business pitches and tell children’s stories.

    The company has opened up Tongyi Qianwen — which roughly translates as “seeking truth by asking a thousand questions” — to enterprise customers for testing before making it available to more users.

    “We are at a technological watershed moment, driven by generative AI and cloud computing,” Zhang said.

    Generative AI refers to the technology that underpins platforms like ChatGPT. The service has exploded in popularity in recent months, and Chinese tech companies have been racing to release their own versions, prompting some critics to predict that the trend will add fuel to an existing US-China rivalry in emerging technologies.

    Alibaba, which has a large cloud computing business, will also allow clients of that division to use the new technology to build their own customized large language models, the firm said in a statement.

    The debut comes after that of Baidu (BIDU), which launched its own ChatGPT-style service last month. During a similar presentation, Baidu showed how its chatbot, called ERNIE, could generate a company newsletter, come up with a corporate slogan and solve a math riddle.

    On Monday, SenseTime, one of China’s most prominent AI companies, launched a suite of new services, including a chatbot called SenseChat.

    China will be setting rules to govern the operation of such services. In draft guidelines issued Tuesday to solicit public feedback, the country’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities. In addition, they must provide information about the scale and type of data they use, their basic algorithms and other technical information.

    Alibaba’s shares in Hong Kong ticked up 1.6% following its demonstration.

    The company announced last month that it planned to split its business into six units. Most of those units, including its cloud services business that oversees AI projects, will be authorized to raise capital and pursue public listings.

    — Juliana Liu contributed to this report.

  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business

    (CNN) —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.

  • Bill Gates says AI risks are real but nothing we can’t handle | CNN Business

    (CNN) —

    Bill Gates sounds less worried than some other executives in Silicon Valley about the risks of artificial intelligence.

    In a blog post on Tuesday, the Microsoft co-founder outlined some of the biggest areas of concern with artificial intelligence, including the potential for spreading misinformation and displacing jobs. But he stressed that these risks are “manageable.”

    “This is not the first time a major innovation has introduced new threats that had to be controlled,” Gates wrote. “We’ve done it before.”

    Gates likened AI to previous “transformative” changes in society, such as the introduction of the car, which then required the public to adopt seat belts, speed limits, driver’s licenses and other safety standards. Innovation, he said, can create “a lot of turbulence” in the beginning, but society can “come out better off in the end.”

    Microsoft is one of the leaders in the race to develop and deploy a new crop of generative AI tools into popular products with the promise of helping people be more productive and creative. But a number of prominent figures in the industry have also publicly raised doomsday scenarios about the rapidly evolving technology.

    In late May, tech leaders including Microsoft’s CTO Kevin Scott joined dozens of AI researchers and some celebrities in signing a one-sentence letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Gates has previously said people should not “panic” about apocalyptic AI scenarios. In a blog post earlier this year, Gates wrote: “Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”

    In his blog post this week, Gates said he believes one of the biggest areas of concern for AI is the potential for deepfakes and AI-generated misinformation to undermine elections and democracy. Gates said he is “hopeful” that “AI can help identify deepfakes as well as create them.” He also said laws need to be clear about deepfake usage and labeling “so everyone understands when something they’re seeing or hearing is not genuine.”

    Gates also expressed concern over how AI could make it easier for hackers and even countries to launch cyberattacks on people and governments. Gates urged the development of related cybersecurity measures and for governments to consider creating a global body for AI similar to the International Atomic Energy Agency.

    Gates ticked through other concerns, too, including how AI could take away people’s jobs, perpetuate biases baked into the data on which it’s trained, and even disrupt the way kids learn to write.

    “It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s,” Gates wrote. “Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.”

    Gates said “it’s natural to feel unsettled” during a transition period, but added he is optimistic about the future and how “history shows that it’s possible to solve the challenges created by new technologies.”

    “It’s the most transformative innovation any of us will see in our lifetimes,” he wrote, “and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.”

  • OpenAI’s head of trust and safety is stepping down | CNN Business

    New York (CNN) —

    OpenAI’s head of trust and safety announced on Thursday plans to step down from the job.

    Dave Willner, who has led the artificial intelligence firm’s trust and safety team since February 2022, said in a LinkedIn post that he is “leaving OpenAI as an employee and transitioning into an advisory role” to spend more time with his family.

    Willner’s exit comes at a crucial moment for OpenAI. Since the viral success of the company’s AI chatbot ChatGPT late last year, OpenAI has faced growing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society.

    OpenAI CEO Sam Altman called for AI regulation during a Senate panel hearing in May. He told lawmakers that the potential for AI to be used to manipulate voters and target disinformation is among “my areas of greatest concern,” especially because “we’re going to face an election next year and these models are getting better.”

    In his Thursday post, Willner — whose resume includes stops at Facebook and Airbnb — noted that “OpenAI is going through a high-intensity phase in its development” and that his role had “grown dramatically in its scope and scale since I first joined.”

    A statement from OpenAI about Willner’s exit said that “his work has been foundational in operationalizing our commitment to the safe and responsible use of our technology, and has paved the way for future progress in this field.” OpenAI’s Chief Technology Officer Mira Murati will become the trust and safety team’s interim manager and Willner will advise the team through the end of this year, according to the company.

    “We are seeking a technically-skilled lead to advance our mission, focusing on the design, development, and implementation of systems that ensure the safe use and scalable growth of our technology,” the company said in the statement.

    Willner’s exit comes as OpenAI continues to work with regulators in the United States and elsewhere to develop guardrails around fast-advancing AI technology. OpenAI was among seven leading AI companies that on Friday agreed to voluntary commitments announced by the White House meant to make AI systems and products safer and more trustworthy. As part of the pledge, the companies agreed to put new AI systems through outside testing before they are publicly released, and to clearly label AI-generated content, the White House announced.

  • 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business

    Hong Kong (CNN) —

    As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

    They predicted in a report Sunday that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

    That’s partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the “little effect” seen on physically demanding or outdoor occupations, such as construction and repair work.

    In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates.

    If generative artificial intelligence “delivers on its promised capabilities, the labor market could face significant disruption,” the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

    ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

    This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

    While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity — and boost global GDP by 7% annually over a 10-year period, according to Goldman Sachs.

    “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” the economists added.

    “Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.”

    Of US workers expected to be affected, for instance, 25% to 50% of their workload “can be replaced,” the researchers added.

    “The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.”
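
    As a back-of-envelope check on how the figures quoted above relate to one another (an illustration only, not Goldman Sachs’ methodology):

        exposed_share = 2 / 3                        # ~two-thirds of jobs exposed to some AI automation
        replace_lo, replace_hi = 0.25, 0.50          # replaceable workload for affected workers
        print(f"{exposed_share * replace_lo:.0%}")   # ~17% of all work
        print(f"{exposed_share * replace_hi:.0%}")   # ~33% of all work
        # Roughly the same magnitude as the 18% and up-to-a-quarter figures cited above.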

    — CNN’s Nicole Goodkind contributed to this report.

  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business

    (CNN) —

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously only available to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into conversation with friends when it’s mentioned with “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it to personalize it more.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.
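
    For a sense of what such an integration involves, here is a minimal sketch of a third-party app calling ChatGPT through OpenAI’s API, written against the openai Python package as documented in 2023; the API key, persona prompt and function name are placeholders, not Snap’s actual code:

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder credential

        def my_ai_reply(user_message: str) -> str:
            """Send one user message to ChatGPT with a fixed assistant persona."""
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a friendly in-app assistant."},
                    {"role": "user", "content": user_message},
                ],
            )
            return response.choices[0].message.content

        print(my_ai_reply("Write a haiku about snow."))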

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, that about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and included the new feature in its in-app parental tools.

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business

    New York CNN —

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including the date, time, location and device used to make the image, and applies a digital signature that can later verify whether the image is organic or has been manipulated or generated by AI.
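
    Truepic’s pipeline is proprietary, but the underlying technique is standard public-key signing: a key held by the capture device signs the image bytes together with the capture metadata, so any later change to either breaks verification. A minimal sketch using the cryptography package’s Ed25519 primitives; the key handling, metadata fields and sample values are all illustrative.

        # Illustrative point-of-capture signing (the general technique, not
        # Truepic's implementation). A device-held private key signs the image
        # bytes plus capture metadata; the public key later detects tampering.
        import json
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        device_key = Ed25519PrivateKey.generate()  # in practice, provisioned in secure hardware

        def _payload(image_bytes: bytes, metadata: dict) -> bytes:
            return image_bytes + json.dumps(metadata, sort_keys=True).encode()

        def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
            return device_key.sign(_payload(image_bytes, metadata))

        def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
            try:
                device_key.public_key().verify(signature, _payload(image_bytes, metadata))
                return True
            except InvalidSignature:
                return False  # image or metadata changed after capture

        meta = {"time": "2023-05-22T09:30:00Z", "gps": [38.87, -77.05], "device": "example-cam"}
        sig = sign_capture(b"raw image bytes", meta)
        assert verify_capture(b"raw image bytes", meta, sig)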

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

    “The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

    Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and receive an instant breakdown, with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data.

    Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”
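
    The article does not document either vendor’s API, so the sketch below only illustrates the upload-and-score workflow described above; the endpoint, field names and response format are invented for illustration and should be replaced with the vendor’s actual documentation.

        # Hypothetical client for an AI-image detection service of this kind.
        # The URL, auth scheme and response field are invented for illustration.
        import requests

        def score_image(path: str, api_key: str) -> float:
            """Upload an image, return the service's 0-100 'likely AI-generated' score."""
            with open(path, "rb") as f:
                resp = requests.post(
                    "https://api.example-detector.com/v1/scan",  # hypothetical endpoint
                    headers={"Authorization": f"Bearer {api_key}"},
                    files={"image": f},
                )
            resp.raise_for_status()
            return resp.json()["ai_likelihood"]  # hypothetical response field

        print(score_image("suspect_photo.jpg", "YOUR_KEY"))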

    Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 for every 1,000 images as well as “annual contract deals” that offer a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”
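
    At Hive Moderation’s quoted rate, the cost arithmetic is straightforward (the volume below is hypothetical, and any annual-contract discount would lower it):

        # Cost at the quoted $1.50 per 1,000 images, for a hypothetical volume.
        images = 250_000
        print(f"${images / 1000 * 1.50:,.2f}")  # -> $375.00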

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated at the moment of creation. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
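
    Real content credentials under the C2PA standard are cryptographically signed binary manifests embedded in the file, not plain text. As a simplified stand-in for the “who, what, and how” record the standard describes, the sketch below attaches provenance-style metadata to a PNG with Pillow; the field names and values are illustrative.

        # Simplified stand-in for provenance metadata. Real C2PA content
        # credentials are signed binary manifests, not a plain PNG text chunk;
        # this only mirrors the kind of information the standard records.
        import json
        from PIL import Image, PngImagePlugin

        credentials = {
            "generator": "ExampleImageTool 1.0",   # illustrative fields
            "created": "2023-06-12T09:30:00Z",
            "actions": ["created with an AI tool"],
        }

        img = Image.open("picture.png")
        info = PngImagePlugin.PngInfo()
        info.add_text("content_credentials", json.dumps(credentials))
        img.save("picture_with_credentials.png", pnginfo=info)

        # Any viewer can read the chunk back:
        print(Image.open("picture_with_credentials.png").text["content_credentials"])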

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • Forget about the AI apocalypse. The real dangers are already here | CNN Business

    CNN —

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings of Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    (Photo caption: Gary Marcus, professor emeritus at New York University, right, listens as OpenAI CEO Sam Altman speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, on May 16, 2023.)

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business

    New York CNN —

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offer the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some who had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled. That is in addition to its civic integrity policy, which prohibits misleading information about electoral processes, and its general misinformation policy, which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • Welcome to the era of viral AI generated ‘news’ images | CNN Business

    New York CNN —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    (Photo caption: Midjourney has emerged as a popular tool for users to create AI-generated images.)

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI-generated technologies grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business

    CNN —

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently created some of her wedding website through a new Writer’s Block Assistant tool on online wedding planning service Joy, which was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.
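
    Joy has not published how Writer’s Block Assistant works, but a tool like this plausibly amounts to assembling the user’s stories and chosen tone into a prompt before calling the ChatGPT API. A hedged sketch of that assembly step; the function and field names are invented for illustration.

        # Hedged sketch of how a vow-writing assistant might build a ChatGPT
        # prompt from a user's details. Joy's actual implementation is not
        # public; all names here are invented.
        def build_vow_prompt(partner: str, stories: list[str], tone: str,
                             length_words: int = 150) -> str:
            story_lines = "\n".join(f"- {s}" for s in stories)
            return (
                f"Write wedding vows to {partner} in a {tone} tone, "
                f"around {length_words} words, weaving in these personal stories:\n"
                f"{story_lines}\n"
                "End with two or three concrete promises."
            )

        prompt = build_vow_prompt(
            partner="Alex",
            stories=["we met at a trivia night", "our first road trip broke down in Ohio"],
            tone="warm and lightly funny",
        )
        # The prompt would then be sent via a chat API call like the sketch
        # shown earlier in this page.
        print(prompt)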

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

    Le said her fiancé, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”

    (Photo caption: Michael Grinn and Kate Gardiner used viral AI tool ChatGPT to write the Ketubah, a Jewish wedding contract, for their June wedding.)

    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

    Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”
