ReportWire

Tag: computer science and information technology

  • Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business

Washington (CNN) —

    Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

    The list of more than 8,000 authors includes some of the world’s most celebrated writers, including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, James Patterson, Jodi Picoult and Philip Pullman, among others.

    In an open letter they signed, posted by the Authors Guild Tuesday, the writers accused AI companies of unfairly profiting from their work.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter said. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    Tuesday’s letter was addressed to the CEOs of ChatGPT-maker OpenAI, Facebook-parent Meta, Google, Stability AI, IBM and Microsoft. Most of the companies didn’t immediately respond to a request for comment. Meta, Microsoft and Stability AI declined to comment.

    Much of the tech industry is now working to develop AI tools that can generate compelling images and written work in response to user prompts. These tools are built on large language models, which are trained on vast troves of information online. But recently, there has been growing pressure on tech companies over alleged intellectual property violations with this training process.

    This month, comedian Sarah Silverman and two authors filed a copyright lawsuit against OpenAI and Meta, while a proposed class-action suit accused Google of “stealing everything ever created and shared on the internet by hundreds of millions of Americans,” including copyrighted content. Google has called the lawsuit “baseless,” saying it has been upfront for years that it uses public data to train its algorithms. OpenAI did not previously respond to a request for comment on the suit.

    In addition to demanding compensation “for the past and ongoing use of our works in your generative AI programs,” the thousands of authors who signed the letter this week called on AI companies to seek permission before using the copyrighted material. They also urged the companies to pay writers when their work is featured in the results of generative AI, “whether or not the outputs are infringing under current law.”

The letter also cites this year’s Supreme Court holding in Warhol v. Goldsmith, which found that the late artist Andy Warhol infringed on a photographer’s copyright when he created a series of silk screens based on a photograph of the late singer Prince. The court ruled that Warhol did not sufficiently “transform” the underlying photograph so as to avoid copyright infringement.

    “The high commerciality of your use argues against fair use,” the authors wrote to the AI companies.

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needs to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.

    – CNN’s Catherine Thorbecke contributed to this report.


  • Who says romance is dead? Couples are using ChatGPT to write their wedding vows | CNN Business

(CNN) —

    When Elyse Nguyen was nearing her wedding date in February and still hadn’t started writing her vows, a friend suggested she try a new source of inspiration: ChatGPT.

    The AI chatbot, which was released publicly in late November, can generate compelling written responses to user prompts and offers the promise of helping people get over writer’s block, whether it be for an essay, an email, or an emotional speech.

    “At first we inputted the prompt as a joke and the output was pretty cheesy with personal references to me and my husband,” said Nguyen, a financial analyst at Qualcomm. “But the essence of what vows should incorporate was there – our promises to each other and structure.”

    She made edits, changed the prompts to add humor and details about her partner’s interests, and added some personal touches. Nguyen ultimately ended up using a good portion of ChatGPT’s suggestions and said her husband was on board with it.

    “It helped alleviate some stress because I had no prior experience with wedding vows nor did I know what should be included,” Nguyen said. “Plus, ChatGPT is a genius with alliteration, analogies and metaphors. Having something like, ‘I promise to be your partner in life with the enthusiasm of a golfer’s first hole in one’ in my back pocket was comical.”

    Nearly five months after ChatGPT went viral and ignited a new AI arms race in Silicon Valley, more couples are looking to it for help with wedding planning, including writing vows and speeches, drafting religious marriage contracts, and setting up websites for the special day.

    Ellen Le recently created some of her wedding website through a new Writer’s Block Assistant tool on online wedding planning service Joy, which was one of the first third-party platforms to incorporate ChatGPT’s technology. (Last month, OpenAI, the company behind ChatGPT, opened up access to the chatbot, paving the way for it to be integrated into numerous apps and services.)

    Le, a product manager at a startup, said she used the feature to draft an “about us” page and write directions from San Francisco to her Napa Valley wedding. The Writer’s Block Assistant tool helps users write vows, best man and maid of honor speeches, thank you cards and wedding website “about us” pages. It also lets users highlight personal stories and select the style or tone before pulling it into a speech.

    “I started drafting my vows and when I typed in how we met, it produced this very delightful story,” Le said. “Some of it was inaccurate, making up certain details, but it gave me a helping hand and something to react to, rather than just spending 10 hours thinking about how to get started.”

Le said her fiancé, who often uses ChatGPT for work, is considering using AI to help with his vows too.

    Joy co-founder and CEO Vishal Joshi, who studied artificial intelligence and electrical engineering at NIT Rourkela in India, said the company launched Writer’s Block Assistant in March after it conducted an internal study that found most of its users were somewhat overwhelmed with getting started on writing vows and speeches, and wished they had help. He said the company has already seen thousands of submissions since launching the tool.

    “Almost two decades ago, AI enthusiasts like myself and my research peers had only dreamt of mass market adoption we are seeing today, and we know this is just the true beginning,” Joshi said. “Just like smartphones, if applied well, the positive impact of AI on our lives can far outshine the negatives. We’re working on responsibly innovating using AI to advance the wedding and event industry as a whole.”

    ChatGPT has sparked concerns in recent months about its potential to perpetuate biases, spread misinformation and upend certain livelihoods. Now, as it finds its way into marriage ceremonies, it could raise more nuanced questions about whether people risk losing something by injecting technology into what is supposed to be a deeply personal and, for many, spiritual moment in life.

    Michael Grinn, an anesthesiologist with practices in Miami and New York, was experimenting with ChatGPT when he asked it to produce a traditional Ketubah – a Jewish marriage contract – for his upcoming June wedding.

Grinn and his fiancée Kate Gardiner, the founder and CEO of a public relations firm, then requested it make some language changes around gender equality and intimacy. “At the end, we both looked at each other and were like, we can’t disagree with the result,” he said.

    Editing took about an hour, but it still shaved hours off what otherwise could have been a lengthy process, he said. Still, Grinn plans to write his own vows. “I want them to be less refined and something no one else helped me with.”

    He does, however, plan to use ChatGPT for inspiration for officiating his best man’s wedding. “It mostly comes down to time because I’ve been working so much,” he said, “and this is so efficient.”


  • UK blocks Microsoft takeover of Activision Blizzard | CNN Business

London (CNN) —

    The UK antitrust regulator has blocked Microsoft’s $69 billion purchase of Activision Blizzard, thwarting one of the tech industry’s biggest deals over concerns it will stifle competition in cloud gaming.

    The Competition and Markets Authority said in a statement Wednesday that it was worried the deal would lead to “reduced innovation and less choice for UK gamers over the years to come.”

The acquisition would make Microsoft (MSFT) “even stronger” in cloud gaming, a market in which it already holds a 60%-70% share globally, the regulator added.

    Activision Blizzard is one of the world’s biggest video game developers, producing games such as “Call of Duty,” “World of Warcraft,” “Diablo” and “Overwatch.” Microsoft, which sells the Xbox gaming console, offers a video game subscription service called Xbox Game Pass, as well as a cloud-based video game streaming service.

The deal to combine the businesses has been met with growing opposition by antitrust regulators worldwide. In December, the US Federal Trade Commission sued to block the takeover over similar competition concerns. A hearing is scheduled for August. The European Union is also evaluating the transaction.

    Microsoft could seek to make Activision’s games exclusive to its own platforms and then increase the cost of a Game Pass subscription, the Competition and Markets Authority said.

    “The cloud allows UK gamers to avoid buying expensive gaming consoles and PCs and gives them much more flexibility and choice as to how they play. Allowing Microsoft to take such a strong position in the cloud gaming market just as it begins to grow rapidly would risk undermining the innovation that is crucial to the development of these opportunities,” it added.

    “The evidence available… indicates that, absent the merger, Activision would start providing games via cloud platforms in the foreseeable future.”

    Both companies plan to appeal the decision. “Alongside Microsoft, we can and will contest this decision, and we’ve already begun the work to appeal to the UK Competition Appeals Tribunal,” Activision Blizzard CEO Bobby Kotick said in a statement.

    Microsoft President Brad Smith added: “This decision appears to reflect a flawed understanding of the market and the way the relevant cloud technology actually works.”

    The Competition and Markets Authority, which launched an in-depth review of the blockbuster deal in September, said Microsoft’s proposed remedies to its concerns had “significant shortcomings.”

    “Their proposals… would have replaced competition with ineffective regulation in a new and dynamic market,” explained Martin Coleman, chair of the independent panel of experts conducting the investigation.

    “Microsoft already enjoys a powerful position and head start over other competitors in cloud gaming, and this deal would strengthen that advantage, giving it the ability to undermine new and innovative competitors,” Coleman continued. “Cloud gaming needs a free, competitive market to drive innovation and choice.”

    The UK cloud gaming market is expected to be worth up to £1 billion ($1.2 billion) by 2026, around 9% of the global market, according to the Competition and Markets Authority.

— Josh du Lac and Brian Fung contributed reporting.


  • LinkedIn to cut 716 jobs and shut its China app amid ‘challenging’ economic climate | CNN Business

Hong Kong (CNN) —

    LinkedIn, the world’s largest social media platform for professionals, is cutting 716 positions and shutting down its jobs app in mainland China, the California-based company announced.

    The decision was made amid shifts in customer behavior and slower revenue growth, CEO Ryan Roslansky said Monday in a letter to employees.

    “As we guide LinkedIn through this rapidly changing landscape, we are making changes to our Global Business Organization and our China strategy that will result in a reduction of roles for 716 employees,” he said.

LinkedIn, owned by Microsoft (MSFT), has joined a slew of US tech companies that have made significant job cuts this year. Meta announced in March an additional 10,000 layoffs on top of mass layoffs announced in 2022. Amazon also said during the same month it would eliminate 9,000 positions, on the heels of the 18,000 roles the company announced it was cutting in January.

    “As we plan for [the fiscal year of 2024], we’re expecting the macro environment to remain challenging,” Roslansky said. “We will continue to manage our expenses as we invest in strategic growth areas.”

    As part of the move, LinkedIn will phase out InCareer, its app for mainland China, by August 9.

    Roslansky cited “fierce competition” and “a challenging macroeconomic climate” as the reason for the shutdown.

    LinkedIn will retain some presence in China, including providing services for companies operating there to hire and train employees outside the country, according to a company spokesperson.

LinkedIn is the last major Western social media app still operating in mainland China. Twitter, Facebook and YouTube have been banned in the country for more than a decade. Google left in early 2010.

    LinkedIn first entered China in 2014 by launching a localized version of its main app. But its moves to censor posts in the country, in accordance with Chinese laws, came under criticism.

    In March 2021, LinkedIn had to suspend signups in China to ensure it was “in compliance with local law.” A few months later, it replaced that app with InCareer, which was focused solely on job postings, with no social networking features such as sharing or commenting.

    The US social media site has faced tough competition in China. By 2021, it had more than 50 million members in the country, making it the company’s third biggest market after the United States and India. But it lagged behind local competitors such as Maimai.

    Maimai was launched in 2013 and dubbed the Chinese version of LinkedIn. In a few years it surpassed LinkedIn to become the most popular professional networking platform in the country, with 110 million verified members. A major feature that powered its success was that it allowed users to post anonymously in a chat forum.

    The operating environment in China has also become more challenging. Since Xi Jinping took power in 2012, he has tightened control over what can be said online and launched a series of crackdowns on the internet.

    “While we’ve found success in helping Chinese members find jobs and economic opportunity, we have not found that same level of success in the more social aspects of sharing and staying informed,” LinkedIn wrote in an October 2021 blog post. “We’re also facing a significantly more challenging operating environment and greater compliance requirements in China.”


  • Adobe is adding an AI-powered image generator to Photoshop | CNN Business

New York (CNN) —

    Photoshop is about to look a little different.

    Adobe on Tuesday said it’s incorporating an AI-powered image generator into Photoshop, with the goal of “dramatically accelerating” how users edit their photos.

    The tool, called Firefly, allows users to add or delete elements from images with just a text prompt, according to Adobe. It can also match the lighting and style of the existing images automatically, the company said.

    It’s currently available in a new Photoshop beta app. The company plans to roll the product out to all Photoshop customers by the end of the year.

    Adobe’s move comes after a recent crop of AI tools have launched that can generate compelling written work and images in response to user prompts, with the potential to change how people work, create and communicate with each other.

    “[N]ow that we are entering a new era of AI, the advent of generative models presents a new opportunity to take our imaging capabilities to another level,” Pam Clark, vice president of Photoshop product management and product strategy, wrote in a blog post. “Over the last few months, we have integrated this exciting new technology into Photoshop in a major step toward a more natural, intuitive, and fun way to work.”

    Firefly was launched in March at the Adobe Summit as a web-only beta. It was trained on Adobe’s own collection of stock images, as well as publicly available assets. Adobe has called the tool one of its most successful beta launches ever, with more than 70 million images created in the first month.

    By relying on its own image collection and media available for public use, Adobe may be able to avoid the backlash that some other AI image generator tools have faced for using a vast trove of online content as training.

    In January, Getty Images sued Stability AI, the company behind popular AI art tool Stable Diffusion, alleging the tech company committed copyright infringement. Getty said Stability AI copied and processed millions of its images without obtaining the proper licensing.

    Stability filed a motion earlier this month to dismiss the suit.


  • Here’s how much each state will get in the $42.5 billion broadband infrastructure plan | CNN Business

Washington (CNN) —

    The Biden administration on Monday outlined how states across the country will be receiving billions of dollars in federal funding for high-speed internet access, highlighting the US government’s push to bring connectivity to more Americans and to close the digital divide.

    More than $42 billion from the 2021 bipartisan infrastructure law will be distributed to US states and territories for building internet access, the White House said — with Texas eligible for the largest award of more than $3.3 billion, followed by California, which could receive more than $1.8 billion.

“We’re talking today about a major investment that we’re making in affordable, high-speed internet, all across the country,” Biden said in a speech Monday, describing internet access as a critical economic resource that allows children to do their homework, workers to find jobs and patients to access health care.

    “I’ve gotten letters and emails from across the country from people who are thrilled that after so many years of waiting, they are finally going to get high-speed internet,” Biden said, citing one message he received from an Iowa woman who described the development as “the best thing that’s happened in rural America since the Rural Electrification Act,” referring to the push under President Franklin Delano Roosevelt to bring electricity to farms and ranches nationwide.

    All US states and territories have been awarded at least some funding, starting with the US Virgin Islands, which is eligible for $27 million under the initiative known as the Broadband Equity, Access, and Deployment (BEAD) program.

    The BEAD program marks one of the largest-ever infusions of federal money for bringing disconnected households and businesses online. And it reflects months of work by the US government to design new and updated broadband maps showing which areas of the country remain unserved or under-served.

    Finalized by the Federal Communications Commission last month, the new maps show that 7% of US households and businesses, representing 8.5 million physical locations and tens of millions of individual Americans, do not have broadband internet access, which is defined as internet download speeds of at least 25 megabits per second. The new maps provide information about internet connectivity at a granular level, whereas previous maps assessed connectivity only at a census-block level. The older maps also considered a census block to be served if just one household in that block had broadband access, even if many of its surrounding neighbors did not — leaving many Americans to report that they had no high-speed internet even when the official maps claimed that they did.
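The difference between the old census-block rule and the new location-level maps can be sketched in a few lines. This is an illustrative toy model using the 25 Mbps download threshold cited above; the data, function names, and scoring rules here are hypothetical assumptions, not the FCC's actual methodology.

```python
# Toy model (hypothetical, for illustration only): contrast the old
# census-block rule -- a block counts as "served" if ANY location in it
# has broadband -- with the new rule that assesses each location on its own.

BROADBAND_MBPS = 25  # minimum download speed that counts as broadband


def location_served(download_mbps: float) -> bool:
    """New-style check: each physical location is assessed individually."""
    return download_mbps >= BROADBAND_MBPS


def block_served_old_rule(block_speeds: list) -> bool:
    """Old-style check: one connected household 'serves' the whole block."""
    return any(location_served(s) for s in block_speeds)


# A hypothetical census block: one fiber household among five slow or
# unconnected neighbors (speeds in Mbps).
block = [300.0, 3.0, 1.5, 10.0, 0.0, 5.0]

old_unserved = 0 if block_served_old_rule(block) else len(block)
new_unserved = sum(1 for s in block if not location_served(s))

print(old_unserved)  # old block-level maps report 0 unserved locations here
print(new_unserved)  # location-level counting reports 5
```

Under the old rule the single fast household hides five unserved neighbors, which matches the article's description of residents reporting no high-speed internet even where the official maps claimed coverage.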

    The updated maps allowed the US government to calculate which states had the greatest need for broadband funding and to distribute the infrastructure law’s resources accordingly. States and territories may begin applying for the funds as soon as July 1, the White House said. After the applications are approved by the Commerce Department, state officials will gain access to at least 20% of their eligible awards.

    Under the infrastructure law, US states had been guaranteed at least $100 million in BEAD funding, while US territories were promised at least $25 million.

    Nineteen states received more than $1 billion in the final allocation, the White House said, adding that the 10 states receiving the most funding were Alabama, California, Georgia, Louisiana, Michigan, Missouri, North Carolina, Texas, Virginia and Washington.

The BEAD program also complements another $23 billion across five separate broadband access programs included in the legislation, such as a program specifically aimed at Tribal connectivity and another for low-income households. It follows a $25 billion investment under the American Rescue Plan, the 2021 Covid-19 stimulus package.

    Monday’s announcement marked the launch of a three-week nationwide tour by President Joe Biden and other White House officials to tout the administration’s economic plan.

    Here’s how much each state received:

    • Alabama: $1,401,221,901.77
    • Alaska: $1,017,139,672.42
    • Arizona: $993,112,231.37
    • Arkansas: $1,024,303,993.86
    • California: $1,864,136,508.93
    • Colorado: $826,522,650.41
    • Connecticut: $144,180,792.71
    • Delaware: $107,748,384.66
    • District of Columbia: $100,694,786.93
    • Florida: $1,169,947,392.70
    • Georgia: $1,307,214,371.30
    • Hawaii: $149,484,493.57
    • Idaho: $583,256,249.88
    • Illinois: $1,040,420,751.50
    • Indiana: $868,109,929.79
    • Iowa: $415,331,313.00
    • Kansas: $451,725,998.15
    • Kentucky: $1,086,172,536.86
    • Louisiana: $1,355,554,552.94
    • Maine: $271,977,723.07
    • Maryland: $267,738,400.71
    • Massachusetts: $147,422,464.39
    • Michigan: $1,559,362,479.29
    • Minnesota: $651,839,368.20
    • Mississippi: $1,203,561,563.05
    • Missouri: $1,736,302,708.39
    • Montana: $628,973,798.59
    • Nebraska: $405,281,070.41
    • Nevada: $416,666,229.74
    • New Hampshire: $196,560,278.97
    • New Jersey: $263,689,548.65
    • New Mexico: $675,372,311.86
    • New York: $664,618,251.49
    • North Carolina: $1,532,999,481.15
    • North Dakota: $130,162,815.12
    • Ohio: $793,688,107.63
    • Oklahoma: $797,435,691.25
    • Oregon: $688,914,932.17
    • Pennsylvania: $1,161,778,272.41
    • Rhode Island: $108,718,820.75
    • South Carolina: $551,535,983.05
    • South Dakota: $207,227,523.92
    • Tennessee: $813,319,680.22
    • Texas: $3,312,616,455.45
    • Utah: $317,399,741.54
    • Vermont: $228,913,019.08
    • Virginia: $1,481,489,572.87
    • Washington: $1,227,742,066.30
    • West Virginia: $1,210,800,969.85
    • Wisconsin: $1,055,823,573.71
    • Wyoming: $347,877,921.27
    • American Samoa: $37,564,827.53
    • Guam: $156,831,733.59
    • Northern Mariana Islands: $80,796,709.02
    • Puerto Rico: $334,614,151.70
    • U.S. Virgin Islands: $27,103,240.86


  • OpenAI’s head of trust and safety is stepping down | CNN Business

New York (CNN) —

    OpenAI’s head of trust and safety announced on Thursday plans to step down from the job.

    Dave Willner, who has led the artificial intelligence firm’s trust and safety team since February 2022, said in a LinkedIn post that he is “leaving OpenAI as an employee and transitioning into an advisory role” to spend more time with his family.

    Willner’s exit comes at a crucial moment for OpenAI. Since the viral success of the company’s AI chatbot ChatGPT late last year, OpenAI has faced growing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society.

    OpenAI CEO Sam Altman called for AI regulation during a Senate panel hearing in March. He told lawmakers that the potential for AI to be used to manipulate voters and target disinformation are among “my areas of greatest concern,” especially because “we’re going to face an election next year and these models are getting better.”

    In his Thursday post, Willner — whose resume includes stops at Facebook and Airbnb — noted that “OpenAI is going through a high-intensity phase in its development” and that his role had “grown dramatically in its scope and scale since I first joined.”

    A statement from OpenAI about Willner’s exit said that “his work has been foundational in operationalizing our commitment to the safe and responsible use of our technology, and has paved the way for future progress in this field.” OpenAI’s Chief Technology Officer Mira Murati will become the trust and safety team’s interim manager and Willner will advise the team through the end of this year, according to the company.

    “We are seeking a technically-skilled lead to advance our mission, focusing on the design, development, and implementation of systems that ensure the safe use and scalable growth of our technology,” the company said in the statement.

Willner’s exit comes as OpenAI continues to work with regulators in the United States and elsewhere to develop guardrails around fast-advancing AI technology. OpenAI was among seven leading AI companies that on Friday agreed to voluntary commitments, brokered by the White House, meant to make AI systems and products safer and more trustworthy. As part of the pledge, the companies agreed to put new AI systems through outside testing before they are publicly released, and to clearly label AI-generated content, the White House announced.


  • Amazon is ‘investing heavily’ in the technology behind ChatGPT | CNN Business

(CNN) —

    Amazon wants investors to know it won’t be left behind in the latest Big Tech arms race over artificial intelligence.

In a letter to shareholders Thursday, Amazon (AMZN) CEO Andy Jassy said the company is “investing heavily” in large language models (LLMs) and generative AI, the same technology that underpins ChatGPT and other similar AI chatbots.

    “We have been working on our own LLMs for a while now, believe it will transform and improve virtually every customer experience, and will continue to invest substantially in these models across all of our consumer, seller, brand, and creator experiences,” Jassy wrote in his letter to shareholders.

The remarks, which were part of Jassy’s second annual letter to shareholders since taking over as CEO, hint at the pressure that many tech companies feel to explain how they can tap into the rapidly evolving marketplace for AI products. Since ChatGPT was released to the public in late November, Google (GOOG), Facebook (FB) and Microsoft (MSFT) have all talked up their growing focus on generative AI technology, which can create compelling essays, stories and visuals in response to user prompts.

    Amazon’s goal, according to Jassy, is to offer less costly machine learning chips so that “small and large companies can afford to train and run their LLMs in production.” Large language models are trained on vast troves of data in order to generate responses to user prompts.

    “Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, most companies don’t want to go through that,” Jassy said in an interview with CNBC on Thursday morning.

    “What they want to do is they want to work off of a foundational model that’s big and great already, and then have the ability to customize it for their own purposes,” Jassy told CNBC.

    With that in mind, Amazon on Thursday unveiled a new service called Bedrock. It essentially makes foundation models (large models that are pre-trained on vast amounts of data) from AI21 Labs, Anthropic, Stability AI and Amazon accessible to clients via an API, Amazon said in a blog post.

    Jassy told CNBC he thinks Bedrock “will change the game for people.”
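The pattern Bedrock embodies, as the article describes it, is a hosted foundation model exposed through an API: the client picks a pre-trained model, sends a prompt plus generation parameters, and never touches the underlying training infrastructure. The sketch below illustrates only the shape of such a request; the model ID and field names are illustrative assumptions, not Amazon Bedrock's actual request schema.

```python
import json

# Hypothetical sketch of a foundation-model invocation request.
# The model ID and JSON field names below are made-up placeholders,
# not Amazon Bedrock's real API schema.


def build_invoke_request(model_id: str, prompt: str,
                         max_tokens: int = 256,
                         temperature: float = 0.7) -> str:
    """Serialize a model-invocation request body as JSON."""
    body = {
        "modelId": model_id,          # which pre-trained model to run
        "input": {
            "prompt": prompt,         # the user's text prompt
            "maxTokens": max_tokens,  # cap on generated length
            "temperature": temperature,  # sampling randomness
        },
    }
    return json.dumps(body)


request = build_invoke_request(
    model_id="example-provider.example-model-v1",  # placeholder, not a real ID
    prompt="Summarize our Q1 sales figures in two sentences.",
)
print(request)
```

The design point Jassy describes is exactly this separation: the customer supplies only the small, cheap part of the request (prompt and parameters), while the provider absorbs the billions of dollars of training cost behind the `modelId`.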

    In his letter to shareholders, Jassy also touted AWS’s CodeWhisperer, another AI-powered tool which he said “revolutionizes developer productivity by generating code suggestions in real time.”

    “I could write an entire letter on LLMs and Generative AI as I think they will be that transformative, but I’ll leave that for a future letter,” Jassy wrote. “Let’s just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon.”

    In the letter, Jassy also reflected on leading Amazon through “one of the harder macroeconomic years in recent memory,” as the e-commerce giant cut some 27,000 jobs as part of a major bid to rein in costs in recent months.

    “There were an unusual number of simultaneous challenges this past year,” Jassy said in the letter, before outlining steps Amazon took to rethink certain free shipping options, abandon some of its physical store concepts and significantly reduce overall headcount.

    Amazon disclosed in a securities filing Thursday that Jassy’s pay package last year was valued at some $1.3 million, and that the CEO did not receive any new stock awards in 2022. (When Jassy took over as CEO in 2021, he was awarded a pay package mostly comprised of stock awards that valued his total compensation package at some $212 million.)

    Despite the challenges at Amazon, however, Jassy said in his letter that he finds himself “optimistic and energized by what lies ahead.” Jassy added: “I strongly believe that our best days are in front of us.”


  • Meta stock jumps after company reports first revenue growth in nearly a year | CNN Business

New York (CNN) —

    Facebook-parent Meta on Wednesday reported that it grew sales by 3% during the first three months of the year, reversing a trend of three consecutive quarters of revenue declines and far exceeding Wall Street analysts’ expectations.

    Meta shares jumped as much as 12% in after-hours trading following the report, continuing the company’s strong trajectory since CEO Mark Zuckerberg announced that 2023 would be a “year of efficiency.”

    Another bright spot: user growth was relatively strong compared to recent quarters. The number of monthly active people on Meta’s family of apps grew 5% from the prior year to more than 3.8 billion and Facebook daily active users increased 4% to more than 2 billion.

    “We had a good quarter and our community continues to grow,” Zuckerberg said in a statement Wednesday. “We’re also becoming more efficient so we can build better products faster and put ourselves in a stronger position to deliver our long term vision.”

    But Meta has a long hill to climb.

    The company also reported that profits declined by nearly a quarter from the same period in the prior year, to $5.7 billion. Price per advertisement — an indicator of the health of the company’s core digital ad business — also decreased 17% from the year prior.

    Meta has been in the midst of a massive restructuring, as it attempts to recover from a perfect storm of heightened competition, lingering recession fears resulting in fewer ad dollars and a multibillion dollar effort to build a future version of the internet it calls the metaverse. Meta said in November it would eliminate 11,000 jobs, the single largest round of cuts in its history. And in March, Zuckerberg announced Meta would lay off another 10,000 employees. All told, the cuts will shrink Meta’s workforce by a quarter.

    Meta took a hit of more than $1 billion related to the restructuring in the March quarter, and said it will realize additional charges of around $500 million related to 2023 layoffs by the end of the year.

    Zuckerberg said on a call with analysts Wednesday that when Meta started its “efficiency work” late last year, “our business wasn’t performing as well as I wanted, but now we’re increasingly doing this work from a position of strength.”

    The company said it expects revenue to grow again in the current quarter compared to the prior year. And it slightly lowered its expectations for full-year expenses, potentially buoying investor optimism.

    “The year of efficiency is off to a stronger than expected start for Meta,” Insider Intelligence principal analyst Debra Aho Williamson said in a statement. But she added that the company “can’t afford to sit still in this environment.”

    Like other tech companies, Meta has recently read investor cues and taken to playing up its focus on artificial intelligence rather than the metaverse. The shift comes as Meta contends with the popularity of AI tools from tech firms like Microsoft and OpenAI.

    In his statement with the results Wednesday, Zuckerberg said: “Our AI work is driving good results across our apps and business.” He added in the call that the company’s AI work includes efforts to build AI chat experiences in WhatsApp and Messenger, as well as visual creation tools for posts on Facebook and Instagram and advertisements.


  • Chinese police detain man for allegedly using ChatGPT to spread rumors online | CNN Business



    Hong Kong (CNN) — 

    Police in China have detained a man they say used ChatGPT to create fake news and spread it online, in what state media has called the country’s first criminal case related to the AI chatbot.

    According to a statement from police in the northwest province of Gansu, the suspect allegedly used ChatGPT to generate a bogus report about a train crash, which he then posted online for profit. The article received about 15,000 views, the police said in Sunday’s statement.

    ChatGPT, developed by Microsoft (MSFT)-backed OpenAI, is banned in China, though internet users can use virtual private networks (VPNs) to access it.

    Train crashes have been a sensitive issue in China since 2011, when authorities faced pressure to explain why state media had failed to provide timely updates on a bullet train collision in the city of Wenzhou that resulted in 40 deaths.

    Gansu authorities said the suspect, surnamed Hong, was questioned in the city of Dongguan in southern Guangdong province on May 5.

    “Hong used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated,” the Gansu police said in the statement.

    “His behavior amounted to picking quarrels and provoking trouble,” they added, explaining the offense that Hong was accused of committing.

    Police said the arrest was the first in Gansu since China’s Cyberspace Administration enacted new regulations in January to rein in the use of deepfakes. State broadcaster CGTN says it was the country’s first arrest of a person accused of using ChatGPT to fabricate and spread fake news.

    Formally known as deep synthesis, deepfakes are highly realistic textual and visual content generated by artificial intelligence.

    The new legislation bars users from generating deepfake content on topics already prohibited by existing laws on China’s heavily censored internet. It also outlines takedown procedures for content considered false or harmful.

    The arrest also came amid a 100-day campaign launched by the internet branch of the Ministry of Public Security in March to crack down on the spread of internet rumors.

    Since the beginning of the year, Chinese internet giants such as Baidu (BIDU) and Alibaba (BABA) have sought to catch up with OpenAI, launching their own versions of the ChatGPT service.

    Baidu unveiled “Wenxin Yiyan” or “ERNIE Bot” in March. Two months later, Alibaba launched “Tongyi Qianwen,” which roughly translates as seeking truth by asking a thousand questions.

    In draft guidelines issued last month to solicit public feedback, China’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities and to provide details about the scale and type of data they use, their basic algorithms and other technical information.


  • Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business




    (CNN) — 

    For years, Alexa has been synonymous with virtual assistants that can interact with users and do tasks on their behalf.

    Now Amazon is trying to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products.

    Amazon’s goal is to use AI “to create this great personal assistant,” said Dave Limp, senior VP of devices and services, in a recent interview with CNN. “We’ve been using all forms of AI for a long time, but now that we see this emergence of generative AI, we can accelerate that vision even faster.”

    Generative AI refers to a type of AI that can create new content, such as text and images, in response to user prompts. Limp did not elaborate on how generative AI could be used in Alexa products, but there are clear possibilities.

    In theory, this technology could one day help Alexa have more natural conversations with users, answer more complex questions, and be more creative by telling stories or making up song lyrics in seconds. It could also enable more personalized interactions, allowing the assistant to learn about the device owner’s interests, preferences and better tailor its responses to each person.

    “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Alexa launched nearly a decade ago and, along with Siri, Cortana and other voice assistants, seemed poised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished that faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon is now slashing staff and shelving products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division has not escaped unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees as the global economic outlook continued to worsen. In March, the company said about 9,000 more jobs would be impacted. Limp said his division lost about 2,000 people, about half of whom came from the Alexa team.

    Amazon also shut down some of the products it spun up earlier in the pandemic, such as its wearable fitness brand Halo, which allowed users to ask Alexa questions about their health and wellness. Limp said the company also shelved some “more risky” projects. “I wouldn’t doubt we’ll dust them off at some point and bring them back,” he said. “We’re still taking a lot of risks in this organization.”

    But Limp said Alexa remains a “North Star” for his division. “To give you a sense, there’s still thousands and thousands of people working on Alexa,” he said.

    Amazon is indeed still investing in Alexa and its related Echo smart speaker lineup. Last week, the company unveiled several new products, including the $39.99 Echo Pop and the $89.99 Echo Show 5, its smart speaker with a screen. While the products feature incremental updates, Limp said Amazon’s current lineup contains hints of what’s to come with its AI efforts, beyond generative AI.

    For example, if Alexa is enabled on an Echo Show, where it can rotate and follow users around the room, “you’ll see glimmers of where it’s going over the next months and years,” Limp said.

    But generative AI remains a key focus for the company. Amazon CEO Andy Jassy said in a letter to shareholders in April that the company is focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.”

    The company is reportedly working on adding ChatGPT-like search capabilities for its e-commerce store. Amazon is also rumored to be planning to use generative AI to bring conversational language to a home robot.

    While Limp didn’t comment on the report, he said the end goal has long been for Alexa to communicate with users in a fluid, natural way, whether it’s through an Echo device or other products such as its robotic dog, Astro.

    The concept remains a “hard technical challenge,” he said, but one that is “more tractable” with generative AI. “There’s still some hard corner cases and things to work out,” he said.


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business



    New York (CNN) — 

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify if the image is organic, or if it has been manipulated or generated by AI.
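
    The general mechanism described here (hashing the image at the moment of capture, bundling it with metadata, and signing the bundle so later tampering is detectable) can be illustrated with a short sketch. This is a toy illustration of the concept only, not Truepic's actual design: the field names and the shared HMAC key are invented for the example, and a real system would use per-device asymmetric keys.

```python
import hashlib
import hmac
import json
import time

# Stand-in for a key provisioned to the capture device; purely illustrative.
SECRET_KEY = b"device-provisioned-secret"

def sign_capture(image_bytes: bytes, lat: float, lon: float, device: str) -> dict:
    """Hash the image, record capture metadata, and sign both together."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": int(time.time()),
        "location": {"lat": lat, "lon": lon},
        "device": device,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Return True only if neither the pixels nor the metadata changed."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    if hashlib.sha256(image_bytes).hexdigest() != claimed["sha256"]:
        return False  # image bytes no longer match the signed hash
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

    Verification recomputes the image hash and the signature, so a change to either the pixels or the recorded metadata after capture makes the check fail.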

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

    “The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

    Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and then receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data.

    Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

    Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 for every 1,000 images as well as “annual contract deals” that offer a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when they’re first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
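
    As a rough sketch of what content-credential metadata might carry, the snippet below builds a provenance record (who created the image, what tool produced it, how it was edited) and checks that the record still matches the asset it travels with. The field names here are invented for illustration; the actual C2PA manifest is a detailed formal standard, not a simple dictionary like this.

```python
import hashlib

# Illustrative "content credential"; field names are invented for this sketch
# and do not reflect the real C2PA manifest schema.
def make_credential(image_bytes: bytes, creator: str, tool: str, edits: list) -> dict:
    return {
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
        "creator": creator,        # who made the image
        "generator": tool,         # what produced it (camera, editor, AI model)
        "edit_history": edits,     # how it was changed along the way
    }

def matches_asset(credential: dict, image_bytes: bytes) -> bool:
    """An end user can check the credential still describes this exact file."""
    return credential["asset_hash"] == hashlib.sha256(image_bytes).hexdigest()
```

    In practice the record is also cryptographically signed and embedded in the file so that stripping or altering it is detectable; the hash comparison above only shows the binding between metadata and asset.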

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business




    (CNN) — 

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.


  • What is Threads? Here’s what you need to know about the potential ‘Twitter Killer’ | CNN Business



    New York (CNN) — 

    Facebook-parent Meta on Wednesday officially launched its Twitter competitor, Threads, after first confirming its plans for the app just three months ago.

    Threads is already off to a strong start: the app had received 30 million sign-ups as of Thursday morning, according to the company, including a large number of brands, celebrities, journalists and many other prominent accounts.

    The mood on Threads Wednesday night felt a bit like the first day of school, with early adopters rushing to try out the app and write their first posts — and some questioning whether the app could end up being the “Twitter killer.” As of Thursday morning, Threads was the top free app on Apple’s App Store and a top trending topic on Twitter.

    Threads could pose a serious threat to Twitter, which has faced backlash since Elon Musk took over the platform in October 2022 and has run it with a fly-by-the-seat-of-his-pants approach. But Twitter has become particularly vulnerable in recent days, angering users over a temporary limit on how much content users can view each day. And for Meta, Threads could further expand its empire of popular apps and provide a new platform on which to sell ads.

    Here is everything we know so far about Meta’s Threads:

    Threads is a new app from the parent company of Facebook, Instagram and WhatsApp. The platform looks a lot like Twitter, with a feed of largely text-based posts — although users can also post photos and videos — where people can have real-time conversations.

    Meta said messages posted to Threads will have a 500-character limit. Similar to Twitter, users can reply to, repost and quote others’ Threads posts. But the app also blends Instagram’s existing aesthetic and navigation system, and offers the ability to share posts from Threads directly to Instagram Stories.

    Threads accounts can also be set as public or private. Verified Instagram accounts are automatically verified on Threads.

    “The vision for Threads is to create an open and friendly public space for conversation,” Meta CEO Mark Zuckerberg said in a Threads post following the launch. “We hope to take what Instagram does best and create a new experience around text, ideas, and discussing what’s on your mind.”

    Some users did experience occasional glitches and issues getting content to load in the early hours after Threads launched, but that is to be expected when millions of users are joining and using an app at once.

    Users sign up through their Instagram accounts and keep the same username, password and account name, although they can edit their bio to be unique to Threads. Users can also import the list of accounts they follow directly from Instagram, making it super easy to get up and running on the app.

    But it’s not quite so easy to leave Threads. While users can temporarily deactivate their profiles via the settings section on the app, the company says in its privacy policy that “your Threads profile can only be deleted by deleting your Instagram account.” Some users have also raised concerns about the amount of data that Threads, like Instagram, can collect about users, including location, contacts, search history, browsing history, contact info and more, according to the Apple App Store.

    Threads is available in 100 countries and more than 30 languages via Apple’s iOS and Android, according to the company.

    Threads is just the latest platform launched in recent months in hopes of unseating Twitter as the go-to app for real-time, public conversations. But it may have the greatest chance at success.

    Many Twitter users have expressed desire for an alternative since Musk took over the platform late last year. Frequent technical issues and policy changes have sent some noteworthy Twitter users heading for the exits.

    Meta has at least one significant leg up on Twitter: the size of its existing user base. Meta is hoping to capture at least some of its more than 2 billion global active Instagram users with the new app. That’s compared to Twitter’s active user base, which is somewhere around 250 million.

    “It’ll take some time, but I think there should be a public conversations app with 1 billion+ people on it,” Zuckerberg said in a Threads post. “Twitter has had the opportunity to do this but hasn’t nailed it. Hopefully we will.”

    In a tweet on Thursday, Twitter’s new CEO Linda Yaccarino appeared to acknowledge the rival app’s launch, calling Twitter “irreplaceable.”

    “We’re often imitated – but the Twitter community can never be duplicated,” she said.

    Meta’s existing scale and infrastructure could play to its advantage. Whereas many of the other Twitter competitors rolled out in recent months have required users to join waitlists or receive invitations to sign up, only to have to work to recreate their network on the new site, Threads makes it remarkably easy for users to get started.

    But Instagram CEO Adam Mosseri noted in a video posted to the platform that the challenge for new social media platforms often is not getting users to sign up, but rather keeping them engaged long-term.

    In particular, Meta will have to work to prevent spam, harassment, conspiracy theories and false claims on Threads, issues that have caused many users to sour on Twitter. The new platform’s launch comes after Meta laid off more than 20,000 workers starting last November, including user experience, well-being, policy and risk analytics employees. It also comes as campaign season for the 2024 US Presidential election ramps up, with some experts warning of an incoming wave of misinformation. Meta says its Community Guidelines will apply to Threads, just like its other apps.

    For Meta, Threads could be a way of eking additional engagement time out of its massive existing user base.

    Although there are no ads on the platform just yet, Threads could also ultimately supplement Meta’s core advertising business. Meta’s ad business could use a boost after facing challenges from a broad decline in the online ad market and changes to Apple’s app privacy practices, although, if Twitter’s history is any guide, the format is unlikely to attract as many ad dollars as Meta’s other platforms.

    For Zuckerberg, though, the real draw may be in attempting to best his rival, Musk, with whom he has in recent weeks been making plans to engage in a cage fight. Perhaps winning in the battle of social networks is even better.


  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics




    (CNN) — 

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies’ commitments “real and concrete,” adding they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue they will as a whole raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break into our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary, and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to them; some of the commitments also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.
