ReportWire

Tag: iab-artificial intelligence

  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing number of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as the face of a new crop of AI products that can generate images and text in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.

  • Bill Gates says AI risks are real but nothing we can’t handle | CNN Business

    Bill Gates sounds less worried than some other executives in Silicon Valley about the risks of artificial intelligence.

    In a blog post on Tuesday, the Microsoft co-founder outlined some of the biggest areas of concern with artificial intelligence, including the potential for spreading misinformation and displacing jobs. But he stressed that these risks are “manageable.”

    “This is not the first time a major innovation has introduced new threats that had to be controlled,” Gates wrote. “We’ve done it before.”

    Gates likened AI to previous “transformative” changes in society, such as the introduction of the car, which then required the public to adopt seat belts, speed limits, driver’s licenses and other safety standards. Innovation, he said, can create “a lot of turbulence” in the beginning, but society can “come out better off in the end.”

    Microsoft is one of the leaders in the race to develop and deploy a new crop of generative AI tools into popular products with the promise of helping people be more productive and creative. But a number of prominent figures in the industry have also publicly raised doomsday scenarios about the rapidly evolving technology.

    In late May, tech leaders including Microsoft’s CTO Kevin Scott joined dozens of AI researchers and some celebrities in signing a one-sentence letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Gates has previously said people should not “panic” about apocalyptic AI scenarios. In a blog post earlier this year, Gates wrote: “Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”

    In his blog post this week, Gates said he believes one of the biggest areas of concern for AI is the potential for deepfakes and AI-generated misinformation to undermine elections and democracy. Gates said he is “hopeful” that “AI can help identify deepfakes as well as create them.” He also said laws need to be clear about deepfake usage and labeling “so everyone understands when something they’re seeing or hearing is not genuine.”

    Gates also expressed concern over how AI could make it easier for hackers and even countries to launch cyberattacks on people and governments. Gates urged the development of related cybersecurity measures and for governments to consider creating a global body for AI similar to the International Atomic Energy Agency.

    Gates ticked through other concerns, too, including how AI could take away people’s jobs, perpetuate biases baked into the data on which it’s trained, and even disrupt the way kids learn to write.

    “It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s,” Gates wrote. “Some math teachers worried that students would stop learning how to do basic arithmetic, but others embraced the new technology and focused on the thinking skills behind the arithmetic.”

    Gates said “it’s natural to feel unsettled” during a transition period, but added he is optimistic about the future and how “history shows that it’s possible to solve the challenges created by new technologies.”

    “It’s the most transformative innovation any of us will see in our lifetimes,” he wrote, “and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.”

  • The viral new ‘Drake’ and ‘Weeknd’ song is not what it seems | CNN Business

    One of the buzziest songs recently circulating on TikTok and climbing the Spotify charts featured the familiar voices of best-selling artists Drake and the Weeknd. But there’s a twist: Drake and the Weeknd appear to have had nothing to do with it.

    The viral track, “Heart on my Sleeve,” comes from an anonymous TikTok user named Ghostwriter977, who claims to have used artificial intelligence to generate the voices of Drake and the Weeknd for the track.

    “I was a ghostwriter for years and got paid close to nothing just for major labels to profit,” Ghostwriter977 wrote in the video comments. “The future is here.”

    “Heart on my Sleeve” racked up more than 11 million views across several videos in just a few days and was streamed on Spotify hundreds of thousands of times. The original TikTok video has seemingly been taken down, and the song has since been removed from streaming services including YouTube, Apple Music and Spotify. (TikTok, YouTube, Apple and Spotify did not respond to a request for comment.)

    The exact origin of the song remains unclear, and some have suggested it could be a publicity stunt. But the stunning traction for “Heart on my Sleeve” may only add to the anxiety inside the music industry as it goes on offense against the possible threat posed by a new crop of increasingly powerful AI tools on the market.

    Universal Music Group, the music label that represents Drake, the Weeknd and numerous other superstars, sent urgent letters in April to streaming platforms, including Spotify and Apple Music, asking them to block AI platforms from training on the melodies and lyrics of their copyrighted songs.

    “The training of generative AI using our artists’ music — which represents both a breach of our agreements and a violation of copyright law as well as the availability of infringing content created with generative AI on digital service providers – begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” the company said in a statement this week to CNN.

    The record label said platforms have “a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

    But attempting to crack down on AI-generated music may pose a unique challenge. The legal landscape for AI work remains unclear, the tools to create it are widely accessible and social media makes it easier than ever to distribute it.

    AI-generated music is not new. Taryn Southern’s debut song “Break Free,” which was composed and produced with AI, hit the Top 100 radio charts back in 2018, and VAVA, an AI music artist (i.e. not a human), currently has a single out in Thailand.

    But a new crop of AI tools has made it easier than ever to quickly generate convincing images, audio, video and written work. Some services, such as Boomy, specifically leverage generative AI to make music creation more accessible.

    There’s little known about who is behind the Ghostwriter977 account, or which tools the creator used to make the track. The user did not respond to a CNN request for comment.

    In the bio section of the user’s TikTok account, a link directs users to a page on Laylo, a website where fans can sign up to get notifications from artists when new songs are dropped or merchandise and tickets become available. The company told CNN the account likely registered to build up its fan base and brought in “tens of thousands” of signups in the past few days.

    Laylo CEO Alec Ellin denied that the company was behind the viral track as some have speculated, but Ellin told CNN whoever did make it was “clearly a really savvy creator” and called it “a perfect example of the power of using Laylo to own your audience.”

    Michael Inouye, an analyst at ABI Research, said “Heart on my Sleeve” could have been made in several ways depending on the sophistication of the AI and level of musical talent.

    “If music artists were involved, they could create the background music and the lyrics, and then the AI model could be trained with content from Drake and The Weeknd to replicate their voices and singing styles,” he said. “AI could also have generated most of the song, lyrics and replicated the artists again based on the training data set and any prompts given to direct the AI model.”

    He added that part of the song’s fascination and virality comes from “just how good AI has gotten at creating content, which includes replicating famous people.”

    Roberto Nickson, who is building an AI platform to help boost productivity and workflow, recently posted a video on Twitter showing how easy it is to record a verse and train an AI model to replace his vocals. He used the artist formerly known as Kanye West as an example.

    “The results will blow your mind,” he said. “You’re going to be listening to songs by your favorite artist that are completely indistinguishable and you’re not going to know if it’s them or not.”

    Although the entertainment industry has seen these issues coming, regulations are lagging behind the rapid pace of AI development.

    Audrey Benoualid, an entertainment lawyer based in Los Angeles, said one could argue “Heart on my Sleeve” does not infringe copyright as it appears to be an “original” composition.

    “Ghostwriter also publicized that Drake and The Weeknd were not involved in the making of the song, which could protect them from a ‘passing off’ claim, where profits are generated as consumers are misled into believing the song is actually a Drake-Weeknd collaboration,” she said in an email to CNN.

    However, Benoualid added, machine learning and generative AI programs may also be found to infringe copyright in existing works, either by making copies of those works to train the AI or by generating outputs that are substantially similar to those existing works. “Major labels would undoubtedly, and have already begun to, argue that their copyrights (and their artists’ intellectual property rights) are being infringed,” she said.

    Michael Nash, an executive VP at Universal Music Group, recently wrote in an op-ed that AI music is “diluting the market, making original creations harder to find, and violating artists’ legal rights to compensation from their work.”

    No regulations currently dictate what data AI can and cannot be trained on. But last month, in response to individuals seeking copyright protection for AI-generated works, the US Copyright Office released new guidance around how to register literary, musical, and artistic works made with AI.

    Copyright will be determined on a case-by-case basis, according to the guidance, based on how the AI tool operates and how it was used to create the final piece or work. The US Copyright Office also announced it will seek public input on how the law should apply to copyrighted works AI trains on, and how the office should treat those works.

    “AI and copyright law and the rights of musicians and labels have crashed into one another (once again), and it will take time for the dust to settle,” Benoualid said. “The landscape is anything but clear at the moment.”

    Inouye said that if AI-generated content becomes associated with famous individuals in a negative way, that could be grounds for a lawsuit, not only to take the content down but also to force its creators to cease operations, and potentially to seek damages.

    “On the flip side, if the content were to be popular and the creator were to make revenue off of the artists’ image or likeness then again the artists could similarly request the content to be taken down and potentially sue for any monetary gains,” he said.

    But for now, concerned parties may be forced to play whack-a-mole. While services like Spotify pulled “Heart on my Sleeve,” versions of it appeared to continue circulating as of Tuesday on other online platforms.

    Even a song made with artificial intelligence may find real staying power online.

    – CNN’s Vanessa Yurkevich contributed to this report.

  • US senator introduces bill to create a federal agency to regulate AI | CNN Business

    Washington (CNN) —

    Days after OpenAI CEO Sam Altman testified in front of Congress and proposed creating a new federal agency to regulate artificial intelligence, a US senator has introduced a bill to do just that.

    On Thursday, Colorado Democratic Sen. Michael Bennet unveiled an updated version of legislation he introduced last year that would establish a Federal Digital Platform Commission.

    The updated bill, which was reviewed by CNN, makes numerous changes to more explicitly cover AI products, including by amending the definition of a digital platform to include companies that offer “content primarily generated by algorithmic processes.”

    “There’s no reason that the biggest tech companies on Earth should face less regulation than Colorado’s small businesses – especially as we see technology corrode our democracy and harm our kids’ mental health with virtually no oversight,” Bennet said in a statement. “Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest.”

    The revised bill expands on the definition of an algorithmic process, clarifying that the proposed commission would have jurisdiction over the use of personal data to generate content or to make a decision — two key applications associated with generative AI, the technology behind popular tools such as OpenAI’s viral chatbot, ChatGPT.

    And for the most significant platforms — companies the bill calls “systemically important” — the bill would create requirements for algorithmic audits and public risk assessments of the harms their tools could cause.

    The bill retains existing language mandating that the commission ensure platform algorithms are “fair, transparent, and safe.” And under the bill, the commission would continue to have broad oversight authority over social media sites, search engines and other online platforms.

    But the added emphasis on AI highlights how Congress is rapidly gearing up for policymaking on a cutting-edge technology it is scrambling to understand. The debate over whether the US government should establish a separate federal agency to police AI tools may become a significant focus of those efforts following Altman’s testimony this week.

    Altman suggested in a Senate hearing on Tuesday that such an agency could restrict how AI is developed through licenses or credentialing for AI companies. Some lawmakers appeared receptive to the idea, with Louisiana Republican Sen. John Kennedy even asking Altman whether he would be open to serving as its chair.

    “I love my current job,” Altman demurred, to laughter from the audience.

    Thursday’s bill does not explicitly provide for such a licensing program, though it directs the would-be commission to design rules appropriate for overseeing the industry, according to a Bennet aide. Bennet’s office did not consult with OpenAI on either the original bill or Thursday’s revised version.

    But even as some lawmakers have embraced the concept of a specialized regulator for internet companies — which could conflict with existing cops on the beat at agencies including the Justice Department and the Federal Trade Commission — others have warned of the potential risks of creating a whole new bureaucracy.

    Gary Marcus, a New York University professor and self-described critic of AI “hype,” told lawmakers at Tuesday’s hearing that a separate agency could fall victim to “regulatory capture,” a term that describes when industries gain dominating influence over the government agencies created to hold them accountable.

    Connecticut Democratic Sen. Richard Blumenthal, a former state attorney general who has prosecuted consumer protection cases, said no agency can be effective without proper support.

    “I’ve been doing this stuff for a while,” Blumenthal said. “You can create 10 new agencies, but if you don’t give them the resources — and I’m not just talking about dollars, I’m talking about scientific expertise — [industry] will run circles around them.”

  • Europe is leading the race to regulate AI. Here’s what you need to know | CNN Business

    London (CNN) —

    The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

    It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

    “We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

    Lawmakers have agreed on a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

    “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

    Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation of the technology.

    At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola CEO James Quincey — said AI had the potential to destroy humanity five to 10 years from now.

    Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

    Here are the key takeaways.

    Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

    The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

    Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China, which assign people a “health score” based on their behavior.

    The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

    These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.

    The Act also outlines transparency requirements for AI systems.

    For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deepfake images from real ones and provide safeguards against the generation of illegal content.

    Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

    AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.

    Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

    Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. GDPR sets fines of up to €10 million ($10.8 million), or up to 2% of a firm’s global turnover.
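
    To put those penalty ceilings in perspective, the “whichever is higher” rule is simple enough to sketch in a few lines of Python. This is a purely illustrative, hypothetical calculation that uses only the figures quoted above; the turnover amount is invented for the example, and the GDPR function reflects only the 2% tier cited here:

        # Hypothetical sketch of the fine ceilings described above; not legal guidance.
        def ai_act_max_fine(turnover_eur: float) -> float:
            """Greater of EUR 40 million or 7% of worldwide annual turnover (draft AI Act)."""
            return max(40_000_000, 0.07 * turnover_eur)

        def gdpr_lower_tier_fine(turnover_eur: float) -> float:
            """Greater of EUR 10 million or 2% of global turnover (the GDPR tier cited above)."""
            return max(10_000_000, 0.02 * turnover_eur)

        turnover = 5_000_000_000  # assumption: EUR 5 billion annual turnover, purely illustrative
        print(f"AI Act ceiling: EUR {ai_act_max_fine(turnover):,.0f}")               # EUR 350,000,000
        print(f"GDPR ceiling (2% tier): EUR {gdpr_lower_tier_fine(turnover):,.0f}")  # EUR 100,000,000

    For a company of that size, the percentage cap rather than the flat amount sets the ceiling under both laws, and the AI Act’s ceiling comes out several times higher.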

    Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

    At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

    The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

    “The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

    The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

    Microsoft — which, together with Google, is at the forefront of AI development globally — welcomed progress on the Act but said it looked forward to “further refinement.”

    “We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

    IBM, meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

    The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

    “The law will expand in scope as the technology develops,” Muldoon said.

  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business

    New York (CNN) —

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offers the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation, some specifically targeting “synthetic” or computer-generated content, which they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some who had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled. That is in addition to its civic integrity policy, which prohibits misleading information about electoral processes, and its general misinformation policy, which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”

  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously only available to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into conversation with friends when it’s mentioned with “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it to personalize it more.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, that about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and included the new feature in its in-app parental tools.

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.

  • How the CEO behind ChatGPT won over Congress | CNN Business

    Washington (CNN) —

    OpenAI CEO Sam Altman seems to have achieved in a matter of hours what other tech execs have been struggling to do for years: He charmed the socks off Congress.

    Despite wide-ranging concerns that artificial intelligence tools like OpenAI’s ChatGPT could disrupt democracy, national security, and the economy, Altman’s appearance Tuesday before a Senate subcommittee went so smoothly that viewers could have been forgiven for thinking the year was closer to 2013 than 2023.

    It was a pivotal moment for the AI industry. Altman’s testimony on Tuesday alongside Christina Montgomery, IBM’s chief privacy officer, promised to set the tone for how Washington regulates a technology that many fear could eliminate jobs or destabilize elections.

    But where lawmakers could have followed a familiar pattern, blasting the tech industry with hostile questioning and leveling withering allegations of reckless innovation, members of the Senate Judiciary Committee instead heaped praise on the companies — and often, on Altman in particular.

    The difference seemed to come down to OpenAI calling for proactive government regulation — and persuading lawmakers it was serious. Unlike the long list of social media hearings in recent years, this AI hearing came earlier in OpenAI’s lifecycle and, crucially, before the company or its technology had suffered any high-profile mishaps.

    Altman, more than any other figure in tech, has emerged as the face of a new crop of powerful and disruptive AI tools that can generate compelling written work and images in response to user prompts. Much of the federal government is now racing to figure out how to regulate the cutting-edge technology.

    But after his performance on Tuesday, the CEO whose company helped spark the new AI arms race may have maneuvered himself into a privileged position of influence over the rules that may soon govern the tools he’s developing.

    Altman’s easy-going, plain-spoken demeanor helped disarm skeptical lawmakers and appeared to win over Democrats and Republicans alike. His approach contrasted with the wooden, lawyerly performances that have afflicted some other tech CEOs during their time in the hot seat.

    “I sense there is a willingness to participate here that is genuine and authentic,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the committee’s technology panel.

    New Jersey Democratic Sen. Cory Booker, adopting an unusual level of familiarity with a witness, found himself repeatedly addressing Altman as “Sam,” even as he referred to other panelists by their last names.

    Even Altman’s fellow witnesses couldn’t resist gushing about his style.

    “His sincerity in talking about those [AI] fears is very apparent, physically, in a way that just doesn’t communicate on the television screen,” Gary Marcus, a former New York University professor and a self-described critic of AI “hype,” told lawmakers.

    With a relaxed yet serious tone, Altman did not deflect or shy away from lawmakers’ concerns. He agreed that large-scale manipulation and deception using AI tools are among the technology’s biggest potential flaws. And he validated fears about AI’s impact on workers, acknowledging that it may “entirely automate away some jobs.”

    “If this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”

    Altman’s candor and openness have captivated many in Washington.

    On Monday evening, Altman spoke to a dinner audience of roughly 60 House lawmakers from both parties. One person in the room, speaking on condition of anonymity to discuss a closed-door meeting, described members of Congress as “riveted” by the conversation, which also saw Altman demonstrating ChatGPT’s capabilities “to much amusement” from the audience.

    Lawmakers have spent years railing against social media companies, attacking them for everything from their content moderation decisions to their economic dominance. On Tuesday, they seemed ready — or even relieved — to be dealing with another area of the technology industry.

    Whether this time is truly different remains unclear, though. The AI industry’s biggest players and aspirants include some of the same tech giants Congress has sharply criticized, including Google and Meta. OpenAI is receiving billions of dollars of investment from Microsoft in a multi-year partnership. And with his remarks on Tuesday, Altman appeared to draw from a familiar playbook for Silicon Valley: Referring to technology as merely a neutral tool, acknowledging his industry’s imperfections and inviting regulation.

    Some AI ethicists and experts questioned the value of asking a leading industry spokesperson how he would like to be regulated. Marcus, the New York University professor, cautioned that creating a new federal agency to police AI could lead to “regulatory capture” by the tech industry, but the warning could have applied just as easily to Congress itself.

    “It seems very very bad that ahead of a hearing meant to inform how this sector gets regulated, the CEO of one of the corporations that would be subject to that regulation gets to present a magic show to the regulators,” Emily Bender, a professor of computational linguistics at the University of Washington, said of Altman’s dinner with House lawmakers.

    She added: “Politicians, like journalists, must resist the urge to be impressed.”

    After years of fidgety evasiveness from other tech CEOs, however, lawmakers this week seemed easily wowed by Altman and his seemingly straight-shooting answers.

    Louisiana Republican Sen. John Kennedy, after expressing frustration with IBM’s Montgomery for providing a nuanced answer he couldn’t comprehend, visibly brightened when Altman quickly and smoothly outlined his regulatory proposals in a bulleted list. Kennedy began joking with Altman and even asked whether Altman might consider heading up a hypothetical federal agency charged with regulating the AI industry.

    “I love my current job,” Altman deadpanned, to audience laughter, before offering to send Kennedy’s office some potential candidates.

    Compounding lawmakers’ attraction to Altman is a belief on Capitol Hill that Congress erred in extending broad liability protections to online platforms at the dawn of the internet. That decision, which allowed for an explosion of blogs, e-commerce sites, streaming media and more, has become an object of regret for many lawmakers in the face of alleged mental health harms stemming from social media.

    “I don’t want to repeat that mistake again,” said Judiciary Committee Chairman Dick Durbin.

    Here too, Altman deftly seized an opportunity to curry favor with lawmakers by emphasizing distinctions between his industry and the social media industry.

    “We try to design systems that do not maximize for engagement,” Altman said, alluding to the common criticism that social media algorithms tend to prioritize outrage and negativity to boost usage. “We’re not an advertising-based model; we’re not trying to get people to use it more and more, and I think that’s a different shape than ad-supported social media.”

    In providing simple-sounding solutions with a smile, Altman is doing much more than shaping policy: He is offering members of Congress a shot at redemption, one they seem grateful to accept. Despite the many pitfalls of AI they identified on Tuesday, lawmakers appeared to thoroughly welcome Altman as a partner, not a potential adversary needing oversight and scrutiny.

    “We need to be mindful,” Blumenthal said, “of ways that rules can enable the big guys to get bigger and exclude innovation, and competition, and responsible good guys such as our representative in this industry right now.”

  • Forget about the AI apocalypse. The real dangers are already here | CNN Business

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he still remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings by Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation was among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    [Photo caption: Gary Marcus, professor emeritus at New York University, listens as Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary subcommittee hearing in Washington, DC, on Tuesday, May 16, 2023.]

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data and also from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”

  • Microsoft unveils more secure AI-powered Bing Chat for businesses to ensure ‘data doesn’t leak’ | CNN Business

    Microsoft on Tuesday announced a more secure version of its AI-powered Bing built specifically for businesses and designed to assure professionals that they can safely share potentially sensitive information with a chatbot.

    With Bing Chat Enterprise, the user’s chat data will not be saved, sent to Microsoft’s servers or used to train the AI models, according to the company.

    “What this [update] means is your data doesn’t leak outside the organization,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, told CNN in an interview. “We don’t co-mingle your data with web data, and we don’t save it without your permission. So no data gets saved on the servers, and we don’t use any of your data chats to train the AI models.”

    Since ChatGPT launched late last year, a new crop of powerful AI tools has offered the promise of making workers more productive. But in recent months, some businesses, such as JPMorgan Chase, have banned the use of ChatGPT among their employees, citing security and privacy concerns. Other large companies have reportedly taken similar steps over concerns around sharing confidential information with AI chatbots.

    In April, regulators in Italy issued a temporary ban on ChatGPT in the country after OpenAI disclosed a bug that allowed some users to see the subject lines from other users’ chat histories. The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post at the time.

    Like other tech companies, Microsoft is racing to develop and deploy a range of AI-powered tools for consumers and professionals amid widespread investor enthusiasm for the new technology. Microsoft also said Tuesday that it will add visual searches to its existing AI-powered Bing Chat tool. And the company said Microsoft 365 Copilot, its previously announced AI-powered tool that helps edit, summarize, create and compare documents across its various products, will cost $30 a month for each user.

    Bing Chat Enterprise will be free for Microsoft’s 160 million Microsoft 365 subscribers starting on Tuesday, if a company’s IT department manually turns on the tool. After 30 days, however, Microsoft will roll out access to all users by default; subscribed businesses can disable the tool if they so choose.

    Current conversational AI tools, such as the consumer version of Bing Chat, send data from personal chats back to company servers to train and improve the underlying AI models.

    Microsoft’s new enterprise option is otherwise identical to the consumer version of Bing, but it will not recall conversations with users, so each session starts from scratch. (Bing recently started to enable saved chats on its consumer chat model.)

    With these changes, Microsoft, which uses OpenAI’s technology to power its Bing chat tool, said workers can have “complete confidence” their data “won’t be leaked outside of the organization.”

    To access the tool, a user will sign in to Bing with their work credentials, and the system will automatically detect the account and put it into a protected mode, according to Microsoft. Above the “ask me anything” bar, a notice reads: “Your personal and company data are protected in this chat.”

    In a demo video shown to CNN ahead of its launch, Microsoft showed how a user could type confidential details into Bing Chat Enterprise, such as someone sharing financial information as part of preparing a bid to buy a building. With the new tool, the user could ask Bing Chat to create a table comparing the property to neighboring buildings and to write an analysis highlighting the strengths and weaknesses of their bid relative to other local bids.

    Beyond trying to ease privacy and security concerns around AI in the workplace, Mehdi also addressed the problem of factual errors. To reduce the possibility of inaccuracies, or “hallucinations,” as some in the industry call them, he suggested users write clearer, more specific prompts and check the citations included in responses.

  • The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims | CNN Business

    Washington
    CNN
     — 

    An AI policy think tank wants the US government to investigate OpenAI and its wildly popular GPT artificial intelligence product, claiming that algorithmic bias, privacy concerns and the technology’s tendency to produce sometimes inaccurate results may violate federal consumer protection law.

    The Federal Trade Commission should prohibit OpenAI from releasing future versions of GPT, the Center for AI and Digital Policy (CAIDP) said Thursday in an agency complaint, and establish new regulations for the rapidly growing AI sector.

    The complaint seeks to bring the full force of the FTC’s broad consumer protection powers to bear against what CAIDP portrayed as a Wild West of runaway experimentation in which consumers pay for the unintended consequences of AI development. And it could prove to be an early test of the US government’s appetite for directly regulating AI, as tech-skeptic officials such as FTC Chair Lina Khan have warned of the dangers of unchecked data use for commercial purposes and of novel ways that tech companies may try to entrench monopolies.

    The FTC declined to comment. OpenAI didn’t immediately respond to a request for comment.

    “We believe that the FTC should look closely at OpenAI and GPT-4,” said Marc Rotenberg, CAIDP’s president and a longtime consumer protection advocate on technology issues.

    The complaint attacks a range of risks associated with generative artificial intelligence, which has captured the world’s attention after OpenAI’s ChatGPT — powered by an earlier version of the GPT product — was first released to the public late last year. Everyday internet users have used ChatGPT to write poetry, create software and get answers to questions, all within seconds and with surprising sophistication. Microsoft and Google have both begun to integrate that same type of AI into their search products, with Microsoft’s Bing running on the GPT technology itself.

    But the race for dominance in a seemingly new field has also produced unsettling or simply flat-out incorrect results, such as confident claims that Feb. 12, 2023 came before Dec. 16, 2022. In industry parlance, these types of mistakes are known as “AI hallucinations” — and they should be considered legally enforceable violations, CAIDP argued in its complaint.

    “Many of the problems associated with GPT-4 are often described as ‘misinformation,’ ‘hallucinations,’ or ‘fabrications.’ But for the purpose of the FTC, these outputs should best be understood as ‘deception,’” the complaint said, referring to the FTC’s broad authority to prosecute unfair or deceptive business acts or practices.

    The complaint acknowledges that OpenAI has been upfront about many of the limitations of its algorithms. For example, the white paper linked to GPT’s latest release, GPT-4, explains that the model may “produce content that is nonsensical or untruthful in relation to certain sources.” OpenAI also makes similar disclosures about the possibility that tools like GPT can lead to broad-based discrimination against minorities or other vulnerable groups.

    But in addition to arguing that those outcomes themselves may be unfair or deceptive, CAIDP also alleges that OpenAI has violated the FTC’s AI guidelines by trying to offload responsibility for those risks onto its clients who use the technology.

    The complaint alleges that OpenAI’s terms require news publishers, banks, hospitals and other institutions that deploy GPT to include a disclaimer about the limitations of artificial intelligence. That does not insulate OpenAI from liability, according to the complaint.

    Citing a March FTC advisory on chatbots, CAIDP wrote: “Recently [the] FTC stated that ‘Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.’”

    Artificial intelligence also stands to have vast implications for consumer privacy and cybersecurity, said CAIDP, issues that sit squarely within the FTC’s jurisdiction but that the agency has not studied in connection with GPT’s inner workings.

  • Meta stock jumps after company reports first revenue growth in nearly a year | CNN Business

    New York
    CNN
     — 

    Facebook-parent Meta on Wednesday reported that it grew sales by 3% during the first three months of the year, reversing a trend of three consecutive quarters of revenue declines and far exceeding Wall Street analysts’ expectations.

    Meta shares jumped as much as 12% in after-hours trading following the report, continuing the company’s strong trajectory since CEO Mark Zuckerberg announced that 2023 would be a “year of efficiency.”

    Another bright spot: user growth was relatively strong compared to recent quarters. The number of monthly active people on Meta’s family of apps grew 5% from the prior year to more than 3.8 billion and Facebook daily active users increased 4% to more than 2 billion.

    “We had a good quarter and our community continues to grow,” Zuckerberg said in a statement Wednesday. “We’re also becoming more efficient so we can build better products faster and put ourselves in a stronger position to deliver our long term vision.”

    But Meta has a long hill to climb.

    The company also reported that profits declined by nearly a quarter from the same period a year earlier, to $5.7 billion. Price per advertisement — an indicator of the health of the company’s core digital ad business — also decreased by 17% from the year prior.

    Meta has been in the midst of a massive restructuring as it attempts to recover from a perfect storm: heightened competition, lingering recession fears that have meant fewer ad dollars, and a multibillion-dollar effort to build a future version of the internet it calls the metaverse. Meta said in November it would eliminate 11,000 jobs, the single largest round of cuts in its history. And in March, Zuckerberg announced Meta would lay off another 10,000 employees. All told, the cuts will shrink Meta’s workforce by a quarter.

    Meta took a hit of more than $1 billion related to the restructuring in the March quarter, and said it will realize additional charges of around $500 million related to 2023 layoffs by the end of the year.

    Zuckerberg said on a call with analysts Wednesday that when Meta started its “efficiency work” late last year, “our business wasn’t performing as well as I wanted, but now we’re increasingly doing this work from a position of strength.”

    The company said it expects revenue to grow again in the current quarter compared to the prior year. And it slightly lowered its expectations for full-year expenses, potentially buoying investor optimism.

    “The year of efficiency is off to a stronger than expected start for Meta,” Insider Intelligence principal analyst Debra Aho Williamson said in a statement. But she added that the company “can’t afford to sit still in this environment.”

    Like other tech companies, Meta has recently read investor cues and taken to playing up its focus on artificial intelligence rather than the metaverse. The shift comes as Meta contends with the popularity of AI tools from tech firms like Microsoft and OpenAI.

    In his statement with the results Wednesday, Zuckerberg said: “Our AI work is driving good results across our apps and business.” He added in the call that the company’s AI work includes efforts to build AI chat experiences in WhatsApp and Messenger, as well as visual creation tools for posts on Facebook and Instagram and advertisements.

  • How the technology behind ChatGPT could make mind-reading a reality | CNN Business

    CNN
     — 

    On a recent Sunday morning, I found myself in a pair of ill-fitting scrubs, lying flat on my back in the claustrophobic confines of an fMRI machine at a research facility in Austin, Texas. “The things I do for television,” I thought.

    Anyone who has had an MRI or fMRI scan will tell you how noisy it is: electric currents swirl, creating a powerful magnetic field that produces detailed scans of your brain. On this occasion, however, I could barely hear the loud cranking of the mechanical magnets, because I was given a pair of specialized earphones that began playing segments from The Wizard of Oz audiobook.

    Why?

    Neuroscientists at the University of Texas at Austin have figured out a way to translate scans of brain activity into words using the very same artificial intelligence technology that powers the groundbreaking chatbot ChatGPT.

    The breakthrough could revolutionize how people who have lost the ability to speak can communicate. It’s just one pioneering application of AI developed in recent months as the technology continues to advance and looks set to touch every part of our lives and our society.

    “So, we don’t like to use the term mind reading,” Alexander Huth, assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We think it conjures up things that we’re actually not capable of.”

    Huth volunteered to be a research subject for this study, spending upward of 20 hours in the confines of an fMRI machine listening to audio clips while the machine snapped detailed pictures of his brain.

    An artificial intelligence model analyzed his brain and the audio he was listening to and, over time, was eventually able to predict the words he was hearing just by watching his brain.

    The researchers used GPT-1, the first language model from San Francisco-based startup OpenAI, which was developed with a massive database of books and websites. By analyzing all this data, the model learned how sentences are constructed — essentially how humans talk and think.

    The researchers trained the AI to analyze the activity of Huth and other volunteers’ brains while they listened to specific words. Eventually the AI learned enough that it could predict what Huth and others were listening to or watching just by monitoring their brain activity.
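
    To make that pipeline concrete, below is a minimal, self-contained sketch of the decoding idea in Python. It is not the researchers’ code: the vocabulary, vectors and scoring are toy stand-ins. The real system pairs GPT-1’s probabilities over word sequences with an “encoding model,” fitted on hours of a subject’s scans, that predicts brain responses from text; candidate sequences are kept when their predicted response best matches the observed scan.

    ```python
    # Toy sketch of fMRI-to-text decoding via beam search (all values hypothetical).
    import numpy as np

    rng = np.random.default_rng(0)

    VOCAB = ["dorothy", "walked", "along", "the", "yellow", "brick", "road"]
    DIM = 16  # stand-in for the dimensionality of an fMRI feature vector

    # Toy "encoding model": maps a word sequence to a predicted brain response.
    # In the published work this is a regression fitted per subject.
    WORD_VECS = {w: rng.normal(size=DIM) for w in VOCAB}

    def predict_response(words):
        return np.mean([WORD_VECS[w] for w in words], axis=0)

    def lm_log_prior(prefix, word):
        # Toy language-model prior that mildly penalizes repeats; the real
        # system uses GPT-1 next-word probabilities here.
        return np.log(0.5) if word in prefix else 0.0

    def decode(observed, length=3, beam=3):
        """Keep the candidate word sequences whose predicted brain
        response best matches the observed scan, weighted by the prior."""
        beams = [[]]
        for _ in range(length):
            scored = []
            for prefix in beams:
                for w in VOCAB:
                    cand = prefix + [w]
                    fit = -np.linalg.norm(predict_response(cand) - observed)
                    scored.append((fit + lm_log_prior(prefix, w), cand))
            scored.sort(key=lambda s: s[0], reverse=True)
            beams = [cand for _, cand in scored[:beam]]
        return beams[0]

    # Simulate a noisy scan of someone hearing "yellow brick road",
    # then decode it; the result should contain those words.
    observed = predict_response(["yellow", "brick", "road"]) + rng.normal(scale=0.05, size=DIM)
    print(decode(observed))
    ```

    The sketch only illustrates the search-and-score structure; actual decoding quality depends entirely on the per-subject encoding model, which is why Huth’s brain, and not mine, could be decoded after roughly 20 hours of training data.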

    I spent less than a half-hour in the machine and, as expected, the AI wasn’t able to decode that I had been listening to a portion of The Wizard of Oz audiobook that described Dorothy making her way along the yellow brick road.

    Huth listened to the same audio, but because the AI model had been trained on his brain, it was able to accurately predict parts of the audio he was listening to.

    The technology is still in its infancy, and while it shows great promise, its limitations might be a source of relief to some. AI can’t easily read our minds, yet.

    “The real potential application of this is in helping people who are unable to communicate,” Huth explained.

    He and other researchers at UT Austin believe the innovative technology could be used in the future by people with “locked-in” syndrome, stroke victims and others whose brains are functioning but who are unable to speak.

    “Ours is the first demonstration that we can get this level of accuracy without brain surgery. So we think that this is kind of step one along this road to actually helping people who are unable to speak without them needing to get neurosurgery,” he said.

    While breakthrough medical advances are no doubt good news and potentially life-changing for patients struggling with debilitating ailments, the technology also raises questions about how it could be applied in controversial settings.

    Could it be used to extract a confession from a prisoner? Or to expose our deepest, darkest secrets?

    The short answer, Huth and his colleagues say, is no — not at the moment.

    For starters, brain scans need to occur in an fMRI machine, the AI technology needs to be trained on an individual’s brain for many hours and, according to the Texas researchers, subjects need to give their consent. If a person actively resists listening to the audio or thinks about something else, the brain scans will not succeed.

    “We think that everyone’s brain data should be kept private,” said Jerry Tang, the lead author on a paper published earlier this month detailing his team’s findings. “Our brains are kind of one of the final frontiers of our privacy.”

    Tang explained, “obviously there are concerns that brain decoding technology could be used in dangerous ways.” Brain decoding is the term the researchers prefer to use instead of mind reading.

    “I feel like mind reading conjures up this idea of getting at the little thoughts that you don’t want to let slip, little like reactions to things. And I don’t think there’s any suggestion that we can really do that with this kind of approach,” Huth explained. “What we can get is the big ideas that you’re thinking about. The story that somebody is telling you, if you’re trying to tell a story inside your head, we can kind of get at that as well.”

    Last week, the makers of generative AI systems, including OpenAI CEO Sam Altman, descended on Capitol Hill to testify before a Senate committee about the risks posed by the powerful technology. Altman warned that the development of AI without guardrails could “cause significant harm to the world” and urged lawmakers to implement regulations to address concerns.

    Echoing the AI warning, Tang told CNN that lawmakers need to take “mental privacy” seriously to protect “brain data” — our thoughts — two of the more dystopian terms I’ve heard in the era of AI.

    While the technology at the moment only works in very limited cases, that might not always be the case.

    “It’s important not to get a false sense of security and think that things will be this way forever,” Tang warned. “Technology can improve and that could change how well we can decode and change whether decoders require a person’s cooperation.”

  • Google is using AI to change how you shop | CNN Business

    CNN
     — 

    Google wants to make it easier for online shoppers to know how clothing will look on them before making a purchase.

    The company on Wednesday announced a new virtual try-on feature that uses generative AI, the same technology underpinning a new crop of chatbots and image creation tools, to show clothes on a wide selection of body types.

    With the feature, shoppers can see how an item would drape, fold, cling, stretch or form wrinkles and shadows on a diverse set of models in various poses, according to the company.

    Google is also launching a feature that helps users find similar clothing pieces in different colors, patterns or styles, from merchants across the web, using a visual matching algorithm powered by AI.
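
    To make “visual matching” concrete, here is a minimal sketch of the general technique rather than Google’s own implementation, which has not been published: embed each product image as a vector with a vision model, then rank catalog items by cosine similarity to the query image’s embedding. The item names and random “embeddings” below are hypothetical stand-ins.

    ```python
    # Toy sketch of embedding-based visual matching (all values hypothetical).
    import numpy as np

    rng = np.random.default_rng(1)
    DIM = 64  # stand-in for a vision model's embedding size

    # Hypothetical catalog of product-image embeddings. In a real system each
    # vector would come from a trained vision model, with an approximate
    # nearest-neighbor index used at catalog scale.
    catalog = {f"item_{i}": rng.normal(size=DIM) for i in range(1000)}

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def most_similar(query_vec, catalog, k=5):
        """Rank catalog items by cosine similarity to the query embedding."""
        scores = {name: cosine(query_vec, vec) for name, vec in catalog.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    # Simulate a query photo that closely resembles item_42; a similar piece
    # in a different color or pattern would land nearby in embedding space.
    query = catalog["item_42"] + rng.normal(scale=0.1, size=DIM)
    print(most_similar(query, catalog))  # item_42 should rank first
    ```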

    These efforts are part of Google’s bigger push to defend its search engine from the threat posed by a wave of new AI-powered tools in the wake of the viral success of ChatGPT. At the Google I/O developer conference last month, the company spent more than 90 minutes teasing a long list of AI announcements, including expanding access to its existing chatbot Bard and bringing new AI capabilities to Google Search.

    Google said it developed the virtual try-on option using many pairs of images of more than 80 models standing forward and sideways, from sizes XS to XL, and with varying skin tones, body shapes and ethnic backgrounds. The AI-powered tool then learned to match the shape of certain shirts in those positions to generate realistic images of the person from all angles.

    The feature will initially work with women’s tops from brands such as Anthropologie, Loft, H&M and Everlane. Google said it will expand to men’s shirts in the future. Google also said the tool will get more precise over time.

    Google isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. And eBay recently rolled out an AI tool to help sellers generate product listing descriptions.

  • Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business

    Washington
    CNN
     — 

    Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

    The list of more than 8,000 authors includes some of the world’s most celebrated writers, among them Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, James Patterson, Jodi Picoult and Philip Pullman.

    In an open letter posted Tuesday by the Authors Guild, the writers accused AI companies of unfairly profiting from their work.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter said. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    Tuesday’s letter was addressed to the CEOs of ChatGPT-maker OpenAI, Facebook-parent Meta, Google, Stability AI, IBM and Microsoft. Most of the companies didn’t immediately respond to a request for comment. Meta, Microsoft and Stability AI declined to comment.

    Much of the tech industry is now working to develop AI tools that can generate compelling images and written work in response to user prompts. These tools are built on large language models, which are trained on vast troves of information online. But recently, there has been growing pressure on tech companies over alleged intellectual property violations with this training process.

    This month, comedian Sarah Silverman and two authors filed a copyright lawsuit against OpenAI and Meta, while a proposed class-action suit accused Google of “stealing everything ever created and shared on the internet by hundreds of millions of Americans,” including copyrighted content. Google has called the lawsuit “baseless,” saying it has been upfront for years that it uses public data to train its algorithms. OpenAI did not previously respond to a request for comment on the suit.

    In addition to demanding compensation “for the past and ongoing use of our works in your generative AI programs,” the thousands of authors who signed the letter this week called on AI companies to seek permission before using the copyrighted material. They also urged the companies to pay writers when their work is featured in the results of generative AI, “whether or not the outputs are infringing under current law.”

    The letter also cites this year’s Supreme Court holding in Warhol v Goldsmith, which found that the late artist Andy Warhol infringed on a photographer’s copyright when he created a series of silk screens based on a photograph of the late singer Prince. The court ruled that Warhol did not sufficiently “transform” the underlying photograph so as to avoid copyright infringement.

    “The high commerciality of your use argues against fair use,” the authors wrote to the AI companies.

    In May, OpenAI CEO Sam Altman appeared to acknowledge that more needs to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.

    – CNN’s Catherine Thorbecke contributed to this report.
