ReportWire

Tag: iab-artificial intelligence

  • Microsoft unveils more secure AI-powered Bing Chat for businesses to ensure ‘data doesn’t leak’ | CNN Business

    Microsoft unveils more secure AI-powered Bing Chat for businesses to ensure ‘data doesn’t leak’ | CNN Business

    [ad_1]



    CNN
     — 

    Microsoft on Tuesday announced a more secure version of its AI-powered Bing specifically for businesses and designed to assure professionals they can safely share potentially sensitive information with a chatbot.

    With Bing Chat Enterprise, the user’s chat data will not be saved, sent to Microsoft’s servers or used to train the AI models, according to the company.

    “What this [update] means is your data doesn’t leak outside the organization,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, told CNN in an interview. “We don’t co-mingle your data with web data, and we don’t save it without your permission. So no data gets saved on the servers, and we don’t use any of your data chats to train the AI models.”

    Since ChatGPT launched late last year, a new crop of powerful AI tools has offered the promise of making workers more productive. But in recent months, some businesses such as JPMorgan Chase banned the use of ChatGPT among its employees, citing security and privacy concerns. Other large companies have reportedly taken similar steps over concerns around sharing confidential information with AI chatbots.

    In April, regulators in Italy issued a temporary ban on ChatGPT in the country after OpenAI disclosed a bug that allowed some users to see the subject lines from other users’ chat histories. The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post at the time.

    Like other tech companies, Microsoft is racing to develop and deploy a range of AI-powered tools for consumers and professionals amid widespread investor enthusiasm for the new technology. Microsoft also said Tuesday that it will add visual searches to its existing AI-powered Bing Chat tool. And the company said the Microsoft 365 Co-pilot, its previously announced AI-powered tool that helps edit, summarize, create and compare documents across its various products, will cost $30 a month for each user.

    Bing Chat Enterprise will be free for all of its 160 million Microsoft 365 subscribers starting on Tuesday, if a company’s IT department manually turns on the tool. After 30 days, however, Microsoft will roll out access to all users by default; subscribed businesses can disable the tool if they so choose.

    Current conversational AI tools such as the consumer version of Bing Chat send data from personal chats to their servers to train and improve its AI model.

    Microsoft’s new enterprise option is identical to the consumer version of Bing but it will not recall conversations with users, so they’ll need to go back and start from scratch each time. (Bing recently started to enable saved chats on its consumer chat model.)

    With these changes, Microsoft, which uses OpenAI’s technology to power its Bing chat tool, said workers can have “complete confidence” their data “won’t be leaked outside of the organization.”

    To access the tool, a user will sign into the Bing browser with their work credentials and the system will automatically detect the account and put it into a protected mode, according to Microsoft. Above the “ask me anything” bar reads: “Your personal and company data are protected in this chat.”

    In a demo video shown to CNN ahead of its launch, Microsoft showed how a user could type confidential details into Bing Chat Enterprise, such as an someone sharing financial information as part of preparing a bid to buy a building. With the new tool, the user could ask Bing Chat to create a table to compare the property to other neighboring buildings and write an analysis that highlights the strengths and weaknesses of their bid relative to other local bids.

    In addition to trying to ease privacy and security concerns around AI in the workplace, Mehdi also addressed the problem of factual errors. To reduce the possibility of inaccuracies or “hallucinations,” as some in the industry call it, he suggested users write clear, better prompts and check the included citations.

    [ad_2]

    Source link

  • The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims | CNN Business

    The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims | CNN Business

    [ad_1]


    Washington
    CNN
     — 

    An AI policy think tank wants the US government to investigate OpenAI and its wildly popular GPT artificial intelligence product, claiming that algorithmic bias, privacy concerns and the technology’s tendency to produce sometimes inaccurate results may violate federal consumer protection law.

    The Federal Trade Commission should prohibit OpenAI from releasing future versions of GPT, the Center for AI and Digital Policy (CAIDP) said Thursday in an agency complaint, and establish new regulations for the rapidly growing AI sector.

    The complaint seeks to bring the full force of the FTC’s broad consumer protection powers to bear against what CAIDP portrayed as a Wild West of runaway experimentation in which consumers pay for the unintended consequences of AI development. And it could prove to be an early test of the US government’s appetite for directly regulating AI, as tech-skeptic officials such as FTC Chair Lina Khan have warned of the dangers of unchecked data use for commercial purposes and of novel ways that tech companies may try to entrench monopolies.

    The FTC declined to comment. OpenAI didn’t immediately respond to a request for comment.

    “We believe that the FTC should look closely at OpenAI and GPT-4,” said Marc Rotenberg, CAIDP’s president and a longtime consumer protection advocate on technology issues.

    The complaint attacks a range of risks associated with generative artificial intelligence, which has captured the world’s attention after OpenAI’s ChatGPT — powered by an earlier version of the GPT product — was first released to the public late last year. Everyday internet users have used ChatGPT to write poetry, create software and get answers to questions, all within seconds and with surprising sophistication. Microsoft and Google have both begun to integrate that same type of AI into their search products, with Microsoft’s Bing running on the GPT technology itself.

    But the race for dominance in a seemingly new field has also produced unsettling or simply flat-out incorrect results, such as confident claims that Feb. 12, 2023 came before Dec. 16, 2022. In industry parlance, these types of mistakes are known as “AI hallucinations” — and they should be considered legally enforceable violations, CAIDP argued in its complaint.

    “Many of the problems associated with GPT-4 are often described as ‘misinformation,’ ‘hallucinations,’ or ‘fabrications.’ But for the purpose of the FTC, these outputs should best be understood as ‘deception,’” the complaint said, referring to the FTC’s broad authority to prosecute unfair or deceptive business acts or practices.

    The complaint acknowledges that OpenAI has been upfront about many of the limitations of its algorithms. For example, the white paper linked to GPT’s latest release, GPT-4, explains that the model may “produce content that is nonsensical or untruthful in relation to certain sources.” OpenAI also makes similar disclosures about the possibility that tools like GPT can lead to broad-based discrimination against minorities or other vulnerable groups.

    But in addition to arguing that those outcomes themselves may be unfair or deceptive, CAIDP also alleges that OpenAI has violated the FTC’s AI guidelines by trying to offload responsibility for those risks onto its clients who use the technology.

    The complaint alleges that OpenAI’s terms require news publishers, banks, hospitals and other institutions that deploy GPT to include a disclaimer about the limitations of artificial intelligence. That does not insulate OpenAI from liability, according to the complaint.

    Citing a March FTC advisory on chatbots, CAIDP wrote: “Recently [the] FTC stated that ‘Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.’”

    Artificial intelligence also stands to have vast implications for consumer privacy and cybersecurity, said CAIDP, issues that sit squarely within the FTC’s jurisdiction but that the agency has not studied in connection with GPT’s inner workings.

    [ad_2]

    Source link

  • Meta stock jumps after company reports first revenue growth in nearly a year | CNN Business

    Meta stock jumps after company reports first revenue growth in nearly a year | CNN Business

    [ad_1]


    New York
    CNN
     — 

    Facebook-parent Meta on Wednesday reported that it grew sales by 3% during the first three months of the year, reversing a trend of three consecutive quarters of revenue declines and far exceeding Wall Street analysts’ expectations.

    Meta shares jumped as much as 12% in after-hours trading following the report, continuing the company’s strong trajectory since Zuckerberg announced that 2023 would be a “year of efficiency.”

    Another bright spot: user growth was relatively strong compared to recent quarters. The number of monthly active people on Meta’s family of apps grew 5% from the prior year to more than 3.8 billion and Facebook daily active users increased 4% to more than 2 billion.

    “We had a good quarter and our community continues to grow,” Zuckerberg said in a statement Wednesday. “We’re also becoming more efficient so we can build better products faster and put ourselves in a stronger position to deliver our long term vision.”

    But Meta has a long hill to climb.

    The company also reported that profits declined by nearly a quarter compared to the same period in the prior year to $5.7 billion. Price per advertisement — an indicator of the health of the company’s core digital ad business — also decreased by 17% from the year prior.

    Meta has been in the midst of a massive restructuring, as it attempts to recover from a perfect storm of heightened competition, lingering recession fears resulting in fewer ad dollars and a multibillion dollar effort to build a future version of the internet it calls the metaverse. Meta said in November it would eliminate 11,000 jobs, the single largest round of cuts in its history. And in March, Zuckerberg announced Meta would lay off another 10,000 employees. All told, the cuts will shrink Meta’s workforce by a quarter.

    Meta took a hit of more than $1 billion related to the restructuring in the March quarter, and said it will realize additional charges of around $500 million related to 2023 layoffs by the end of the year.

    Zuckerberg said on a call with analysts Wednesday that when Meta started its “efficiency work” late last year, “our business wasn’t performing as well as I wanted, but now we’re increasingly doing this work from a position of strength.”

    The company said it expects revenue to grow again in the current quarter compared to the prior year. And it slightly lowered its expectations for full-year expenses, potentially buoying investor optimism.

    “The year of efficiency is off to a stronger than expected start for Meta,” Insider Intelligence principal analyst Debra Aho Williamson said in a statement. But she added that the company “can’t afford to sit still in this environment.”

    Like other tech companies, Meta has recently read investor cues and taken to playing up its focus on artificial intelligence rather than the metaverse. The shift comes as Meta contends with the popularity of AI tools from tech firms like Microsoft and OpenAI.

    In his statement with the results Wednesday, Zuckerberg said: “Our AI work is driving good results across our apps and business.” He added in the call that the company’s AI work includes efforts to build AI chat experiences in WhatsApp and Messenger, as well as visual creation tools for posts on Facebook and Instagram and advertisements.

    [ad_2]

    Source link

  • How the technology behind ChatGPT could make mind-reading a reality | CNN Business

    How the technology behind ChatGPT could make mind-reading a reality | CNN Business

    [ad_1]



    CNN
     — 

    On a recent Sunday morning, I found myself in a pair of ill-fitting scrubs, lying flat on my back in the claustrophobic confines of an fMRI machine at a research facility in Austin, Texas. “The things I do for television,” I thought.

    Anyone who has had an MRI or fMRI scan will tell you how noisy it is — electric currents swirl creating a powerful magnetic field that produces detailed scans of your brain. On this occasion, however, I could barely hear the loud cranking of the mechanical magnets, I was given a pair of specialized earphones that began playing segments from The Wizard of Oz audiobook.

    Why?

    Neuroscientists at the University of Texas in Austin have figured out a way to translate scans of brain activity into words using the very same artificial intelligence technology that powers the groundbreaking chatbot ChatGPT.

    The breakthrough could revolutionize how people who have lost the ability to speak can communicate. It’s just one pioneering application of AI developed in recent months as the technology continues to advance and looks set to touch every part of our lives and our society.

    “So, we don’t like to use the term mind reading,” Alexander Huth, assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We think it conjures up things that we’re actually not capable of.”

    Huth volunteered to be a research subject for this study, spending upward of 20 hours in the confines of an fMRI machine listening to audio clips while the machine snapped detailed pictures of his brain.

    An artificial intelligence model analyzed his brain and the audio he was listening to and, over time, was eventually able to predict the words he was hearing just by watching his brain.

    The researchers used the San Francisco-based startup OpenAI’s first language model, GPT-1, that was developed with a massive database of books and websites. By analyzing all this data, the model learned how sentences are constructed — essentially how humans talk and think.

    The researchers trained the AI to analyze the activity of Huth and other volunteers’ brains while they listened to specific words. Eventually the AI learned enough that it could predict what Huth and others were listening to or watching just by monitoring their brain activity.

    I spent less than a half-hour in the machine and, as expected, the AI wasn’t able to decode that I had been listening to a portion of The Wizard of Oz audiobook that described Dorothy making her way along the yellow brick road.

    Huth listened to the same audio but because the AI model had been trained on his brain it was accurately able to predict parts of the audio he was listening to.

    While the technology is still in its infancy and shows great promise, the limitations might be a source of relief to some. AI can’t easily read our minds, yet.

    “The real potential application of this is in helping people who are unable to communicate,” Huth explained.

    He and other researchers at UT Austin believe the innovative technology could be used in the future by people with “locked-in” syndrome, stroke victims and others whose brains are functioning but are unable to speak.

    “Ours is the first demonstration that we can get this level of accuracy without brain surgery. So we think that this is kind of step one along this road to actually helping people who are unable to speak without them needing to get neurosurgery,” he said.

    While breakthrough medical advances are no doubt good news and potentially life-changing for patients struggling with debilitating ailments, it also raises questions about how the technology could be applied in controversial settings.

    Could it be used to extract a confession from a prisoner? Or to expose our deepest, darkest secrets?

    The short answer, Huth and his colleagues say, is no — not at the moment.

    For starters, brain scans need to occur in an fMRI machine, the AI technology needs to be trained on an individual’s brain for many hours, and, according to the Texas researchers, subjects need to give their consent. If a person actively resists listening to audio or thinks about something else the brain scans will not be a success.

    “We think that everyone’s brain data should be kept private,” said Jerry Tang, the lead author on a paper published earlier this month detailing his team’s findings. “Our brains are kind of one of the final frontiers of our privacy.”

    Tang explained, “obviously there are concerns that brain decoding technology could be used in dangerous ways.” Brain decoding is the term the researchers prefer to use instead of mind reading.

    “I feel like mind reading conjures up this idea of getting at the little thoughts that you don’t want to let slip, little like reactions to things. And I don’t think there’s any suggestion that we can really do that with this kind of approach,” Huth explained. “What we can get is the big ideas that you’re thinking about. The story that somebody is telling you, if you’re trying to tell a story inside your head, we can kind of get at that as well.”

    Last week, the makers of generative AI systems, including OpenAI CEO Sam Altman, descended on Capitol Hill to testify before a Senate committee over lawmakers’ concerns of the risks posed by the powerful technology. Altman warned that the development of AI without guardrails could “cause significant harm to the world” and urged lawmakers to implement regulations to address concerns.

    Echoing the AI warning, Tang told CNN that lawmakers need to take “mental privacy” seriously to protect “brain data” — our thoughts — two of the more dystopian terms I’ve heard in the era of AI.

    While the technology at the moment only works in very limited cases, that might not always be the case.

    “It’s important not to get a false sense of security and think that things will be this way forever,” Tang warned. “Technology can improve and that could change how well we can decode and change whether decoders require a person’s cooperation.”

    [ad_2]

    Source link

  • Google is using AI to change how you shop | CNN Business

    Google is using AI to change how you shop | CNN Business

    [ad_1]



    CNN
     — 

    Google wants to make it easier for online shoppers to know how clothing will look on them before making a purchase.

    The company on Wednesday announced a new virtual try-on feature that uses generative AI, the same technology underpinning a new crop of chatbots and image creation tools, to show clothes on a wide selection of body types.

    With the feature, shoppers can see how an item would drape, fold, cling, stretch or form wrinkles and shadows on a diverse set of models in various poses, according to the company.

    Google is also launching a feature that helps users find similar clothing pieces in different colors, patterns or styles, from merchants across the web, using a visual matching algorithm powered by AI.

    These efforts are part of Google’s bigger push to defend its search engine from the threat posed by a wave of new AI-powered tools in the wake of the viral success of ChatGPT. At the Google I/O developer conference last month, the company spent more than 90 minutes teasing a long list of AI announcements, including expanding access to its existing chatbot Bard and bringing new AI capabilities to Google Search.

    Google said it developed the virtual try-on option using many pairs of images of more than 80 models standing forward and sideways, from sizes XS to XL, and with varying skin tones, body shapes and ethnic backgrounds. The AI-powered tool then learned to match the shape of certain shirts in those positions to generate realistic images of the person from all angles.

    The feature will initially work with women’s tops from brands such as Anthropology, Loft, H&M and Everlane. Google said it will expand to men’s shirts in the future. Google also said the tool will get more precise over time.

    Google isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. And eBay recently rolled out an AI tool to help sellers generate product listing descriptions.

    [ad_2]

    Source link

  • Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business

    Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business

    [ad_1]


    Washington
    CNN
     — 

    Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

    The list of more than 8,000 authors includes some of the world’s most celebrated writers, including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, James Patterson, Jodi Picoult and Philip Pullman, among others.

    In an open letter they signed, posted by the Authors Guild Tuesday, the writers accused AI companies of unfairly profiting from their work.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter said. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    Tuesday’s letter was addressed to the CEOs of ChatGPT-maker OpenAI, Facebook-parent Meta, Google, Stability AI, IBM and Microsoft. Most of the companies didn’t immediately respond to a request for comment. Meta, Microsoft and Stability AI declined to comment.

    Much of the tech industry is now working to develop AI tools that can generate compelling images and written work in response to user prompts. These tools are built on large language models, which are trained on vast troves of information online. But recently, there has been growing pressure on tech companies over alleged intellectual property violations with this training process.

    This month, comedian Sarah Silverman and two authors filed a copyright lawsuit against OpenAI and Meta, while a proposed class-action suit accused Google of “stealing everything ever created and shared on the internet by hundreds of millions of Americans,” including copyrighted content. Google has called the lawsuit “baseless,” saying it has been upfront for years that it uses public data to train its algorithms. OpenAI did not previously respond to a request for comment on the suit.

    In addition to demanding compensation “for the past and ongoing use of our works in your generative AI programs,” the thousands of authors who signed the letter this week called on AI companies to seek permission before using the copyrighted material. They also urged the companies to pay writers when their work is featured in the results of generative AI, “whether or not the outputs are infringing under current law.”

    The letter also cites this year’s Supreme Court holding in Warhol v Goldsmith, which found that the late artist Andy Warhol infringed on a photographer’s copyright when he created a series of silk screens based on a photograph of the late singer Prince. The court ruled that Warhol did not sufficiently “transform” the underlying photograph so as to avoid copyright infringement.

    “The high commerciality of your use argues against fair use,” the authors wrote to the AI companies.

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needs to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.

    – CNN’s Catherine Thorbecke contributed to this report.

    [ad_2]

    Source link

  • 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business

    300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business

    [ad_1]


    Hong Kong
    CNN
     — 

    As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

    They predicted in a report Sunday that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

    That’s partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the “little effect” seen on physically demanding or outdoor occupations, such as construction and repair work.

    In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates.

    If generative artificial intelligence “delivers on its promised capabilities, the labor market could face significant disruption,” the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

    ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

    This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

    While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity — and boost global GDP by 7% annually over a 10-year period, according to Goldman Sachs.

    “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” the economists added.

    “Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.”

    Of US workers expected to be affected, for instance, 25% to 50% of their workload “can be replaced,” the researchers added.

    “The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.”

    — CNN’s Nicole Goodkind contributed to this report.

    [ad_2]

    Source link

  • TikTok is testing a new option to create AI-generated avatars for profile pictures | CNN Business

    TikTok is testing a new option to create AI-generated avatars for profile pictures | CNN Business

    [ad_1]


    New York
    CNN
     — 

    TikTok is testing a new option to let users create AI-generated avatars for their profile pictures, the company confirmed to CNN on Wednesday, in a move with the potential to put recent advances in artificial intelligence technology front and center for millions of users.

    The new feature appears to create a stylized, illustrated image of the user based on an uploaded picture, according to a post from social media consultant Matt Navarra, who was first to spot the option.

    The feature is still in the early stages of testing and not widely available to TikTok users, according to the company, and there is currently no timeline for when the feature might roll out.

    “We’re always thinking about new ways to add value to the community and enrich the TikTok experience, as we continue to build a safe place that entertains, inspires creativity, and drives culture,” a TikTok spokesperson said in a statement provided to CNN. “In a few select regions, we’re experimenting with a new way to create and share profile pictures with the TikTok community.”

    AI-generated images have taken over the internet in recent months, but some tools have also raised concerns among privacy experts, digital artists, and users who have noticed the potential to sexualize images, make skin paler and make bodies thinner.

    [ad_2]

    Source link

  • Adobe is adding an AI-powered image generator to Photoshop | CNN Business

    Adobe is adding an AI-powered image generator to Photoshop | CNN Business

    [ad_1]


    New York
    CNN
     — 

    Photoshop is about to look a little different.

    Adobe on Tuesday said it’s incorporating an AI-powered image generator into Photoshop, with the goal of “dramatically accelerating” how users edit their photos.

    The tool, called Firefly, allows users to add or delete elements from images with just a text prompt, according to Adobe. It can also match the lighting and style of the existing images automatically, the company said.

    It’s currently available in a new Photoshop beta app. The company plans to roll the product out to all Photoshop customers by the end of the year.

    Adobe’s move comes after a recent crop of AI tools have launched that can generate compelling written work and images in response to user prompts, with the potential to change how people work, create and communicate with each other.

    “[N]ow that we are entering a new era of AI, the advent of generative models presents a new opportunity to take our imaging capabilities to another level,” Pam Clark, vice president of Photoshop product management and product strategy, wrote in a blog post. “Over the last few months, we have integrated this exciting new technology into Photoshop in a major step toward a more natural, intuitive, and fun way to work.”

    Firefly was launched in March at the Adobe Summit as a web-only beta. It was trained on Adobe’s own collection of stock images, as well as publicly available assets. Adobe has called the tool one of its most successful beta launches ever, with more than 70 million images created in the first month.

    By relying on its own image collection and media available for public use, Adobe may be able to avoid the backlash that some other AI image generator tools have faced for using a vast trove of online content as training.

    In January, Getty Images sued Stability AI, the company behind popular AI art tool Stable Diffusion, alleging the tech company committed copyright infringement. Getty said Stability AI copied and processed millions of its images without obtaining the proper licensing.

    Stability filed a motion earlier this month to dismiss the suit.

    [ad_2]

    Source link

  • Everything you need to know about AI but were too afraid to ask | CNN Business

    Everything you need to know about AI but were too afraid to ask | CNN Business

    [ad_1]



    CNN
     — 

    Business executives keep talking about it. Teachers are struggling with what to do about it. And artists like Drake seem angry about it.

    Love it or hate it, everyone is paying attention to artificial intelligence right now. Almost overnight, a new crop of AI tools has found its way into products used by billions of people, changing the way we work, shop, create and communicate with each other.

    AI advocates tout the technology’s potential to supercharge our productivity, creating a new era of better jobs, better education and better treatments for diseases. AI skeptics have raised concerns about the technology’s potential to disrupt jobs, mislead people and possibly bring about the end of humanity as we know it. Confusingly, some execs in Silicon Valley seem to hold both sets of views at once.

    What’s clear, however, is that AI is not going away, but it is changing very fast. Here’s everything you need to know to keep up.

    In the public consciousness, “artificial intelligence” may conjure up images of murderous machines eager to overtake humans, and capable of doing so. But in the tech industry, it’s a broad term that refers to different tools that are trained to perform a wide range of complex tasks that might previously have required some input from an actual person.

    If you use the internet, then you almost certainly use services that rely on AI to sort data, filter content and make suggestions, among other tasks.

    It’s the technology that allows Netflix to recommend movies and that helps remove spam, hate speech and other inappropriate content from your social media feeds. It helps power everything from autocorrect features and Google Translate to facial recognition services, the last of which uses AI that, in Microsoft’s words, “mimics a human capability to recognize human faces.”

    AI can also be successful in developing techniques for solving a wide range of real world problems, such as adjusting traffic signals in real time to manage congestion issues or helping medical professionals analyze images to make a diagnosis. AI is also central to developing self-driving cars by processing tremendous amounts of visual data so the vehicles can understand their surroundings.

    The short answer: ChatGPT.

    For years, AI has largely operated in the background of services we use every day. That changed following the November launch of ChatGPT, a viral chatbot that put the power of AI front and center.

    People have already used ChatGPT, a tool created by OpenAI, to draft lawsuits, write song lyrics and create research paper abstracts so good they’ve even fooled some scientists. The tool has even passed standardized exams. And ChatGPT has sparked an intense competition among tech companies to develop and deploy similar tools.

    Microsoft and Google have each introduced features powered by generative AI, the technology underpinning ChatGPT, into their most widely used productivity tools. Meta, Amazon and Alibaba have said they’re working on generative AI tools, too. And numerous other businesses also want in on the action.

    It’s rare to see a cutting-edge technology become so ubiquitous almost overnight. Now businesses, educators and lawmakers are all racing to adapt.

    Generative AI enables tools to create written work, images and even audio in response to prompts from users.

    To get those responses, several Big Tech companies have developed their own large language models trained on vast amounts of online data. The scope and purpose of these data sets can vary. For example, the version of ChatGPT that went public last year was only trained on data up until 2021 (it’s now more up to date).

    These models work through a method called deep learning, which learns patterns and relationships between words, so it can make predictive responses and generate relevant outputs to user prompts.

    As impressive as some generative AI services may seem, they essentially just do pattern matching. These tools can mimic the writing of others or make predictions about what words might be relevant in their responses based on all the data they’ve previously been trained on.

    AGI, on the other hand, promises something more ambitious — and scary.

    AGI — short for artificial general intelligence — refers to technology that can perform intelligent tasks such as learning, reasoning and adapting to new situations in the way that humans do. OpenAI CEO Sam Altman has teased the possibility of a superintelligent AGI that could go on to change the world or perhaps backfire and end humanity.

    For the moment, however, AGI remains purely a hypothetical, so don’t worry too much about it.

    Anytime there’s an excess of buzz around a technology, it’s good to be skeptical — and there is certainly a lot of that here. Investor fascination with AI has helped push Wall Street back into a bull market, despite lingering economic uncertainty.

    Not all AI tools are equally useful and many companies will certainly tout AI features and strategies simply to tap into the current hype cycle. But even in just the past six months, AI has already shown potential to change how people do numerous everyday tasks.

    One of the biggest selling points around AI chatbots, for example, is their ability to make people more productive. Earlier this year, some real estate agents told CNN that ChatGPT saved them hours of work not only by writing listings for homes for sale but also looking up the permitted uses for certain land and calculating what mortgage payments or the return on investment might be for a client, which typically involve formulas and mortgage calculators.

    Artificial intelligence is also much broader than ChatGPT and other generative AI tools. Even if you think AI chatbots are annoying or might be a fad, the underlying technology will continue to power meaningful advances in products and services for years to come.

    The fear is AI will eliminate millions of jobs. The hope is it will help improve how millions do their jobs. The current reality is somewhere in between.

    Companies will likely need new workers to help them implement and manage AI tools. Employment of data analysts and scientists, machine learning specialists and cybersecurity experts is forecast to grow 30% on average by 2027, according to one recent estimate from the World Economic Forum.

    But the proliferation of AI will also likely put many roles at risk eventually. There could be 26 million fewer record-keeping and administrative jobs by 2027, the WEF predicted. Data entry clerks and executive secretaries are expected to see the steepest losses.

    For now, there are clearly limits to how well AI can do the job of a human on its own. When CNET, a media outlet, experimented with using AI to write articles, it came under scrutiny for publishing pieces with factual errors. Likewise, a lawyer in May made headlines for citing false court cases to a judge provided to him by ChatGPT. In an affidavit, the lawyer said he had never used ChatGPT as a legal research tool before and “was unaware of the possibility that its content could be false.”

    Alphabet CEO Sundar Pichai, left, and OpenAI CEO Sam Altman arrive to the White House for a meeting with Vice President Kamala Harris on artificial intelligence, Thursday, May 4, 2023, in Washington.

    Top AI executives have warned that AI could potentially bring about human extinction. But these same executives are also racing to deploy the technology into their products.

    Some experts say that focusing on far-off doomsday scenarios may distract from the more immediate harms that AI can cause, such as spreading misinformation, perpetuating biases that exist in training data, and enabling discrimination.

    For example, generative AI could be used to create deepfakes to spread propaganda during an election or enable a frightening new era of scams. Some AI models have also been criticized for what the industry calls “hallucinations,” or making up information.

    Even before the rise of ChatGPT, there were concerns about AI acting as a gatekeeper that can determine who does and does not move forward in a hiring process, for example. AI-powered facial recognition systems have also resulted in some wrongful arrests, and research has shown these systems are drastically more prone to error when trying to match the faces of darker skinned people.

    The more AI tools are incorporated into core parts of society, the more potential there is for unintended consequences.

    Regulators in the United States and Europe are pushing for legislation to help put guardrails in place for AI, which could ultimately impact how the technology develops. But it’s unclear if lawmakers can keep pace with the rapid advances in AI.

    Experts believe in the months ahead, generative AI will go on to create even more realistic images, videos, and audio that could further disrupt media, entertainment, tech and other industries. The technology will likely become increasingly conversational and personalized.

    In March, OpenAI unveiled GPT-4, the next-generation version of the technology that powers ChatGPT. According to the company and early tests, GPT-4 is able to provide more detailed and accurate written responses, pass academic tests with high marks and build a working website from a hand-drawn sketch. (Altman has previously said OpenAI is not yet training GPT-5.)

    AI will almost certainly be infused into many more products and services in the coming months. That means we’ll all have to learn how to live with it.

    As ChatGPT put it in response to a prompt from CNN, “AI has the potential to transform our lives … but it’s crucial for companies and individuals to be mindful of the accompanying risks and responsibly address concerns.”

    [ad_2]

    Source link

  • OpenAI’s head of trust and safety is stepping down | CNN Business

    OpenAI’s head of trust and safety is stepping down | CNN Business

    [ad_1]


    New York
    CNN
     — 

    OpenAI’s head of trust and safety announced on Thursday plans to step down from the job.

    Dave Willner, who has led the artificial intelligence firm’s trust and safety team since February 2022, said in a LinkedIn post that he is “leaving OpenAI as an employee and transitioning into an advisory role” to spend more time with his family.

    Willner’s exit comes at a crucial moment for OpenAI. Since the viral success of the company’s AI chatbot ChatGPT late last year, OpenAI has faced growing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society.

    OpenAI CEO Sam Altman called for AI regulation during a Senate panel hearing in March. He told lawmakers that the potential for AI to be used to manipulate voters and target disinformation are among “my areas of greatest concern,” especially because “we’re going to face an election next year and these models are getting better.”

    In his Thursday post, Willner — whose resume includes stops at Facebook and Airbnb — noted that “OpenAI is going through a high-intensity phase in its development” and that his role had “grown dramatically in its scope and scale since I first joined.”

    A statement from OpenAI about Willner’s exit said that “his work has been foundational in operationalizing our commitment to the safe and responsible use of our technology, and has paved the way for future progress in this field.” OpenAI’s Chief Technology Officer Mira Murati will become the trust and safety team’s interim manager and Willner will advise the team through the end of this year, according to the company.

    “We are seeking a technically-skilled lead to advance our mission, focusing on the design, development, and implementation of systems that ensure the safe use and scalable growth of our technology,” the company said in the statement.

    Willner’s exit comes as OpenAI continues to work with regulators in the United States and elsewhere to develop guardrails around fast-advancing AI technology. OpenAI was among seven leading AI companies that on Friday made voluntary commitments agreed to by the White House meant to make AI systems and products safer and more trustworthy. As part of the pledge, the companies agreed to put new AI systems through outside testing before they are publicly released, and to clearly label AI-generated content, the White House announced.

    [ad_2]

    Source link

  • Welcome to the era of viral AI generated ‘news’ images | CNN Business

    Welcome to the era of viral AI generated ‘news’ images | CNN Business

    [ad_1]


    New York
    CNN
     — 

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthethic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a less than year-old platform that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    Midjourney has emerged as a popular tool for users to create AI-generated images.

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI-generated technologies grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reigning in some of the more radically open models.”

    [ad_2]

    Source link

  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business

    AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business

    [ad_1]


    New York
    CNN
     — 

    Geoffrey Hinton, who has been called the ‘Godfather of AI,’ confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk,came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.

    [ad_2]

    Source link

  • Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business

    Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business

    [ad_1]



    CNN
     — 

    For years, Alexa has been synonymous with virtual assistants that can interact with users and do tasks on their behalf.

    Now Amazon is trying to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products.

    Amazon’s goal is to use AI “to create this great personal assistant,” said Dave Limp, senior VP of devices and services, in a recent interview with CNN. “We’ve been using all forms of AI for a long time, but now that we see this emergence of generative AI, we can accelerate that vision even faster.”

    Generative AI refers to a type of AI that can create new content, such as text and images, in response to user prompts. Limp did not elaborate on how generative AI could be used in Alexa products, but there are clear possibilities.

    In theory, this technology could one day help Alexa have more natural conversations with users, answer more complex questions, and be more creative by telling stories or making up song lyrics in seconds. It could also enable more personalized interactions, allowing the assistant to learn about the device owner’s interests, preferences and better tailor its responses to each person.

    “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Alexa launched nearly a decade ago and, along with Siri, Cortana and other voice assistants, seemed poised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished that faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon is now slashing staff and shelving products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division has not escaped unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees as the global economic outlook continued to worsen. In March, the company said about 9,000 more jobs would be impacted. Limp said his division lost about 2,000 people, about half of which were from the Alexa team.

    Amazon also shut down some of the products it spun up earlier in the pandemic, such as its wearable fitness brand Halo, which allowed users to ask Alexa questions about their health and wellness. Limp said the company also shelved some “more risky” projects. “I wouldn’t doubt we’ll dust them off at some point and bring them back,” he said. “We’re still taking a lot of risks in this organization.”

    But Limp said Alexa remains a “North Star” for his division. “To give you a sense, there’s still thousands and thousands of people working on Alexa,” he said.

    Amazon is indeed still investing in Alexa and its related Echo smart speaker lineup. Last week, the company unveiled several new products, including the $39.99 Echo Pop and the $89.99 Echo Show 5, its smart speaker with a screen. While the products feature incremental updates, Limp said Amazon’s current lineup contains hints of what’s to come with its AI efforts, beyond generative AI.

    For example, if Alexa is enabled on an Echo Show, where it can rotate and follow users around the room, “you’ll see glimmers of where it’s going over the next months and years,” Limp said.

    But generative AI remains a key focus for the company. Amazon CEO Andy Jassy said in a letter to shareholders in April that the company is focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.”

    The company is reportedly working on adding ChatGPT-like search capabilities for its e-commerce store. Amazon is also rumored to be planning to use generative AI to bring conversational language to a home robot.

    While Limp didn’t comment on the report, he said the end goal has long been for Alexa to communicate with users in a fluid, natural way, whether it’s through an Echo device or other products such as its robotic dog, Astro.

    The concept remains a “hard technical challenge,” he said, but one that is “more tractable” with generative AI. “There’s still some hard corner cases and things to work out,” he said.

    [ad_2]

    Source link

  • Schumer outlines plan for how Senate will regulate AI | CNN Business

    Schumer outlines plan for how Senate will regulate AI | CNN Business

    [ad_1]



    CNN
     — 

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

    The plan, Schumer said at a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright and to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer’s remarks were restrained in calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge that recommends against major government intervention on the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.

    [ad_2]

    Source link

  • Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics

    Leading AI companies commit to outside testing of AI systems and other safety commitments | CNN Politics

    [ad_1]



    CNN
     — 

    Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.

    The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies – which also include Amazon, Meta, OpenAI, Anthropic and Inflection – aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.

    In a speech Friday, Biden called the companies commitments “real and concrete,” adding they will help fulfill their “fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

    “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation,” Biden said.

    White House officials acknowledge that some of the companies have already enacted some of the commitments but argue they will as a whole raise “the standards for safety, security and trust of AI” and will serve as a “bridge to regulation.”

    “It’s a first step, it’s a bridge to where we need to go,” White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. “It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we’ve seen before.”

    While most of the companies already conduct internal “red-teaming” exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology – such as a cyberattack or its potential to be used by malicious actors – and allows companies to proactively identify shortcomings and prevent negative outcomes.

    Reed said the external red-teaming “will help pave the way for government oversight and regulation,” potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.

    The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.

    The companies also committed to investing in cybersecurity and “insider threat safeguards,” in particular to protect AI model weights, which are essentially the knowledge base upon which AI systems rely; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems “to help address society’s greatest challenges,” according to the White House.

    Asked by CNN’s Jake Tapper Friday about worries he has when it comes to AI, Microsoft Vice Chair and President Brad Smith pointed to “what people, bad actors, individuals or countries will do” with the technology.

    “That they’ll use it to undermine our elections, that they will use it to seek to break in to our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.

    But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them. And the interesting thing about AI, in my opinion, is that when we do that, and we are determined to do that, we can use AI to defend against these problems far more effectively than we can today.”

    Pressed by Tapper about AI and compensation concerns listed in a recent letter signed by thousands of authors, Smith said: “I don’t want it to undermine anybody’s ability to make a living by creating, by writing. That is the balance that we should all want to strike.”

    All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.

    Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

    “If we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service,” Common Sense Media CEO James Steyer said in a statement. “And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families.”

    The federal government’s failure to regulate social media companies at their inception – and the resistance from those companies – has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.

    “The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible,” Reed said. “The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it’s important for this bridge to regulation to be a sturdy one.”

    The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.

    White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.

    “This is a serious responsibility. We have to get it right. There’s an enormous, enormous potential upside as well,” Biden said.

    In the meantime, White House officials say the companies will “immediately” begin implementing the voluntary commitments and hope other companies sign on in the future.

    “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose – and we would welcome them choosing – joining these commitments,” a White House official said.

    This story has been updated with additional details.

    [ad_2]

    Source link

  • Italy blocks ChatGPT over privacy concerns | CNN Business

    Italy blocks ChatGPT over privacy concerns | CNN Business

    [ad_1]


    London
    CNN
     — 

    Regulators in Italy issued a temporary ban on ChatGPT Friday, effective immediately, due to privacy concerns and said they had opened an investigation into how OpenAI, the US company behind the popular chatbot, uses data.

    Italy’s data protection agency said users lacked information about the collection of their data and that a breach at ChatGPT had been reported on March 20.

    “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the agency said.

    The Italian regulator also expressed concerns over the lack of age verification for ChatGPT users. It argued that this “exposes children to receiving responses that are absolutely inappropriate to their age and awareness.” The platform is supposed to be for users older than 13, it noted.

    The data protection agency said OpenAI would be barred from processing the data of Italian users until it “respects the privacy regulation.”

    OpenAI has been given 20 days to communicate the measures it will take to comply with Italy’s data rules. Otherwise, it could face a penalty of up to €20 million ($21.8 million), or up to 4% of its annual global turnover.

    Since its public release four months ago, ChatGPT has become a global phenomenon, amassing millions of users impressed with its ability to craft convincing written content, including academic essays, business plans and short stories.

    But concerns have also emerged about its rapid spread and what large-scale uptake of such tools could mean for society, putting pressure on regulators around the world to act.

    The European Union is finalizing rules on the use of artificial intelligence in the bloc. In the meantime, EU companies must comply with the General Data Protection Regulation, or GDPR, as well as the Digital Services Act and Digital Markets Act, which apply to tech platforms.

    Meanwhile, so-called “generative AI” tools available to the public are proliferating.

    Earlier this month, OpenAI released GPT-4, a new version of the technology underpinning ChatGPT that is even more powerful. The company said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%.

    This week, some of the biggest names in tech, including Elon Musk, called for AI labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    — Julia Horowitz contributed reporting.

    [ad_2]

    Source link

  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business

    Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business

    [ad_1]



    CNN
     — 

    Less than a few hours after Snapchat rolled out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after he said it lied about not knowing where the user was located. After the user lightened the conversation, he said the chatbot accurately revealed he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket, bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenager could interact with Snapchat’s tool. There’s also concern around chatbots giving advice and about mental health because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teens’ sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for future with advanced technologies, said parents need to make it very clear “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation that would require companies to abide by specific protocols is also needed to keep up the rapid pace of AI advancement.

    [ad_2]

    Source link

  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business

    First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business

    [ad_1]



    CNN
     — 

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.

    [ad_2]

    Source link

  • ‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree | CNN Business

    ‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree | CNN Business

    [ad_1]



    CNN
     — 

    A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.

    Clarke, an editor and publisher, said he recently had to temporarily shutter the online submission form for his science fiction and fantasy magazine, Clarkesworld, after his team was inundated with a deluge of “consistently bad” AI-generated submissions.

    “They’re some of the worst stories we’ve seen, actually,” Clarke said of the hundreds of pieces of AI-produced content he and his team of humans now must manually parse through. “But it’s more of the problem of volume, not quality. The quantity is burying us.”

    “It almost doubled our workload,” he added, describing the latest AI tools as “a thorn in our side for the last few months.” Clarke said that he anticipates his team is going to have to close submissions again. “It’s going to reach a point where we can’t handle it.”

    Since ChatGPT launched late last year, many of the tech world’s most prominent figures have waxed poetic about how AI has the potential to boost productivity, help us all work less and create new and better jobs in the future. “In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently,” Microsoft co-founder Bill Gates said in a blog post recently.

    But as is often the case with tech, the long-term impact isn’t always clear or the same across industries and markets. Moreover, the road to a techno-utopia is often bumpy and plagued with unintended consequences, whether it’s lawyers fined for submitting fake court citations from ChatGPT or a small publication buried under an avalanche of computer-generated submissions.

    Big Tech companies are now rushing to jump on the AI bandwagon, pledging significant investments into new AI-powered tools that promise to streamline work. These tools can help people quickly draft emails, make presentations and summarize large datasets or texts.

    In a recent study, researchers at the Massachusetts Institute of Technology found that access to ChatGPT increased productivity for workers who were assigned tasks like writing cover letters, “delicate” emails and cost-benefit analyses. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust,” Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper, said in a statement.

    Mathias Cormann, the secretary-general of the Organization for Economic Co-operation and Development recently said the intergovernmental organization has found that AI can improve some aspects of job quality, but there are tradeoffs.

    “Workers do report, though, that the intensity of their work has increased after the adoption of AI in their workplaces,” Cormann said in public remarks, pointing to the findings of a report released by the organization. The report also found that for non-AI specialists and non-managers, the use of AI had only a “minimal impact on wages so far” – meaning that for the average employee, the work is scaling up, but the pay isn’t.

    Ivana Saula, the research director for the International Association of Machinists and Aerospace Workers, said that workers in her union have said they feel like “guinea pigs” as employers rush to roll out AI-powered tools on the job.

    And it hasn’t always gone smoothly, Saula said. The implementation of these new tech tools has often led to more “residual tasks that a human still needs to do.” This can include picking up additional logistics tasks that a machine simply can’t do, Saula said, adding more time and pressure to a daily work flow.

    The union represents a broad range of workers, including in air transportation, health care, public service, manufacturing and the nuclear industry, Saula said.

    “It’s never just clean cut, where the machine can entirely replace the human,” Saula told CNN. “It can replace certain aspects of what a worker does, but there’s some tasks that are outstanding that get placed on whoever remains.”

    Workers are also “saying that my workload is heavier” after the implementation of new AI tools, Saula said, and “the intensity at which I work is much faster because now it’s being set by the machine.” She added that the feedback they are getting from workers shows how important it is to “actually involve workers in the process of implementation.”

    “Because there’s knowledge on the ground, on the frontlines, that employers need to be aware of,” she said. “And oftentimes, I think there’s disconnects between frontline workers and what happens on shop floors, and upper management, and not to mention CEOs.”

    Perhaps nowhere are the pros and cons of AI for businesses as apparent as in the media industry. These tools offer the promise of accelerating if not automating copywriting, advertising and certain editorial work, but there have already been some notable blunders.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on Star Wars published by Gizmodo earlier this month similarly required a correction and resulted in employee turmoil. But both outlets have signaled they will still move forward with using the technology to assist in newsrooms.

    Others like Clarke, the publisher, have tried to combat the fallout from the rise of AI by relying on more AI. Clarke said he and his team turned to AI-powered detectors of AI-generated work to deal with the deluge of submissions but found these tools weren’t helpful because of how unreliably they flag “false positives and false negatives,” especially for writers whose second language is English.

    “You listen to these AI experts, they go on about how these things are going to do amazing breakthroughs in different fields,” Clarke said. “But those aren’t the fields they’re currently working in.”

    [ad_2]

    Source link