ReportWire

Tag: openai

  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business

    CNN —

    Within hours of Snapchat rolling out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after the chatbot, he said, lied about not knowing his location. After he lightened the conversation, he said, it accurately revealed that he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenagers could interact with Snapchat’s tool. There is also concern about chatbots dispensing mental health advice, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear that “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation requiring companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.


  • Amazon looks to adapt Alexa to the rise of ChatGPT | CNN Business

    CNN —

    For years, Alexa has been synonymous with virtual assistants that can interact with users and do tasks on their behalf.

    Now Amazon is trying to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products.

    Amazon’s goal is to use AI “to create this great personal assistant,” said Dave Limp, senior VP of devices and services, in a recent interview with CNN. “We’ve been using all forms of AI for a long time, but now that we see this emergence of generative AI, we can accelerate that vision even faster.”

    Generative AI refers to a type of AI that can create new content, such as text and images, in response to user prompts. Limp did not elaborate on how generative AI could be used in Alexa products, but there are clear possibilities.

    In theory, this technology could one day help Alexa hold more natural conversations with users, answer more complex questions and be more creative, telling stories or making up song lyrics in seconds. It could also enable more personalized interactions, allowing the assistant to learn a device owner’s interests and preferences and better tailor its responses to each person.

    “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Alexa launched nearly a decade ago and, along with Siri, Cortana and other voice assistants, seemed poised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished that faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon is now slashing staff and shelving products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division has not escaped unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees as the global economic outlook continued to worsen. In March, the company said about 9,000 more jobs would be affected. Limp said his division lost about 2,000 people, about half of whom were from the Alexa team.

    Amazon also shut down some of the products it spun up earlier in the pandemic, such as its wearable fitness brand Halo, which allowed users to ask Alexa questions about their health and wellness. Limp said the company also shelved some “more risky” projects. “I wouldn’t doubt we’ll dust them off at some point and bring them back,” he said. “We’re still taking a lot of risks in this organization.”

    But Limp said Alexa remains a “North Star” for his division. “To give you a sense, there’s still thousands and thousands of people working on Alexa,” he said.

    Amazon is indeed still investing in Alexa and its related Echo smart speaker lineup. Last week, the company unveiled several new products, including the $39.99 Echo Pop and the $89.99 Echo Show 5, its smart speaker with a screen. While the products feature incremental updates, Limp said Amazon’s current lineup contains hints of what’s to come with its AI efforts, beyond generative AI.

    For example, with Alexa on an Echo Show whose screen can rotate and follow users around the room, “you’ll see glimmers of where it’s going over the next months and years,” Limp said.

    But generative AI remains a key focus for the company. Amazon CEO Andy Jassy said in a letter to shareholders in April that the company is focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.”

    The company is reportedly working on adding ChatGPT-like search capabilities for its e-commerce store. Amazon is also rumored to be planning to use generative AI to bring conversational language to a home robot.

    While Limp didn’t comment on the report, he said the end goal has long been for Alexa to communicate with users in a fluid, natural way, whether through an Echo device or other products such as its home robot, Astro.

    The concept remains a “hard technical challenge,” he said, but one that is “more tractable” with generative AI. “There’s still some hard corner cases and things to work out,” he said.


  • OpenAI’s Sam Altman launches Worldcoin crypto project | CNN Business

    Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.

    The project’s core offering is its World ID, which the company describes as a “digital passport” to prove that its holder is a real human, not an AI bot. To get a World ID, a customer signs up to do an in-person iris scan using Worldcoin’s ‘orb’, a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies the person is a real human, it creates a World ID.

    The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

    The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

    WLD’s price rose in early trading on Monday. On the world’s largest exchange, Binance, it hit a peak of $5.29 and at 1000 GMT was at $2.49 from a starting price of $0.15, having seen $25.1 million of trading volume, according to Binance’s website.

    Blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

    The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.
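
    Neither the project’s launch materials nor this report spells out the cryptography involved, but the general shape of a privacy-preserving identifier can be sketched with a salted one-way hash: the registry stores only an opaque digest, never the raw scan, and a second enrollment of the same iris collides with the first. Below is a minimal illustration in Python; every name and parameter is hypothetical, and a real proof-of-personhood scheme would need fuzzy matching and zero-knowledge proofs rather than a plain hash, since two scans of the same iris are never bit-identical.

        import hashlib
        import secrets

        def derive_world_id(iris_template: bytes, system_salt: bytes) -> str:
            # One-way digest: the registry never needs the biometric itself.
            digest = hashlib.sha256(system_salt + iris_template).hexdigest()
            return "wid_" + digest[:32]  # "wid_" prefix invented for this example

        registry: set[str] = set()  # stores digests only, never iris data

        def enroll(iris_template: bytes, system_salt: bytes) -> str | None:
            world_id = derive_world_id(iris_template, system_salt)
            if world_id in registry:  # same iris already enrolled: reject duplicate
                return None
            registry.add(world_id)
            return world_id

        # The salt is system-wide so the same iris always maps to the same ID;
        # a per-user salt would defeat duplicate detection.
        SYSTEM_SALT = b"example-system-salt"
        print(enroll(secrets.token_bytes(64), SYSTEM_SALT))  # a new "person"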

    Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

    “People will be supercharged by AI, which will have massive economic implications,” he said.

    One example Altman likes is universal basic income, or UBI, a social benefits program, usually run by governments, in which every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help combat income inequality. And since only real people can have World IDs, the system could be used to reduce fraud when deploying UBI.

    Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

    “We think that we need to start experimenting with things so we can figure out what to do,” he said.


  • The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims | CNN Business

    Washington CNN —

    An AI policy think tank wants the US government to investigate OpenAI and its wildly popular GPT artificial intelligence product, claiming that algorithmic bias, privacy concerns and the technology’s tendency to produce sometimes inaccurate results may violate federal consumer protection law.

    The Federal Trade Commission should prohibit OpenAI from releasing future versions of GPT, the Center for AI and Digital Policy (CAIDP) said Thursday in an agency complaint, and establish new regulations for the rapidly growing AI sector.

    The complaint seeks to bring the full force of the FTC’s broad consumer protection powers to bear against what CAIDP portrayed as a Wild West of runaway experimentation in which consumers pay for the unintended consequences of AI development. And it could prove to be an early test of the US government’s appetite for directly regulating AI, as tech-skeptic officials such as FTC Chair Lina Khan have warned of the dangers of unchecked data use for commercial purposes and of novel ways that tech companies may try to entrench monopolies.

    The FTC declined to comment. OpenAI didn’t immediately respond to a request for comment.

    “We believe that the FTC should look closely at OpenAI and GPT-4,” said Marc Rotenberg, CAIDP’s president and a longtime consumer protection advocate on technology issues.

    The complaint attacks a range of risks associated with generative artificial intelligence, which has captured the world’s attention after OpenAI’s ChatGPT — powered by an earlier version of the GPT product — was first released to the public late last year. Everyday internet users have used ChatGPT to write poetry, create software and get answers to questions, all within seconds and with surprising sophistication. Microsoft and Google have both begun to integrate that same type of AI into their search products, with Microsoft’s Bing running on the GPT technology itself.

    But the race for dominance in a seemingly new field has also produced unsettling or simply flat-out incorrect results, such as confident claims that Feb. 12, 2023 came before Dec. 16, 2022. In industry parlance, these types of mistakes are known as “AI hallucinations” — and they should be considered legally enforceable violations, CAIDP argued in its complaint.

    “Many of the problems associated with GPT-4 are often described as ‘misinformation,’ ‘hallucinations,’ or ‘fabrications.’ But for the purpose of the FTC, these outputs should best be understood as ‘deception,’” the complaint said, referring to the FTC’s broad authority to prosecute unfair or deceptive business acts or practices.

    The complaint acknowledges that OpenAI has been upfront about many of the limitations of its algorithms. For example, the white paper linked to GPT’s latest release, GPT-4, explains that the model may “produce content that is nonsensical or untruthful in relation to certain sources.” OpenAI also makes similar disclosures about the possibility that tools like GPT can lead to broad-based discrimination against minorities or other vulnerable groups.

    But in addition to arguing that those outcomes themselves may be unfair or deceptive, CAIDP also alleges that OpenAI has violated the FTC’s AI guidelines by trying to offload responsibility for those risks onto its clients who use the technology.

    The complaint alleges that OpenAI’s terms require news publishers, banks, hospitals and other institutions that deploy GPT to include a disclaimer about the limitations of artificial intelligence. That does not insulate OpenAI from liability, according to the complaint.

    Citing a March FTC advisory on chatbots, CAIDP wrote: “Recently [the] FTC stated that ‘Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.’”

    Artificial intelligence also stands to have vast implications for consumer privacy and cybersecurity, said CAIDP, issues that sit squarely within the FTC’s jurisdiction but that the agency has not studied in connection with GPT’s inner workings.


  • Europe is leading the race to regulate AI. Here’s what you need to know | CNN Business

    London CNN —

    The European Union took a major step Wednesday toward setting rules — the first in the world — on how companies can use artificial intelligence.

    It’s a bold move that Brussels hopes will pave the way for global standards for a technology used in everything from chatbots such as OpenAI’s ChatGPT to surgical procedures and fraud detection at banks.

    “We have made history today,” Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists.

    Lawmakers have agreed a draft version of the Act, which will now be negotiated with the Council of the European Union and EU member states before becoming law.

    “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei added.

    Hundreds of top AI scientists and researchers warned last month that the technology posed an extinction risk to humanity, and several prominent figures — including Microsoft President Brad Smith and OpenAI CEO Sam Altman — have called for greater regulation of the technology.

    At the Yale CEO Summit this week, more than 40% of business leaders — including Walmart chief Doug McMillon and Coca-Cola CEO James Quincey — said AI had the potential to destroy humanity five to 10 years from now.

    Against that backdrop, the EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

    Here are the key takeaways.

    Once approved, the Act will apply to anyone who develops and deploys AI systems in the EU, including companies located outside the bloc.

    The extent of regulation depends on the risks created by a particular application, from minimal to “unacceptable.”

    Systems that fall into the latter category are banned outright. These include real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those used in China, which assign people a “social score” based on their behavior.

    The legislation also sets tight restrictions on “high-risk” AI applications, which are those that threaten “significant harm to people’s health, safety, fundamental rights or the environment.”

    These include systems used to influence voters in an election, as well as social media platforms with more than 45 million users that recommend content to their users — a list that would include Facebook, Twitter and Instagram.

    The Act also outlines transparency requirements for AI systems.

    For instance, systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content.

    Detailed summaries of the copyrighted data used to train these AI systems would also have to be published.

    AI systems with minimal or no risk, such as spam filters, fall largely outside of the rules.

    Most AI systems will likely fall into the high-risk or prohibited categories, leaving their owners exposed to potentially enormous fines if they fall foul of the regulations, according to Racheal Muldoon, a barrister (litigator) at London law firm Maitland Chambers.

    Engaging in prohibited AI practices could lead to a fine of up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.

    That goes much further than Europe’s signature data privacy law, the General Data Protection Regulation, under which Meta was hit with a €1.2 billion ($1.3 billion) fine last month. GDPR sets fines of up to €10 million ($10.8 million), or up to 2% of a firm’s global turnover.

    Fines under the AI Act serve as a “war cry from the legislators to say, ‘take this seriously’,” Muldoon said.

    At the same time, penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for start-ups.

    The Act also requires EU member states to establish at least one regulatory “sandbox” to test AI systems before they are deployed.

    “The one thing that we wanted to achieve with this text is balance,” Dragoș Tudorache, a member of the European Parliament, told journalists. The Act protects citizens while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he added.

    The Act gives citizens the right to file complaints against providers of AI systems and makes a provision for an EU AI Office to monitor enforcement of the legislation. It also requires member states to designate national supervisory authorities for AI.

    Microsoft — which, together with Google, is at the forefront of AI development globally — welcomed progress on the Act but said it looked forward to “further refinement.”

    “We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said in a statement.

    IBM, meanwhile, called on EU policymakers to take a “risk-based approach” and suggested four “key improvements” to the draft Act, including further clarity around high-risk AI “so that only truly high-risk use cases are captured.”

    The Act may not come into force until 2026, according to Muldoon, who said revisions were likely, given how rapidly AI was advancing. The legislation has already gone through several updates since drafting began in 2021.

    “The law will expand in scope as the technology develops,” Muldoon said.


  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business

    CNN —

    Less than six months after ChatGPT-creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals detect AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week, citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on the district’s networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to let users check whether an essay was written by a human or by AI. The feature, which worked on English-language text, was powered by a machine learning system that took an input and assigned it to one of several categories. When a user pasted a body of text, such as a school essay, into the tool, it returned one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
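
    OpenAI did not publish the classifier’s internals beyond that description, but the general shape, a binary text classifier whose probability gets bucketed into five verdicts, can be sketched with off-the-shelf tools. Here is a toy version in Python using scikit-learn; the training examples, thresholds and verdict wording are all invented for illustration, and a set this small could not detect anything in practice.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Tiny invented training set: 1 = AI-generated, 0 = human-written.
        texts = [
            "as an ai language model i cannot answer that question",
            "in conclusion, the aforementioned factors clearly demonstrate",
            "lol we left the movie early it was so bad",
            "my dog ate half my notes so this essay is from memory",
        ]
        labels = [1, 1, 0, 0]

        detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
        detector.fit(texts, labels)

        # Bucket the model's probability into five verdicts, mirroring the
        # "likely generated by AI" to "very unlikely" range described above.
        BANDS = [
            (0.90, "likely generated by AI"),
            (0.70, "possibly generated by AI"),
            (0.45, "unclear"),
            (0.20, "unlikely generated by AI"),
            (0.00, "very unlikely generated by AI"),
        ]

        def classify(essay: str) -> str:
            p = detector.predict_proba([essay])[0][1]  # P(class 1: AI-generated)
            return next(verdict for cutoff, verdict in BANDS if p >= cutoff)

        print(classify("as an ai language model i am happy to help with that"))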

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide another reference point, such as comparing past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies, such as Turnitin, have also rolled out AI plagiarism detection tools that could help teachers identify when assignments are written by AI. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool, called GPTZero.


  • 300 million jobs could be affected by latest wave of AI, says Goldman Sachs | CNN Business

    Hong Kong CNN —

    As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

    They predicted in a report Sunday that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

    That’s partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the “little effect” seen on physically demanding or outdoor occupations, such as construction and repair work.

    In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates.

    If generative artificial intelligence “delivers on its promised capabilities, the labor market could face significant disruption,” the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

    ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

    This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

    While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity — and boost annual global GDP by 7% over a 10-year period, according to Goldman Sachs.

    “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” the economists added.

    “Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.”

    For US workers who are expected to be affected, for instance, 25% to 50% of their workload “can be replaced,” the researchers added.

    “The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.”

    — CNN’s Nicole Goodkind contributed to this report.


  • White House unveils an AI plan ahead of meeting with tech CEOs | CNN Business

    CNN —

    The White House on Thursday announced a series of measures to address the challenges of artificial intelligence, driven by the sudden popularity of tools such as ChatGPT and amid rising concerns about the technology’s potential risks for discrimination, misinformation and privacy.

    The US government plans to introduce policies that shape how federal agencies procure and use AI systems, the White House said. The step could significantly influence the market for AI products and control how Americans interact with AI on government websites, at security checkpoints and in other settings.

    The National Science Foundation will also spend $140 million to promote research and development in AI, the White House added. The funds will be used to create research centers that seek to apply AI to issues such as climate change, agriculture and public health, according to the administration.

    The plan comes the same day that Vice President Kamala Harris and other administration officials are expected to meet with the CEOs of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic to emphasize the importance of ethical and responsible AI development. And it coincides with a UK government inquiry launched Thursday into the risks and benefits of AI.

    “Tech companies have a fundamental responsibility to make sure their products are safe and secure, and that they protect people’s rights before they’re deployed or made public,” a senior Biden administration official told reporters on a conference call.

    Officials cited a range of risks the public faces in the widespread adoption of AI tools, including the possible use of AI-created deepfakes and misinformation that could undermine the democratic process. Job losses linked to rising automation, biased algorithmic decision-making, physical dangers arising from autonomous vehicles and the threat of AI-powered malicious hackers are also on the White House’s list of concerns.

    It’s just the latest example of the federal government acknowledging concerns about the rapid development and deployment of new AI tools and trying to find ways to address some of the risks.

    Testifying before Congress, members of the Federal Trade Commission have argued AI could “turbocharge” fraud and scams. Its chair, Lina Khan, wrote in a New York Times op-ed this week that the US government has ample existing legal authority to regulate AI by leaning on its mandate to protect consumers and competition.

    Last year, the Biden administration unveiled a proposal for an AI Bill of Rights calling for developers to respect the principles of privacy, safety and equal rights as they create new AI tools.

    Earlier this year, the Commerce Department released voluntary risk management guidelines for AI that it said could help organizations and businesses “govern, map, measure and manage” the potential dangers in each part of the development cycle. In April, the Department also said it is seeking public input on the best policies for regulating AI, including through audits and industry self-regulation.

    The US government isn’t alone in seeking to shape AI development. European officials anticipate hammering out AI legislation as soon as this year that could have major implications for AI companies around the world.


  • Forget about the AI apocalypse. The real dangers are already here | CNN Business

    CNN —

    Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

    Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft’s CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

    The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to “civilization destruction.” But he still remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings by Microsoft and Google.

    Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

    “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely “genuinely worried about what they have unleashed,” he said, but others may be trying to focus attention on “abstract possibilities to detract from the more immediate possibilities.”

    Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: “We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly.”

    For Marcus, a self-described critic of AI hype, “the biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation.”

    Generative AI tools like OpenAI’s ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.

    In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and to target disinformation was among “my areas of greatest concern.”

    Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright “hallucinating” responses and potentially perpetuating racial and gender biases.

    [Photo caption: Gary Marcus, professor emeritus at New York University, right, listens as OpenAI CEO and co-founder Sam Altman speaks during a Senate Judiciary subcommittee hearing in Washington, DC, on May 16, 2023.]

    Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data, and also from concerning claims about how their systems are trained.

    Bender cited intellectual property concerns with some of the data these systems are trained on as well as allegations of companies outsourcing the work of going through some of the worst parts of the training data to low-paid workers abroad.

    “If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer,” Bender told CNN.

    Regulators may be the real intended audience for the tech industry’s doomsday messaging.

    As Bender puts it, execs are essentially saying: “‘This stuff is very, very dangerous, and we’re the only ones who understand how to rein it in.’”

    Judging from Altman’s appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers’ concerns about AI — a technology that many in Congress are still trying to understand — and offering suggestions for how to address it.

    This approach to regulation would be “hugely problematic,” Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

    “If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we’re really going to miss out,” Bender said.

    Bender said she tries, at every opportunity, to tell people “these things seem much smarter than they are.” As she put it, this is because “we are as smart as we are” and the way that we make sense of language, including responses from AI, “is actually by imagining a mind behind it.”

    Ultimately, Bender put forward a simple question for the tech industry on AI: “If they honestly believe that this could be bringing about human extinction, then why not just stop?”


  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business

    CNN —

    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.


  • Welcome to the era of viral AI generated ‘news’ images | CNN Business

    New York CNN —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    [Image caption: Midjourney has emerged as a popular tool for users to create AI-generated images.]

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI-generated technologies grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.
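
    To make the metadata idea concrete, the sketch below uses the Pillow imaging library in Python to write a provenance note into a PNG’s text chunks and read it back. The field names are invented, and real provenance standards, such as C2PA’s signed manifests, go much further, since plain metadata can be stripped by anyone who re-saves the image.

        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        image = Image.new("RGB", (256, 256), "white")  # stand-in for an AI output

        provenance = PngInfo()                       # container for PNG text chunks
        provenance.add_text("ai_generated", "true")  # hypothetical field names
        provenance.add_text("generator", "example-model-v1")
        image.save("labeled.png", pnginfo=provenance)

        # Any viewer that inspects metadata can now see the label.
        reopened = Image.open("labeled.png")
        print(reopened.text.get("ai_generated"))  # -> "true"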

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools; we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns that having a radically open approach, where you just open source a tool or have very minimal restrictions on its use, is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • Microsoft opens up its AI-powered Bing to all users | CNN Business

    CNN —

    Microsoft is rolling out the new AI-powered version of its Bing search engine to anyone who wants to use it.

    Nearly three months after the company debuted a limited preview version of its new Bing, powered by the viral AI chatbot ChatGPT, Microsoft is opening it up to all users without a waitlist – as long as they’re signed into the search engine via Microsoft’s Edge browser.

    The move highlights Microsoft’s commitment to move forward with the product even as the AI technology behind it has sparked concerns around inaccuracies and tone. In some cases, people who baited the new Bing were subject to some emotionally reactive and aggressive responses.

    “We’re getting better at speed, we’re getting better at accuracy … but we are on a never-ending quest to make things better and better,” Yusuf Mehdi, a VP at Microsoft overseeing its AI initiatives, told CNN on Wednesday.

    Bing now has more than 100 million daily active users, a significant uptick over the past few months, according to Mehdi. Google, which has long dominated the market, is also adding similar AI features to its search engine.

    In February, Microsoft showed off how its revamped search engine could write summaries of search results, chat with users to answer additional questions about a query and write emails or other compositions based on the results.

    At a press event in New York City on Wednesday, the company shared an early look at some updates, including the ability to ask questions with pictures, access chat history so the chatbot remembers its rapport with users, and export responses to Microsoft Word. Users can also personalize the tone and style of the chatbot’s responses, choosing anything from a lengthier, creative reply to something shorter and more to the point.

    The wave of attention in recent months around ChatGPT, developed by OpenAI with financial backing from Microsoft, helped renew an arms race among tech companies to deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    Beyond adding AI features to search, Microsoft has said it plans to bring ChatGPT technology to its core productivity tools, including Word, Excel and Outlook, with the potential to change the way we work. The decision to add generative AI features to Bing could be particularly risky, however, given how much people rely on search engines for accurate and reliable information.

    Microsoft’s moves also come amid heightened scrutiny on the rapid pace of advancement in AI technology. In March, some of the biggest names in tech, including Elon Musk and Apple co-founder Steve Wozniak, called for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Mehdi said he doesn’t believe the AI industry is moving too fast and suggested the calls for a pause aren’t particularly helpful.

    “Some people think we should pause development for six months but I’m not sure that fixes anything or improves or moves things along,” he said. “But I understand where it’s coming from concern wise.”

    He added: “The only way to really build this technology well is to do it out in the open in the public so we can have conversations about it.”
