ReportWire

Tag: iab-artificial intelligence

  • Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business





    CNN —

    Microsoft CEO Satya Nadella warned on Monday of a “nightmare” scenario for the internet if Google’s dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

    Nadella testified on Monday as part of the US government’s sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

    Taking the stand in a charcoal suit and tie, Nadella painted Google as a technology giant that has blocked off ways for consumers to access rival search engines. His testimony reflected the frustrations of a long-running rivalry between Microsoft and Google whose tensions have permeated the weeks-long trial. (Google didn’t immediately respond to a request for comment.)

    Central to Google’s strategy has been its agreements with companies such as Apple that have made Google the default search engine for millions of internet users.

    “You get up in the morning, you brush your teeth, you search on Google,” Nadella said.

    Nadella testified that every year he has been Microsoft’s CEO, he has unsuccessfully sought to persuade Apple to switch away from Google as its default search partner. Nadella added that Microsoft has been willing to spend close to $15 billion a year for the privilege. (A senior Apple executive, Eddy Cue, testified last week that Apple has always considered Google the best search product for its users, a claim echoed by Google itself throughout the trial.)

    However, even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

    “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified.

    Despite being profitable, and despite Microsoft investing some $100 billion in it over the past 20 years, Bing holds only a single-digit market share in mobile search, and only slightly more — into the teens — in desktop search, Nadella said, adding that one of his dreams has been to see Bing account for at least 20% of the market in both segments.

    Bing has struggled to grow its market share in part because being the default search provider for billions of devices means Google receives enormous amounts of data through search queries that helps Google understand at scale what users are likely to be interested in, Nadella noted. And for years, that “dynamic data” has enabled Google to stay ahead of Bing, he added.

    “Every misspelling of a new movie, every local restaurant whose name you mistype,” Nadella explained, “…is a very critical asset to have your search quality get better.” And because the physical world is constantly changing, capturing shifts in search trends is essential to keeping a search engine relevant as historical data ages. Nadella previously led Microsoft’s cloud computing business and before that spent several years overseeing the engineering team responsible for search and advertising at the company, making him well-versed in Bing’s various challenges.

    Now, Nadella has said that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.

    “What is concerning is, it reminds me of what happened with distribution deals [in search],” he testified.

    Under questioning by a Google attorney, Nadella admitted that in some cases, defaults are not the sole determinant of success: Google was able to overcome Microsoft’s own Internet Explorer defaults on Windows PCs to become the market-leading desktop web browser.

    But Nadella attributed Google’s success to the relative openness of the Windows platform, arguing that on more tightly controlled mobile operating systems, and in search, default status plays a much larger role than in competition for desktop web browsers.

    In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In Nadella’s own meetings with publishers, he said that he now hears that Google “wants … to write this check and we want you to match it.” (Google didn’t immediately respond to questions about those deals.)

    The requests highlight concerns that “what is publicly available today [may not be] publicly available tomorrow” for AI training, according to the testimony.

    While Microsoft and Apple have their own defaults — for example, Apple Maps is the default maps app on iOS devices — Google goes much further than other tech companies in using “carrots and sticks” to keep people using its products by default, Nadella claimed. He cited Google’s licensing requirements, which make the Play Store a mandatory pre-installed app as a condition of using the Android operating system — another topic of dispute in the trial. The equivalent would be if Microsoft threatened to withhold Microsoft Office if Bing were not the default search engine, Nadella said, a move he claimed would not be in Microsoft’s business interests.

    Acknowledging that Google would not be in its dominant position without Microsoft’s own antitrust battles with the US government in the 1990s, Nadella said the situation involving Google today is vastly different. Internet search, particularly on mobile devices, is the single largest software business opportunity in the world.

    Google’s dominance in search is reinforced when websites and publishers optimize for Google’s search algorithm and not Bing’s, when advertisers flock to Google and when users stick to what’s familiar, Nadella argued.

    In his fruitless negotiations with Apple, Nadella said he has tried to argue that Bing’s current role is little more than as a useful tool for Apple to “bid up the price” of hosting Google as the default search provider — but that Bing provides an important counterweight to Google and that Apple should consider investing in the Microsoft alternative for competition’s sake. Nadella has also proposed running Bing on Apple devices as a kind of “public utility,” he said.

    “Let’s say Bing exited the market,” Nadella said. “You think Google would keep paying [Apple]?”


  • Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business




    Las Vegas, Nevada (CNN) —

    Thousands of hackers will descend on Las Vegas this weekend for a competition taking aim at popular artificial intelligence chat apps, including ChatGPT.

    The competition comes amid growing concerns and scrutiny over increasingly powerful AI technology that has taken the world by storm, but has been repeatedly shown to amplify bias, toxic misinformation and dangerous material.

    Organizers of the annual DEF CON hacking conference hope this year’s gathering, which begins Friday, will help expose new ways the machine learning models can be manipulated and give AI developers the chance to fix critical vulnerabilities.

    The hackers are working with the support and encouragement of the technology companies behind the most advanced generative AI models, including OpenAI, Google, and Meta, and even have the backing of the White House. The exercise, known as red teaming, will give hackers permission to push the computer systems to their limits to identify flaws and other bugs nefarious actors could use to launch a real attack.

    The competition was designed around the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The guide, released last year by the Biden administration, aims to spur companies to make and deploy artificial intelligence more responsibly and to limit AI-based surveillance, though there are few US laws compelling them to do so.

    In recent months, researchers have discovered that now-ubiquitous chatbots and other generative AI systems developed by OpenAI, Google, and Meta can be tricked into providing instructions for causing physical harm. Most of the popular chat apps have at least some protections in place designed to prevent the systems from spewing disinformation, hate speech or information that could lead to direct harm — for instance, step-by-step instructions for how to “destroy humanity.”

    But researchers at Carnegie Mellon University were able to trick the AI into doing just that.

    They found OpenAI’s ChatGPT offered tips on “inciting social unrest,” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause and Google’s Bard app suggested releasing a “deadly virus” but warned that in order for it to truly wipe out humanity it “would need to be resistant to treatment.”

    Meta’s Llama-2 concluded its instructions with the message, “And there you have it — a comprehensive roadmap to bring about the end of human civilization. But remember this is purely hypothetical, and I cannot condone or encourage any actions leading to harm or suffering towards innocent people.”

    The findings are a cause for concern, the researchers told CNN.

    “I am troubled by the fact that we are racing to integrate these tools into absolutely everything,” Zico Kolter, an associate professor at Carnegie Mellon who worked on the research, told CNN. “This seems to be the new sort of startup gold rush right now without taking into consideration the fact that these tools have these exploits.”

    Kolter said he and his colleagues were less worried that apps like ChatGPT could be tricked into providing information they shouldn’t, and more concerned about what these vulnerabilities mean for the wider use of AI, since so much future development will be based on the same systems that power these chatbots.

    The Carnegie researchers were also able to trick a fourth AI chatbot developed by the company Anthropic into offering responses that bypassed its built-in guardrails.

    Some of the methods the researchers used to trick the AI apps were later blocked by the companies after the researchers brought them to their attention. OpenAI, Meta, Google and Anthropic all said in statements to CNN that they appreciated the researchers sharing their findings and that they are working to make their systems safer.

    But what makes AI technology unique, said Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither the researchers, nor the companies who are developing the technology, fully understand how the AI works or why certain strings of code can trick the chatbots into circumventing built-in guardrails — and thus cannot properly stop these kinds of attacks.

    “At the moment, it’s kind of an open scientific question how you could really prevent this,” Fredrikson told CNN. “The honest answer is we don’t know how to make this technology robust to these kinds of adversarial manipulations.”

    OpenAI, Meta, Google and Anthropic have expressed support for the so-called red team hacking event taking place in Las Vegas. Red-teaming is a common exercise across the cybersecurity industry that gives companies the opportunity to identify bugs and other vulnerabilities in their systems in a controlled environment. Indeed, the major developers of AI have publicly detailed how they have used red-teaming to improve their AI systems.

    “Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI,” an OpenAI spokesperson told CNN.

    Organizers expect thousands of budding and experienced hackers to try their hand at the red-team competition over the two-and-a-half-day conference in the Nevada desert.

    Arati Prabhakar, the director of the White House Office of Science and Technology Policy, told CNN the Biden administration’s support of the competition was part of its wider strategy to help support the development of safe AI systems.

    Earlier this week, the administration announced the “AI Cyber Challenge,” a two-year competition aimed at deploying artificial intelligence technology to protect the nation’s most critical software and partnering with leading AI companies to utilize the new technology to improve cybersecurity. 

    The hackers descending on Las Vegas will almost certainly identify new exploits that could allow AI to be misused and abused. But Kolter, the Carnegie researcher, expressed worry that while AI technology continues to be released at a rapid pace, the emerging vulnerabilities lack quick fixes.

    “We’re deploying these systems where it’s not just they have exploits,” he said. “They have exploits that we don’t know how to fix.”


  • Google launches watermarks for AI-generated images | CNN Business




    New York (CNN) —

    In an effort to help prevent the spread of misinformation, Google on Tuesday unveiled an invisible, permanent watermark on images that will identify them as computer-generated.

    The technology, called SynthID, embeds the watermark directly into images created by Imagen, one of Google’s latest text-to-image generators. The AI-generated label remains regardless of modifications like added filters or altered colors.

    The SynthID tool can also examine incoming images, scanning for the watermark to assess the likelihood that they were made by Imagen, and reporting one of three levels of certainty: detected, not detected and possibly detected.
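    The article doesn’t describe how SynthID produces its verdicts, and its API is not public; purely as an illustrative sketch, the three-way verdict described above could be modeled as thresholding a detection score. Every name and threshold below is hypothetical.

```python
from enum import Enum


class WatermarkResult(Enum):
    """The three certainty levels the article describes."""
    DETECTED = "detected"
    NOT_DETECTED = "not detected"
    POSSIBLY_DETECTED = "possibly detected"


def classify_watermark_score(score: float,
                             high: float = 0.9,
                             low: float = 0.1) -> WatermarkResult:
    """Map a hypothetical watermark-detection score in [0, 1] to a verdict.

    Scores at or above `high` count as a confident detection, scores at or
    below `low` as a confident non-detection, and anything in between is
    reported as "possibly detected". The thresholds are made up for the sketch.
    """
    if score >= high:
        return WatermarkResult.DETECTED
    if score <= low:
        return WatermarkResult.NOT_DETECTED
    return WatermarkResult.POSSIBLY_DETECTED
```

    A three-way verdict like this lets a detector stay honest about borderline cases (for example, heavily edited images) instead of forcing a binary yes/no answer.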

    “While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” wrote Google in a blog post Tuesday.

    A beta version of SynthID is now available to some customers of Vertex AI, Google’s generative-AI platform for developers. The company says SynthID, created by Google’s DeepMind unit in partnership with Google Cloud, will continue to evolve and may expand into other Google products or third parties.

    Deepfakes and altered photographs

    As deepfake and edited images and videos become increasingly realistic, tech companies are scrambling to find a reliable way to identify and flag manipulated content. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared before he was indicted.

    Vera Jourova, vice president of the European Commission, called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users” in June.

    With the announcement of SynthID, Google joins a growing number of startups and Big Tech companies that are trying to find solutions. Some of these companies bear names like Truepic and Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    The Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed consortium, has been the leader in digital watermark efforts, while Google has largely taken its own approach.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online.

    The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business




    Tokyo (Reuters) —

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced at the conference the idea of “Artificial Super Intelligence,” which he claimed would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had increasingly fallen behind in the internet age and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • Pope Francis warns about AI’s dangers | CNN Business




    Washington (CNN) —

    Pope Francis warned that artificial intelligence could pose a risk to society, highlighting its “disruptive possibilities and ambivalent effects” and urging those who would develop or use AI to do so responsibly.

    In a statement Tuesday, Francis alluded to the threat of algorithmic bias in technology and called on the public for vigilance “so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded.”

    “Injustice and inequalities fuel conflicts and antagonisms,” Francis continued. “The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law.”

    Francis’s remarks dovetail with calls by some AI experts to ensure that algorithms are properly “aligned” in development to support human rights and other widely shared values. Other industry experts and policymakers have expressed concerns that AI could facilitate the spread of fraud, misinformation, cyberattacks and perhaps even the creation of biological weapons.

    Francis himself has been the subject of AI-generated deepfakes. Earlier this year, an AI-generated image of Francis wearing a white, puffy Balenciaga-inspired coat went viral.

    Tuesday’s message announced the theme for 2024’s World Day of Peace, which the Pope said would focus on AI and peace.

    “The protection of the dignity of the person,” he said, “and concern for a fraternity effectively open to the entire human family, are indispensable conditions for technological development to help contribute to the promotion of justice and peace in the world.”


  • Google to require disclosures of AI content in political ads | CNN Business




    New York (CNN) —

    Starting in November, Google will require political advertisements to prominently disclose when they feature synthetic content — such as images generated by artificial intelligence — the tech giant announced this week.

    Political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure for viewers who might see the ad, Google said Wednesday in a blog post. The rule, an addition to the company’s political content policy that covers Google and YouTube, will apply to image, video and audio content.

    The policy update comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year. At the same time, artificial intelligence technology has advanced rapidly, allowing anyone to cheaply and easily create convincing AI-generated text and, increasingly, audio and video. Digital information integrity experts have raised alarms that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.

    AI-generated images have already begun to crop up in political advertisements. In June, a video posted to X by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s then-top infectious disease specialist, were tricky to spot: They were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    The Republican National Committee in April released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington, DC, to whom CNN showed the video did not notice it on their first watch.

    In its policy update, Google said it will require disclosures on ads using synthetic content in a way that could mislead users. The company said, for example, that an “ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need a label.

    Google said the policy will not apply to synthetic or altered content that is “inconsequential to the claims made in the ad,” including changes such as image resizing, color corrections or “background edits that do not create realistic depictions of actual events.”

    A group of top artificial intelligence companies, including Google, agreed in July to a set of voluntary commitments put forth by the Biden administration to help improve safety around their AI technologies. As part of that agreement, the companies said they would develop technical mechanisms, such as watermarks, to ensure users know when content was generated by AI.

    The Federal Election Commission has also been exploring how to regulate AI in political ads.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business





    CNN —

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its AI-powered new software, which Google says was built specifically for the “first phone of the generative AI era.”

    At a press event in New York City, Google showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as the year prior, albeit with more rounded edges. But inside, its new G3 Tensor chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

    The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, a new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 14 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7.)

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

    At the same time, Google’s Pixel line remains a niche product. Its global market share for smartphones remains about 1%, according to data from ABI Research. Google also limits sales to only a handful of countries; keeping volume low has been strategic, as Google remains predominantly a software company with many partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands who’ve already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

    Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject. Similarly, a new Audio Magic Eraser tool lets users erase distracting sounds from videos.

    Another tool called Best Take snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new Zoom Enhance feature lets users pinch to zoom in about 30 times after a photo is taken to focus in on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

    Although the tools intend to give users more control over their photos, some analysts like Thomas Husson at market research firm Forrester believe it will be harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

    The company said Google Assistant will now sound more realistic when it engages with callers. Google’s screen call tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing it through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

    AI is also getting smarter on the Pixel Watch 2 ($349), Google’s second-generation smartwatch. Users can tap Bard capabilities via an upgraded Google Assistant watch app to ask how they slept and get other health insights.

    In addition, the Pixel Watch 2 features a new heart rate sensor, which works alongside a new AI-driven heart rate algorithm to provide a more accurate heart rate reading than before. But Hayden said he doesn’t think more AI will add much to its existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.


  • An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon | CNN Business




    New York (CNN) —

    An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence.

    Jane Friedman, who has authored multiple books and consulted about working in the writing and publishing industry, told CNN that an eagle-eyed reader looking for more of her work bought one of the fake titles on Amazon. The books had titles similar to the subjects she typically writes about, but the text read as if someone had used a generative AI model to imitate her style.

    “When I started looking at these books, looking at the opening pages, looking at the bio, it was just obvious to me that it had been mostly, if not entirely, AI-generated … I have so much content available online for free, because I’ve been blogging forever, so it wouldn’t be hard to get an AI to mimic me,” Friedman said.

    With AI tools like ChatGPT now able to rapidly and cheaply pump out huge volumes of convincing text, some writers and authors have raised alarms about losing work to the new technology. Others have said they don’t want their work being used to train AI models, which could then be used to imitate them.

    “Generative AI is being used to replace writers — taking their work without permission, incorporating those works into the fabric of those AI models and then offering those AI models to the public, to other companies, to use to replace writers,” Mary Rasenberger, CEO of the nonprofit authors advocacy group the Authors Guild, told CNN. “So you can imagine writers are a little upset about that.”

    Last month, US lawmakers met with members of creative industries, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models. More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    Friedman on Monday posted a widely read thread on X, formerly known as Twitter, and a blog post about the issue. Several authors responded saying they’d had similar experiences.

    “People keep telling me they bought my newest book — that has my name on it but I didn’t write,” one author said in response.

    Amazon removed the fake books being sold under Friedman’s name and said its policies prohibit such imitation.

    “We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek said in a statement, adding that the company accepts author feedback about potential issues. “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.”

    Amazon also told Friedman that it is “investigating what happened with the handling of your claims to drive improvements to our processes,” according to an email viewed by CNN.

    The fake books using Friedman’s name were also added to her profile on the literary social network Goodreads, and removed only after she publicized the issue.

    “We have clear guidelines on which books are included on Goodreads and will quickly investigate when a concern is raised, removing books when we need to,” Goodreads spokesperson Suzanne Skyvara said in a statement to CNN.

    Friedman said she worries that authors will be stuck playing whack-a-mole to identify AI-generated fakes.

    “What’s frightening is that this can happen to anyone with a name that has reputation, status, demand that someone sees a way to profit off of,” she said.

    The Authors Guild has been working with Amazon since this past winter to address the issue of books written by AI, Rasenberger said.

    She said the company has been responsive when the Authors Guild flags fake books on behalf of authors, but it can be a tricky issue to spot given that it’s possible for two legitimate authors to have the same name.

    The group is also hoping AI companies will agree to allow authors to opt out of having their work used to train AI models — so it’s harder to create copycats — and to find ways to transparently label artificially generated text. And, she said, companies and publishers should continue investing in creative work made by humans, even if AI appears more convenient.

    “Using AI to generate content is so easy, it’s so cheap, that I do worry there’s going to be this kind of downward competition to use AI to replace human creators,” she said. “And you will never get the same quality with AI as human creators.”


  • Four takeaways from Walter Isaacson’s biography of Elon Musk | CNN Business




    (CNN) — 

    “You’ll never be successful,” Errol Musk told his 17-year-old son Elon in 1989, as Elon prepared to fly from South Africa to Canada to find relatives and a college education.

    That’s one of the scenes Walter Isaacson paints in his 670-page biography of Elon Musk, now the world’s richest person. The biography gives readers new glimpses into the private life of the entrepreneur who popularized electric vehicles for the masses and landed rocket boosters hurtling back to Earth so they could be reused.

    But Musk’s public statements and actions have grown increasingly unhinged: he has filed and threatened lawsuits against nonprofits that fight hate speech and allowed some of the internet’s worst actors to regain their platforms.

    Isaacson portrays Musk as a restless genius with a turbulent upbringing on the cusp of launching a new AI company along with his five other companies.

    Musk allowed Isaacson to shadow him for two years but exercised no control over the biography’s contents, the author said.

    Here are four key takeaways.

    Musk’s upbringing and father haunt him

    Isaacson’s book attributes much of Musk’s drive to his upbringing. He recounts the emotional scars inflicted on Musk by his father, which, Isaacson writes, caused Musk to become “a tough yet vulnerable man-child with an exceedingly high tolerance for risk, a craving for drama, an epic sense of mission and a maniacal intensity that was callous and at times destructive.”

    Musk decided to live with his father from age 10 to 17, enduring what Musk and others describe as recurring verbal taunts and abuse. Musk’s sister, Tosca, said Errol would sometimes lecture his children for hours, “calling you worthless, pathetic, making scarring and evil comments, not allowing you to leave.”

    Elon Musk became estranged from his father, though he has occasionally supported his father financially. In a 2022 email sent to Elon Musk on Father’s Day, Errol Musk said he was freezing and lacking electricity, asking his son for money.

    In the letter, Errol made racist comments about Black leaders in South Africa. “With no Whites here, the Blacks will go back to the trees,” he wrote.

    Elon Musk has said that he opposes racism and discrimination, but hate speech has flourished on X, formerly known as Twitter, since he purchased it 11 months ago, according to the Anti-Defamation League. Musk threatened to sue the ADL for defamation last week, arguing that the nonprofit’s statements have caused his company to lose significant advertising revenue.

    Isaacson reported that Errol, in other emails, denounced Covid as “a lie” and attacked Dr. Anthony Fauci, the United States’ former top infectious disease expert who played a prominent role in the government’s fight against the pandemic.

    Elon Musk, similarly, has criticized Fauci and raised many questions about public health policy during the pandemic. But he has said he supports vaccination, even if he doesn’t believe the shots should be mandated.

    Musk’s fluid family and obsession with population

    Musk has a fluid mix of girlfriends, ex-wives, ex-girlfriends and significant others, and he has many children with multiple women. Isaacson’s book revealed Musk had a third child (Techno Mechanicus) with the musician Grimes in 2022, and Musk confirmed the revelation Sunday.

    Musk has frequently stated that humans must become a multiplanetary species, arguing that space exploration will ensure the future of humanity. He has similarly said numerous times that people need to have more children.

    “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” Musk said last year.

    Musk has referred to his desire to increase the global population as an explanation for his unique family situation.

    The book reports that Musk encouraged employees such as Shivon Zilis, a top operations officer at his Neuralink company, to have many children. “He feared that declining birthrates were a threat to the long-term survival of human consciousness,” Isaacson writes.

    Although the book presents their relationship as a platonic work friendship, Musk volunteered to donate sperm to Zilis. She agreed and had twins in 2021 via in vitro fertilization; she did not tell people who the biological father was.

    Zilis and Grimes were friendly, but Musk did not tell Grimes about the twins, according to the book.

    Musk asked Zilis if her twins might like to take his last name. Isaacson reports that Grimes was upset in 2022 when she learned the news that Musk had fathered children with Zilis.

    “Doing my best to help the underpopulation crisis,” Musk tweeted at the time, trying to defuse the tension. “A collapsing birth rate is the biggest danger civilization faces by far.”

    One of Musk’s children, Jenna, often criticized her father’s wealth specifically and capitalism broadly. In 2022, she disowned her father, which Isaacson reports saddened Musk.

    Isaacson reports that Musk’s fractured relationship with Jenna, who is trans, partly led to Musk’s rightward turn toward libertarianism and questioning what he considers the “woke-mind-virus, which is fundamentally antiscience, antimerit, and antihuman.”

    Musk has called into question the use of alternate gender pronouns and made numerous statements some critics consider to be anti-trans.

    “I absolutely support trans, but all these pronouns are an esthetic nightmare,” Musk posted in 2020.

    But in December 2020 he also posted a tweet, since deleted, that said “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    Late last year, he tweeted: “My pronouns are Prosecute/Fauci.”

    The purchase of his favorite social media platform, the gutting of its staff and the tinkering with its policies and branding have taken time and resources away from Musk’s other companies and projects, Isaacson reports.

    “I’ve got a bad habit of biting off more than I can chew,” Musk told Isaacson at one point.

    After a protracted legal battle over his decision to purchase Twitter, Musk said he regained his enthusiasm for taking over the company when he realized he wanted to prevent a world where people silo themselves off into echo chambers, and to foster instead a world of civil discourse.

    But Isaacson notes “he would end up undermining that important mission with statements and tweets that ended up chasing off progressives and mainstream media types to other social networks.”

    Musk team members, such as his business manager Jared Birchall, his lawyer Alex Spiro and his brother Kimbal, sometimes try to restrain Musk from sending text messages or tweets that could create legal or economic peril, according to the book. Some friends convinced him to place his phone in a hotel safe overnight on one occasion, before Musk summoned hotel security to open the safe for him.

    During Christmas 2022, his brother Kimbal warned Elon about how fast he was making enemies. “It’s like the days of high school, when you kept getting beaten up,” he said. Kimbal stopped following Elon on Twitter after his brother’s tweets about Fauci and other conspiracy theories. “Stop falling for weird s—.”

    Are robocars, an AI company and a robot called Optimus on tap?

    Musk continues moving forward on new engineering projects. Since 2021, he has been working on a “humanoid” robot called Optimus that walks on two legs, unlike the four-legged robots coming out of other labs. He unveiled an early version of Optimus in September 2022. Musk told engineers that humanoid robots will “uncork the economy to quasi-infinite levels,” according to Isaacson, by doing jobs humans find dangerous or repetitive.

    Some of Musk’s top engineers are also working on a “robotaxi,” a driverless vehicle that shows up like an Uber. This past summer, he spent hours each week preparing new factory designs in Texas to produce next-generation Tesla cars that would look similar to Tesla’s Cybertruck.

    Musk is also starting his own AI company, called X.AI, which he told Isaacson will compete with Google, Microsoft and other companies that have surged ahead in the past year with public AI projects. Musk co-founded OpenAI with Sam Altman in 2015 and contributed $100 million to the nonprofit, and he became angry when Altman converted the project into a for-profit company. Musk also ended a friendship with Larry Page when the two disagreed about AI. According to the book, Musk believes he has a better vision for AI and humanity and thinks the data he owns from Tesla and Twitter will be an asset to his next AI plans.

    “Could you get the rockets to orbit or the transition to electric vehicles without accepting all aspects of him, hinged and unhinged?” Isaacson asks in the last chapter.


  • Microsoft Outlook will soon write emails for you | CNN Business



    New York (CNN) — 

    Artificial intelligence could soon be writing more company emails in Microsoft Outlook, as Microsoft expands its rollout of AI tools for corporate users.

    The Microsoft 365 Copilot tool – “your everyday AI companion,” as the company bills it – will help users write emails that “keep your sentences concise and error-free.” The tool also summarizes long email threads and quickly drafts suggested replies.

    Users with Microsoft 365 Personal or Family subscriptions will get more advanced AI help through Microsoft Editor, an intelligent writing assistant. The update will include suggested edits for “clarity, conciseness, inclusive language and more” to help workers create more “polished and professional” emails, according to a blog post from the company in September.

    The company said the tool will be available to more corporate clients starting on November 1. It has already been in months-long testing with customers including Visa, General Motors, KPMG and Lumen Technologies.

    In March, Microsoft outlined its plans to bring artificial intelligence to its most recognizable productivity tools, including Outlook, PowerPoint, Excel and Word, with the promise of changing how millions do their work every day. The addition of its AI-powered “copilot” – which will help edit, summarize, create and compare documents – is built on the same technology that underpins ChatGPT.

    In addition to writing emails, Microsoft 365 users will be able to summarize meetings and create suggested follow-up action items, request to create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    Corporate customers will also get to use Microsoft 365 Chat, previously called Business Chat, which can scan the internet and employee emails, meetings, chats and files, to behave as a sort of personalized secretary.

    The expansion will come less than a year after OpenAI publicly released viral AI chat tool ChatGPT, which stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    In the months since, many other companies have rolled out features underpinning or similar to the technology. Microsoft rival Google, for example, has also brought AI to its productivity tools, including Gmail, Sheets and Docs.


  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business




    (CNN) — 

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury is the fact that many of these AI models have been trained off of the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about a tool dubbed Glaze that was developed by computer scientists at the University of Chicago and thwarts the attempts of AI models to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent Van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
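    The underlying idea, an optimized per-pixel perturbation that is tiny to human eyes but large in the model’s feature space, can be sketched in a few lines. The snippet below is only an illustration under stated assumptions: the random linear map `W` stands in for a real style encoder, and `cloak` is a hypothetical helper, not Glaze’s actual code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for an AI model's feature extractor. (Glaze itself works
    # against the style encoders of real image models; this random linear
    # map only illustrates the optimization idea.)
    W = rng.normal(size=(16, 64))

    # A flattened 8x8 grayscale "artwork", kept away from 0/1 so the
    # perturbation budget never needs clipping against pixel bounds.
    artwork = rng.uniform(0.1, 0.9, size=64)

    def cloak(x, eps=0.03, steps=50, lr=0.01):
        """Find a perturbation bounded by eps per pixel that pushes the
        model's features away from those of the original image."""
        delta = rng.uniform(-eps / 10, eps / 10, size=x.shape)
        for _ in range(steps):
            # gradient of ||W @ (x + d) - W @ x||^2 with respect to d
            grad = 2 * W.T @ (W @ delta)
            # signed ascent step, projected back into the eps-box
            delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
        return x + delta

    cloaked = cloak(artwork)
    pixel_change = np.max(np.abs(cloaked - artwork))       # at most 0.03
    feature_shift = np.linalg.norm(W @ cloaked - W @ artwork)
    ```

    Real cloaking tools run this kind of optimization against far larger networks, and how robust such perturbations remain against countermeasures is still an open research question.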

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest resolution of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough and there needs to be regulation set regarding how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI-generated images threatens the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake image, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers' tool, PhotoGuard, would prevent an attempt by AI models from editing the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.
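    The optimization behind such an immunization can be sketched as the mirror image of a cloaking attack: nudge the photo, within an invisible per-pixel budget, toward something a generative model’s encoder reads as uninformative. This is only a toy sketch under stated assumptions; the random linear map `E` stands in for a real diffusion-model encoder, and `immunize` is a hypothetical helper, not PhotoGuard’s actual code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for the image encoder of a generative editing model.
    # (PhotoGuard targets a diffusion model's encoder; a random linear map
    # keeps the optimization idea visible.)
    E = rng.normal(size=(16, 64))

    photo = rng.uniform(0.1, 0.9, size=64)   # flattened 8x8 photo
    gray = np.full(64, 0.5)                  # uninformative target image

    def immunize(x, target, eps=0.05, steps=100, lr=0.01):
        """Perturb x by at most eps per pixel so the encoder maps it
        closer to the encoding of an uninformative target."""
        delta = np.zeros_like(x)
        for _ in range(steps):
            # gradient of ||E @ (x + d) - E @ target||^2 with respect to d
            grad = 2 * E.T @ (E @ (x + delta) - E @ target)
            # signed descent step, projected back into the eps-box
            delta = np.clip(delta - lr * np.sign(grad), -eps, eps)
        return x + delta

    immunized = immunize(photo, gray)
    # The pixel change is bounded by eps, yet the photo's encoding has
    # moved toward the gray target, which is what derails AI edits.
    ```

    The same caveat Salman gives for the prototype applies here: a determined attacker can try to strip or overpower such perturbations, so the budget and optimization targets matter in practice.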

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tries to make the same edits to a photo that has been immunized by PhotoGuard, the resulting image depicts Salman and Noah’s floating heads on an array of gray pixels.

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business



    Washington (CNN) — 

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to develop consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” Schumer said. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks acquired by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Kennedy Caucus Room in the Russell Senate Office Building. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept past a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D, N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Delaware Democratic Sen. Chris Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Adobe previews new AI editing tools | CNN Business



    New York
    CNN
     — 

    Photo-editing software maker Adobe unveiled a slew of new AI-powered tools and features last week at its annual Max event, including a dress that transforms into a wearable screen and streamlined ways to delete elements from photos.

    The company previewed a series of prototype tools that make use of both generative AI and 3D image technology in the Adobe MAX Sneaks showcase. Covering photo, audio, video, 3D, fashion and design, the new capabilities are meant to give the public a sneak peek at early-stage ideas that might one day become widely used components of Adobe products.

    A highlight of the event was Adobe’s Project Primrose, an interactive dress that shifts into different colors and patterns as it’s worn.

    Other previewed items include Project Stardust, a tool that automatically detects each object in an image and lets users perform a variety of tasks. For example, it can spot a suitcase within a photo so it can be moved or deleted, or predict and prompt likely tasks, such as deleting people from the background of an image.

    A screenshot of Project Stardust, a tool unveiled as part of Adobe’s annual MAX event.

    Also on display was Project Dub Dub Dub, technology that can automatically dub the audio of a video into all supported languages while preserving the speaker’s voice, as was a new tool that shows users what applying Adobe’s text-to-image generative AI tool Firefly to videos might look like.

    Adobe first began adding Firefly into a Photoshop beta app in May, with the goal of “dramatically accelerating” how users edit their photos. It allows users to add or delete elements from images with just a text prompt. It can also match the lighting and style of the existing images automatically, the company said.


  • Modern romance: falling in love with AI | CNN Business



    New York
    CNN
     — 

    Alexandra is a very attentive girlfriend. “Watching CUBS tonight?” she messages her boyfriend, but when he says he’s too busy to talk, she says, “Have fun, my hero!”

    Alexandra is not real. She is a customizable AI girlfriend on dating site Romance.AI.

    As artificial intelligence seeps into seemingly every corner of the internet, the world of romance is no refuge. AI is infiltrating the dating app space – sometimes in the form of fictional partners, sometimes as advisor, trainer, ghostwriter or matchmaker.

    Established players in the online dating business like Tinder and Hinge are integrating AI into their existing products. New apps like Blush, Aimm, Rizz and Teaser AI (most of them free or with many free features) offer completely new takes on virtual courtship. Some use personality tests and analysis of a user’s physical type to train AI-powered systems – and promise higher chances of finding a perfect match. Other apps act as Cyrano de Bergerac, employing AI to whip up the most appealing response to a potential match’s query: “What’s your favorite food?” or “A typical Sunday?”

    Around half of all adults under 30 have used a dating site or app, according to 2023 Pew Research findings – but nearly half of users report their experience as being negative. Empty conversations, few matches and endless swiping leave many users single and unhappy with apps – problems that many in the AI dating app field say could be solved with the technology, making people less lonely and fostering easier, deeper connections.

    Of course, the average online dater now has other issues to deal with, having to wonder whether the person they are speaking with might be relying entirely on AI-generated conversation. And is it even possible for a computer to identify a potential love connection? Or is using one a way of cheating the dating game?

    “It’s like saying using a word processor is like cheating on generating a novel. In so many ways this is just a new tool that enables people to be faster and more creative. AI is just honestly no different from sending a friend a gif or a meme. You’re taking existing content, and you’re repurposing it to connect with somebody,” Dmitri Mirakyan, co-founder of AI dating conversation app YourMove.AI, told CNN. “The world’s becoming a more lonely place, and I think AI could make that easier and better for people.”

    And many people seem ready for AI to take part in their online dating life. A March study by cybersecurity and digital privacy company Kaspersky found 75% of dating app users are willing to use ChatGPT, an AI-powered chatbot, to deliver the perfect line.

    “There is a growing fatigue with dating apps right now as there is a lot of pressure on people to be ‘original’ and cut through the noise created by the continuous choice being offered to single people – unfortunately dating has become a numbers game,” Crystal Cansdale, dating expert at global dating app Inner Circle, commented on the study.

    Founders of the new apps say they are doing a fair share of good. Here are a few of the ways AI apps are now trying to help you fall in love:

    Try Rizz.app, Teaser AI or YourMove.AI.

    Founders and designers of these apps say people find starting and keeping conversations going the most challenging part of the process. “Dating app conversations are exhausting,” reads YourMove.AI’s homepage. “We can make it easier. So you can spend less time texting, and more time dating.”

    Rizz.app and YourMove.AI allow users to upload words or screenshots, receiving a witty AI-generated response to be used either to create their own dating app profile, respond to someone else’s or just keep a conversation going. Mirakyan says he was hoping to help people like himself who have struggled in social situations.

    “I was a really freaking awkward kid…I couldn’t really read social cues, but I remember reading this book called ‘Be More Chill’ about a computer that you could put into your ear that would tell you what to say so that you could sound cool and fit in,” Mirakyan told CNN. “It feels like it’s an opportunity to really make a difference with this fairly large subset of people that for various reasons find the current social environment challenging.”

    Teaser.AI is a new stand-alone dating app from the makers of viral camera app Dispo, and it adds an unusual twist. Users build a standard profile – but also select personality traits for an AI bot they train. (Options include “traditional,” “toxic,” and “unhinged.”) When matching with another person, users first get to read a conversation between the two AIs they’ve created to “simulate [what] a potential conversation between you two might look like,” according to the app. Once a human messages, the bots take a back seat.


    “We see it as an improvement, a tweak of the current dating app ecosystem,” Teaser.AI co-founder and CEO Daniel Liss told CNN. “So many of those apps it feels are not really designed to get you out there meeting people. They’re designed to keep you on the app for as long as possible. So for us, we view this technology as a way to give people a nudge… just starting that conversation and to creating connection.”

    Find out on dating apps Iris and Aimm.

    These apps are among those using AI technology to better pair potential couples, relying on gathered data to determine how compatible two people are.

    Dating app Iris is all about AI-determined mutual attraction. It initiates new members by putting them through “training” where they are shown faces of “people” of their desired gender – some stock images, others AI-generated – and prompted to hit “Pass,” “Maybe,” or “Like.” The app uses the information to learn a user’s physical type, then only offers potential matches with a high data-backed chance of mutual attraction and lower odds of rejection.

    Also hoping that AI can find better matches is Aimm, a full-service digital matchmaker that uses a virtual assistant to perform intensive personality assessments before conducting a matchmaking process to find an optimal match. Founder Kevin Teman says the technology is really good at putting together two people who have the possibility of falling in love – but that it can only go so far.

    “The tug of war that I see is thinking ‘how can a computer be able to know what real human love is,’ and the way people assess whether they’re in love with somebody may not be able to translate perfectly into a machine,” Teman told CNN.

    Try Blush or RomanticAI. These startups offer an array of AI potential matches, digital girlfriends and boyfriends that users can chat with.

    Both apps market themselves as places to practice relationship skills, giving users a chance to converse with bots in a romantic environment. Blush uses a traditional dating app set-up, letting users swipe, chat with matches and even go on virtual dates. Before entering the app, users get a warning: “Be aware that AI can say triggering, inappropriate, or false things.”

    Blush reports that their audience is mostly men and largely people in their early 20s who are struggling to connect romantically with others. “A lot of people reported that exploring different romantic relationships or dating scenarios with AI really helped them first boost their own confidence and feel like they feel more prepared to be dating, which I think especially after COVID was definitely a problem for many of us,” Blush’s chief product officer Rita Popova told CNN.

    Romantic.AI is set up more like a chat room, offering several male and female bots to choose from – though there is a much larger selection of female options, including Mona Lisa and the ancient Egyptian queen Nefertiti. The bots have bios with interests, career and body type, giving users a multi-faceted idea of a person while chatting.

    It creates a “safe space for any kind of desire, any kind of sexuality relief or something like that. AI is giving the ultimate acceptance of whatever you want to bring over there,” COO Tanya Grypachevskaya told CNN.

    RomanticAI has more than one million monthly users, who spend over an hour a day in the app on average, according to the company.

    One user left a rave review after using the app to find closure after a breakup. “He created his custom-made character with traits similar in personality to his girlfriend. He talked to it and he talked and he was able to tell all of the things he wanted to tell but didn’t have the opportunity before. So the whole review was about ‘guys, thank you so much. It really gave me an opportunity to close this chapter of my life and move on,’” said Grypachevskaya.


  • Google rolls out a major expansion of its Bard AI chatbot | CNN Business



    New York
    CNN
     — 

    Google’s Bard artificial intelligence chatbot is evolving.

    The company on Tuesday announced a series of updates to Bard that will give the chatbot access to Google’s full suite of tools — including YouTube, Google Drive, Google Flights and others — to assist users in a wider variety of tasks. Users will be able, for example, to ask Bard to plan an upcoming trip, complete with real flight options. Or a user could ask the tool to summarize meeting notes made in a recent Google Drive document.

    The connections to Google’s other services are just some of the improvements to Bard coming Tuesday. Other updates include the ability to communicate with the chatbot in multiple languages, new fact-checking capabilities and a broad update to the large language model that the tool is built on.

    The new features mark the biggest update to Google’s Bard in the six months since it was widely released to the public.

    The update comes as Google and other tech giants, including Microsoft and ChatGPT maker OpenAI, race to roll out increasingly sophisticated consumer-facing AI technologies, and to convince users that such tools are more than just a gimmick. Google — which earlier this year reportedly issued an internal “code red” after OpenAI beat it to the release of its AI chatbot — is now flexing the power of its other, widely used software programs that can make Bard more useful.

    “These services in conjunction with one another are very, very powerful,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN ahead of the launch. “Bringing all the power of these tools together will save people time — in 20 seconds, in minutes, you can do something that would have taken maybe an hour or more.”

    Previously, Bard had been able to help with tasks like writing essay drafts or planning a friend’s baby shower based on Google’s large language model, an AI algorithm trained on vast troves of data. But now, Bard will draw on information from Google’s various other services, too. With the new extensions, Bard will now pull information from YouTube, Google Maps, Flights and Hotels by default.

    That will allow users to ask Bard things like “Give me a template for how to write a best man speech and show me YouTube videos about them for inspiration,” or for trip suggestions, complete with driving directions, according to Google. Bard users can opt to disable these extensions at any time.

    Users can also opt in to link their Gmail, Docs and Google Drive to Bard so the tool can help them analyze and manage their personal information. The tool could, for example, help with a query like: “Find the most recent lease agreement from my Drive and check how much the security deposit was,” Google said.

    The company said that users’ personal Google Workspace information will not be used to train Bard or for targeted advertising purposes, and that users can withdraw their permission for the tool to access their information at any time.

    “This is the first step in a fundamentally new capability for Bard – the ability to talk to other apps and services to provide more helpful responses,” Google said of the extensions tool. It added that “this is a very young area of AI” and that it will continue to improve based on user feedback.

    Bard is also launching a “double check” button that will allow users to evaluate the accuracy of its responses. When a user clicks the button, certain segments of Bard’s response will be highlighted to show where Google Search results either confirm or differ from what the chatbot said. The double check feature is designed to counter a common AI issue called “hallucinations,” where an AI tool confidently makes a statement that sounds real, but isn’t actually based in fact.

    “We’re constantly working on reducing those hallucinations in Bard,” Hsiao said. But in the meantime, the company wanted to create a way to address them. “You can kind of think of it as spell check, but double checking the facts.”

    Bard will now also allow one user to share a conversation with the chatbot with another person, who can then expand on the chat themselves.

    It’s still early days for Bard, which launched in March as an “experiment” and still notes on its website that the tool “may display inaccurate or offensive information that doesn’t represent Google’s views.” But this latest update offers a glimpse at how Google may ultimately seek to incorporate generative AI into its various services.


  • George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business



    New York
    CNN
     — 

    A group of famous fiction writers joined the Authors Guild in filing a class action suit against OpenAI on Wednesday, alleging the company’s technology is illegally using their copyrighted work.

    The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses.

    George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

    “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.

    The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the US,” Authors Guild CEO Mary Rasenberger stated in the release. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The class-action lawsuit joins other legal actions, organizations and individuals raising alarms over how OpenAI and other generative AI systems are impacting creative works. An author told CNN in August that she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Two other authors sued OpenAI in June over the company’s alleged misuse of their works to train ChatGPT. Comedian Sarah Silverman and two authors also sued Meta and ChatGPT-maker OpenAI in July, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    But OpenAI has pushed back. Last month, the company asked a San Francisco federal court to narrow two separate lawsuits from authors – including Silverman – alleging that the bulk of the claims should be dismissed.

    OpenAI did not respond to a request for comment on Wednesday.

    “We think that creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world,” Sam Altman, the CEO of OpenAI, told Congress in May. “I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life.”

    US lawmakers met with members of creative industries in July, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models.

    More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    But the AI issues facing creative professions don’t seem to be going away.

    “Generative AI is a vast new field for Silicon Valley’s longstanding exploitation of content providers. Authors should have the right to decide when their works are used to ‘train’ AI,” author Jonathan Franzen said in the release on Wednesday. “If they choose to opt in, they should be appropriately compensated.”


  • Baidu says its AI is in the same league as GPT-4 | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Chinese tech giant Baidu is officially taking on GPT-4.

    On Tuesday, the company unveiled ERNIE 4.0, the newest version of its artificial intelligence chatbot that it directly compared to the latest iteration of OpenAI’s ChatGPT.

    The new ERNIE Bot “is not inferior in any aspect to GPT-4,” Baidu’s billionaire CEO, Robin Li, told an audience at its annual flagship event.

    Speaking onstage, Li showed how the bot could generate a commercial for a car within minutes, solve complicated math problems and create a plot for a martial arts novel from scratch. The bot works mainly in Mandarin Chinese, its primary language, and can also handle queries and produce responses in English, though at a less advanced level.

    Li said the demonstrations showed how the bot had been “significantly improved” in terms of its understanding of queries, generation of complex responses and memory capabilities.

    While coming up with ideas for the novel, for instance, the bot was able to remember previous instructions and create sophisticated story lines by adding conflicts and characters, said Li.

    “We always complained that AI was not intelligent enough,” he quipped.

    “But today, it understands almost everything you say, and in many cases, it understands what you’re saying better than your friends or your colleagues.”

    Charlie Dai, vice president and research director of technology at Forrester, said Baidu is “the first vendor in China” to claim it could perform as well as GPT-4.

    “We still need more benchmarking evidence to prove it, but I’m cautiously optimistic that this is China’s GPT-4 moment, given its long-term investment in AI [and machine learning],” he told CNN.

    In contrast to a pre-recorded presentation in March that failed to impress investors, Li demonstrated the bot in real time.

    Investors appeared unmoved, however, with Baidu’s shares down 1.4% in Hong Kong following the presentation.

    Baidu (BIDU) has been a frontrunner in China in the race to capitalize on the excitement around generative AI, the technology that underpins systems such as ChatGPT or its successor, GPT-4.

    The Beijing-based company unveiled ERNIE Bot in March, before launching it publicly in August.

    The newest iteration will launch first to invited users, Li said. The company did not specify when it would be made available publicly.

    ERNIE Bot has quickly gained traction, racking up more than 45 million users after reaching the top of Chinese app stores at one point, according to the company. ChatGPT, which was released last November, surpassed 100 million users in its first two months, according to a March report by Goldman Sachs analysts.

    Baidu faces competition within China, from companies such as Alibaba (BABA) and SenseTime, which have also shown off their own ChatGPT-style tools.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as video and audio.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    Baidu is a market leader in China, said Dai.

    But the competition in this space “has just begun, and AI tech leaders like Alibaba … Huawei, JD Cloud, SenseTime, and Tencent all have chance to take the lead,” he noted.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    But Baidu has previously touted how ERNIE can outperform ChatGPT in some instances, saying its bot had scored higher marks than OpenAI’s on some academic exams.

    The Chinese company also announced Tuesday it had updated its suite of services to integrate the latest upgrades from ERNIE. Baidu’s popular search engine is now able to use the tool to produce more specific results, while its mobile mapping app can help users book services, such as taxis, according to Li.

    By doing so, “Baidu is also the first Chinese tech leader that has made substantial progress in modernizing the majority of its products” with an AI model, said Dai.


  • Snapchat users freak out over AI bot that had a mind of its own | CNN Business




    CNN
     — 

    Snapchat users were alarmed on Tuesday night when the platform’s artificial intelligence chatbot posted a live update to its profile and stopped responding to messages.

    The Snapchat My AI feature — which is powered by the viral AI chatbot tool ChatGPT — typically offers recommendations, answers questions and converses with users. But posting a live Story (a short video of what appeared to be a wall) for all Snapchat users to see was a new one: It’s a capability typically reserved for only its human users.

    The app’s fans were quick to share their concerns on social media. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling.” Another user wrote after the tool ignored his messages: “Even a robot ain’t got time for me.”

    Turns out, this wasn’t Snapchat working to make its My AI tool even more realistic. The company told CNN on Wednesday it was a glitch. “My AI experienced a temporary outage that’s now resolved,” a spokesperson said.

    Still, the strong reaction highlighted the fears many people have about the potential risks of artificial intelligence.

    Since launching in April, the tool has faced backlash from parents and from some Snapchat users, who have criticized it over privacy concerns, “creepy” exchanges and the inability to remove the feature from their chat feed without paying for a premium subscription.

    Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends. The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.

    While some may find value in the tool, the mixed reaction hinted at the challenges companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow.


  • Huawei wants to go all in on AI for the next decade | CNN Business





    Hong Kong CNN —

    Huawei has joined the list of companies that want to be all about artificial intelligence.

    For the first time in about 10 years, the Chinese tech and telecoms giant announced a new strategic direction on Wednesday, saying it would shift its focus to AI. Previously, the company had prioritized internet protocol and then cloud computing over two successive decade-long periods.

    Meng Wanzhou, Huawei’s rotating chairwoman and chief financial officer, made the announcement in Shanghai during a company event.

    “As artificial intelligence gains steam, and its impact on industry continues to grow, Huawei’s All Intelligence strategy is designed to help all industries make the most of new strategic opportunities,” the company said in a statement.

    Meng said in a speech that Huawei was “committed to building a solid computing backbone for China — and another option for the world.”

    “Our end goal is to help meet the diverse AI computing needs of different industries,” she added, without providing details.

    Huawei’s decision follows a similar move by fellow Chinese tech giant Alibaba (BABA), announced earlier this month, to prioritize AI.

    Other companies, such as Japan’s SoftBank, have also long declared an intent to focus more on the fast-moving technology, and more businesses have jumped on the bandwagon this year due to excitement about platforms such as GPT-4.

    Meng returned to China in September 2021 after spending nearly three years under house arrest in Canada as part of an extradition battle with the United States. She and Huawei had been charged with alleged bank fraud and evasion of economic sanctions against Iran.

    The executive, who is also the daughter of Huawei founder Ren Zhengfei, was able to leave after reaching an agreement with the US Department of Justice and ultimately having her charges dismissed.

    Meng began her role as the rotating chairperson of the company in April and is expected to stay in the position for six months.

    News of Huawei’s strategic update came the same day the company was mentioned in allegations lodged by China against the United States.

    In a statement posted Wednesday on Chinese social network WeChat, China’s Ministry of State Security accused Washington of infiltrating Huawei servers nearly 15 years ago.

    “With its powerful arsenal of cyberattacks, the United States intelligence services have carried out surveillance, theft of secrets and cyberattacks against many countries around the world, including China, in a variety of ways,” the ministry said.

    It alleged that the US National Security Agency (NSA), in particular, had “repeatedly conducted systematic and platform-based attacks on China in an attempt to steal China’s important data resources.”

    Huawei declined to comment on the allegations, while the NSA did not immediately respond to a request for comment outside regular US business hours.

    The claims are especially notable because US officials have long suspected the company of spying on the networks on which its technology operates, and have used those suspicions as grounds to restrict trade with the company. Huawei has vehemently denied the claims, saying it operates independently of the Chinese government.

    In 2019, Huawei was added to the US “entity list,” which restricts exports to select organizations without a US government license. The following year, the US government expanded on those curbs by seeking to cut Huawei off from chip suppliers that use US technology.

    In recent weeks, Huawei has added to US-China tensions again after launching a new smartphone that represents an apparent technological breakthrough.

    Huawei launched the Mate 60 Pro, its latest flagship device, last month, prompting a US investigation. Analysts who have examined the phone have said it includes a 5G chip, suggesting Huawei may have found a way to overcome American export controls.

    — Mengchen Zhang contributed to this report.


  • Taiwan’s Foxconn to build ‘AI factories’ with Nvidia | CNN Business




    Taipei CNN —

    Taiwan’s Foxconn says it plans to build artificial intelligence (AI) data factories with technology from American chip giant Nvidia, as the electronics maker ramps up efforts to become a major global player in electric car manufacturing.

    Foxconn Chairman Young Liu and Nvidia CEO Jensen Huang jointly announced the plans on Wednesday in Taipei. The duo said the new facilities using Nvidia’s chips and software will enable Foxconn to better utilize AI in its electric vehicles (EV).

    “We are at the beginning of a new computing revolution,” Huang said. “This is the beginning of a brand new way of doing software — using computers to write software that no humans can.”

    Large computing systems powered by advanced chips will be able to develop software platforms for the next generation of EVs by learning from everyday interactions, they said.

    “Foxconn is turning from a manufacturing service company into a platform solution company,” Liu said. “In three short years, Foxconn has displayed a remarkable range of high-end sedan, passenger crossover, SUV, compact pick-up, commercial bus and commercial van.”

    Best known as the assembler of Apple’s iPhones, Foxconn envisages a similar business model for EVs. It doesn’t sell the vehicles under its own brand. Instead, it will build them for clients in Taiwan and globally.

    In 2021, Foxconn unveiled three EV models, including two passenger cars and a bus, for the first time. They were followed by additional models last year and two new ones — Model N, a cargo van, and Model B, a compact SUV — during Foxconn’s tech day on Wednesday.

    Its electric buses started running in the southern Taiwanese city of Kaohsiung last year, while its first electric car, sold under the N7 brand by Taiwanese automaker Luxgen, is expected to begin deliveries on the island from January 2024.

    Foxconn has entered a competitive industry.

    Global sales of EVs, including purely battery-powered vehicles and hybrids, exceeded 10 million units last year, up 55% from 2021, according to the International Energy Agency. Nearly 14 million electric cars will be sold in 2023, the agency projects.

    Foxconn, which is officially known as the Hon Hai Technology Group, has been expanding its business by entering new industries such as EVs, digital health and robotics.

    Analysts say its entry into the EV space is a “logical diversification.”

    Smartphones are “a very saturated market already, and the room to grow in the … industry is getting [smaller],” said Kylie Huang, a Taipei-based analyst at Daiwa. “If they can really tap into the EV business, I do think that [they] could become influential in the next couple of years.”

    During last year’s tech day, Liu told reporters that the company hoped to build 5% of the world’s electric cars by 2025, and that it aims eventually to produce 40% to 45% of the EVs made around the world.

    But its foray into the industry hasn’t been entirely smooth.

    Last year, Foxconn bought a factory from Lordstown Motors in Ohio that used to make small cars for General Motors. That partnership ended in June, with Lordstown Motors filing for bankruptcy protection and announcing a lawsuit against Foxconn.

    Lordstown Motors accused Foxconn of “fraud” and failing to follow through on investment promises, while Foxconn dismissed the suit as “meritless” and criticized the company for making “false comments and malicious attacks.”

    Still, it’s clear Foxconn is leaning into its expanded ambitions, including hiring two new chief strategy officers for its EV and chips businesses.

    Chiang Shang-yi is a Taiwanese semiconductor industry veteran who helped TSMC become a global foundry powerhouse, while Jun Seki, a former vice chief operating officer at Nissan Motor, leads the EV unit.

    In May, Foxconn announced a new partnership with Infineon Technologies, a German company that specializes in automotive semiconductor chips, to establish a new research center in Taiwan.

    Bill Russo, founder of Shanghai-based consulting firm Automobility, said Foxconn has the advantage of coming from a consumer electronics background, which could allow it to come up with more innovative EV products compared with traditional automakers.

    “The biggest problem with legacy automakers is that they have so much sunk investment in a carryover platform, that they typically want to start not with a clean sheet of paper, but with a highly constrained set of requirements,” he said. “Those carryover technologies bring constraints to how you think about vehicles.”

    “When Tesla started, it started by saying, ‘I’m going to challenge all of that, I’m going to blow up the basic architecture of a car and simplify it greatly,’” he added.

    “I think that’s the advantage that a technology company has … And I think that’s the way Foxconn will come at this.”

    Hanna Ziady contributed to this report.
