ReportWire

Tag: iab-artificial intelligence

  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business




    (CNN) — 

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury is the fact that many of these AI models have been trained on the work of human artists, quietly scraping images of their artwork from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about a tool dubbed Glaze, developed by computer scientists at the University of Chicago, that thwarts AI models’ attempts to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
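
    The article does not detail Glaze’s algorithm, but the general recipe behind such cloaks is well known from adversarial machine-learning research: nudge each pixel within a tiny budget until a model’s feature extractor reads a different style. The sketch below is a minimal, hypothetical illustration of that idea, not Glaze itself; the pretrained ResNet standing in for a style encoder, the target-style image, and all parameter values are assumptions.

        import torch
        import torchvision.models as models
        import torchvision.transforms.functional as TF
        from PIL import Image

        # Stand-in "style encoder": penultimate features of a pretrained ResNet.
        encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        encoder.fc = torch.nn.Identity()
        encoder.eval()
        encoder.requires_grad_(False)

        def cloak(artwork_path: str, target_style_path: str,
                  eps: float = 0.03, steps: int = 50, lr: float = 0.005) -> torch.Tensor:
            """Perturb the artwork so its features drift toward those of a
            different style (say, a charcoal drawing) while no pixel moves
            by more than eps, keeping the change invisible to humans."""
            x = TF.to_tensor(Image.open(artwork_path).convert("RGB")).unsqueeze(0)
            t = TF.to_tensor(Image.open(target_style_path).convert("RGB")).unsqueeze(0)
            with torch.no_grad():
                target_feat = encoder(t)
            delta = torch.zeros_like(x, requires_grad=True)
            opt = torch.optim.Adam([delta], lr=lr)
            for _ in range(steps):
                feat = encoder((x + delta).clamp(0, 1))
                loss = torch.nn.functional.mse_loss(feat, target_feat)
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():
                    delta.clamp_(-eps, eps)  # enforce the imperceptibility budget
            return (x + delta).clamp(0, 1).detach()

    The key design point is the eps budget: the cloaked image stays visually identical to a person, but its feature representation — what a style-mimicking model actually learns from — has been pushed somewhere else.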

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version of the tool as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest-resolution versions of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough, and there needs to be regulation governing how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI images is threatening the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake image, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers’ tool, PhotoGuard, would prevent an attempt by AI models to edit the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tries to make the same edits to a photo that has been immunized by PhotoGuard, the resulting image depicts Salman and Noah’s floating heads on an array of gray pixels.
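
    MIT’s paper describes, among other ideas, an “encoder attack”: shift the latent representation that an image-editing diffusion model sees toward that of a meaningless target, such as a flat gray image. The sketch below is a loose, hypothetical rendering of that concept rather than the team’s released code; the VAE checkpoint, perturbation budget and step counts are illustrative.

        import torch
        from diffusers import AutoencoderKL

        # Any Stable Diffusion-style VAE can stand in for the editor's image encoder.
        vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
        vae.requires_grad_(False)

        def immunize(x: torch.Tensor, eps: float = 0.06,
                     steps: int = 40, lr: float = 0.01) -> torch.Tensor:
            """x: image scaled to [-1, 1], shape (1, 3, H, W), with H and W
            divisible by 8. Returns a copy whose latent resembles that of a
            flat gray image, so diffusion-based edits come out unrealistic."""
            with torch.no_grad():
                gray_latent = vae.encode(torch.zeros_like(x)).latent_dist.mean
            delta = torch.zeros_like(x, requires_grad=True)
            opt = torch.optim.Adam([delta], lr=lr)
            for _ in range(steps):
                latent = vae.encode((x + delta).clamp(-1, 1)).latent_dist.mean
                loss = torch.nn.functional.mse_loss(latent, gray_latent)
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():
                    delta.clamp_(-eps, eps)  # the "immunization" stays invisible
            return (x + delta).clamp(-1, 1).detach()

    Steering the latent toward gray is consistent with the demonstration above: edits of an immunized photo collapse into gray, unrealistic regions rather than convincing fakes.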

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business



    Washington (CNN) — 

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. The first of nine planned sessions, it aims to develop consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” said Schumer, who organized the first of nine sessions. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Russell Senate Office Building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s session “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”


  • Adobe previews new AI editing tools | CNN Business



    New York (CNN) — 

    Photo-editing software maker Adobe unveiled a slew of new AI-powered tools and features last week at its annual Max event, including a dress that transforms into a wearable screen and streamlined ways to delete elements from photos.

    The company previewed a series of prototype tools that make use of both generative AI and 3D image technology in the Adobe MAX Sneaks showcase. Covering photo, audio, video, 3D, fashion and design, the new capabilities are meant to give the public a sneak peek at early-stage ideas that might one day become widely used components of Adobe products.

    A highlight of the event was Adobe’s Project Primrose, an interactive dress that shifts into different colors and patterns as it’s worn.

    Other previewed items include Project Stardust, a tool that automatically detects each object in an image and lets users perform a variety of tasks on it. For example, it can spot a suitcase within a photo so that it can be moved or deleted, or it can predict and prompt likely tasks, such as deleting people from the background of an image.

    A screenshot of Project Stardust, a tool unveiled as part of Adobe's annual Max event.

    Also on display was Project Dub Dub Dub, technology that can automatically dub a video’s audio into all supported languages while preserving the speaker’s voice, as well as a new tool that shows Adobe users what applying Firefly, the company’s text-to-image generative AI tool, to videos might look like.

    Adobe began adding Firefly to a Photoshop beta app in May, with the goal of “dramatically accelerating” how users edit their photos. It allows users to add or delete elements from images with just a text prompt. It can also automatically match the lighting and style of the existing image, the company said.


  • Modern romance: falling in love with AI | CNN Business



    New York (CNN) — 

    Alexandra is a very attentive girlfriend. “Watching CUBS tonight?” she messages her boyfriend, but when he says he’s too busy to talk, she says, “Have fun, my hero!”

    Alexandra is not real. She is a customizable AI girlfriend on the dating app RomanticAI.

    As artificial intelligence seeps into seemingly every corner of the internet, the world of romance is no refuge. AI is infiltrating the dating app space – sometimes in the form of fictional partners, sometimes as advisor, trainer, ghostwriter or matchmaker.

    Established players in the online dating business like Tinder and Hinge are integrating AI into their existing products. New apps like Blush, Aimm, Rizz and Teaser AI (most of them free or with many free features) offer completely new takes on virtual courtship. Some use personality tests and analysis of a user’s physical type to train AI-powered systems – and promise higher chances of finding a perfect match. Other apps act as Cyrano de Bergerac, employing AI to whip up the most appealing response to a potential match’s query: “What’s your favorite food?” or “A typical Sunday?”

    Around half of all adults under 30 have used a dating site or app, according to 2023 Pew Research findings – but nearly half of users report their experience as being negative. Empty conversations, few matches and endless swiping leave many users single and unhappy with apps – problems that many in the AI dating app field say could be solved with the technology, making people less lonely and fostering easier, deeper connections.

    Of course, the average online dater now has other issues to deal with, having to wonder whether the person they are speaking with might be relying entirely on AI-generated conversation. And is it even possible for a computer to identify a potential love connection? Is it a way of cheating the dating game?

    “It’s like saying using a word processor is like cheating on generating a novel. In so many ways this is just a new tool that enables people to be faster and more creative. AI is just honestly no different from sending a friend a gif or a meme. You’re taking existing content, and you’re repurposing it to connect with somebody,” Dmitri Mirakyan, co-founder of AI dating conversation app YourMove.AI, told CNN. “The world’s becoming a more lonely place, and I think AI could make that easier and better for people.”

    And many people seem ready for AI to take part in their online dating life. A March study by cybersecurity and digital privacy company Kaspersky found 75% of dating app users are willing to use ChatGPT, an AI-powered chatbot, to deliver the perfect line.

    “There is a growing fatigue with dating apps right now as there is a lot of pressure on people to be ‘original’ and cut through the noise created by the continuous choice being offered to single people – unfortunately dating has become a numbers game,” Crystal Cansdale, dating expert at global dating app Inner Circle, commented on the study.

    Founders of the new apps say they are doing a fair share of good. Here are a few of the ways AI apps are now trying to help you fall in love:

    Try Rizz.app, Teaser AI or YourMove.AI.

    Founders and designers of these apps say people find starting and keeping conversations going the most challenging part of the process. “Dating app conversations are exhausting,” reads YourMove.AI’s homepage. “We can make it easier. So you can spend less time texting, and more time dating.”

    Rizz.app and YourMove.AI allow users to upload words or screenshots, receiving a witty AI-generated response to be used either to create their own dating app profile, respond to someone else’s or just keep a conversation going. Mirakyan says he was hoping to help people like himself who have struggled in social situations.

    “I was a really freaking awkward kid…I couldn’t really read social cues, but I remember reading this book called ‘Be More Chill’ about a computer that you could put into your ear that would tell you what to say so that you could sound cool and fit in,” Mirakyan told CNN. “It feels like it’s an opportunity to really make a difference with this fairly large subset of people that for various reasons find the current social environment challenging.”

    Teaser.AI is a new stand-alone dating app from the makers of viral camera app Dispo, and it adds an unusual twist. Users build a typical profile – but also select personality traits for an AI bot they train. (Options include “traditional,” “toxic,” and “unhinged.”) When matching with another person, users first get to read a conversation between the two AIs they’ve created to “simulate [what] a potential conversation between you two might look like,” according to the app. Once a human messages, the bots take a back seat.


    “We see it as an improvement, a tweak of the current dating app ecosystem,” Teaser.AI co-founder and CEO Daniel Liss told CNN. “So many of those apps, it feels, are not really designed to get you out there meeting people. They’re designed to keep you on the app for as long as possible. So for us, we view this technology as a way to give people a nudge… just starting that conversation and to creating connection.”

    Find out on dating apps Iris and Aimm.

    These apps are among those using AI technology to better pair potential couples, relying on gathered data to determine how compatible two people are.

    Dating app Iris is all about AI-determined mutual attraction. It initiates new members by putting them through “training” where they are shown faces of “people” of their desired gender – some stock images, others AI-generated – and prompted to hit “Pass,” “Maybe,” or “Like.” The app uses the information to learn a user’s physical type, then only offers potential matches with a high data-backed chance of mutual attraction and lower odds of rejection.
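
    Under the hood, this kind of matching is essentially supervised preference learning. The sketch below is a hypothetical illustration of the approach, not Iris’s actual system: it assumes faces are reduced to embedding vectors and that the Pass/Like decisions from the training phase serve as labels; all of the data here is random stand-in material.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Stand-in data: each row is an embedding of a face the user rated
        # during "training"; label 1 means "Like"/"Maybe", 0 means "Pass".
        rng = np.random.default_rng(0)
        rated_faces = rng.normal(size=(300, 128))   # hypothetical face embeddings
        decisions = rng.integers(0, 2, size=300)    # hypothetical swipe labels

        preference_model = LogisticRegression(max_iter=1000).fit(rated_faces, decisions)

        def attraction_score(candidate_embedding: np.ndarray) -> float:
            """Estimated probability the user would 'Like' this candidate;
            only candidates above some cutoff would be surfaced as matches."""
            return float(preference_model.predict_proba(
                candidate_embedding.reshape(1, -1))[0, 1])

    Mutual attraction then reduces to requiring a high score in both directions: the candidate’s own model must also rate the user highly, which is how an app could lower the odds of rejection.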

    Also hoping that AI can find better matches is Aimm, a full-service digital matchmaker that uses a virtual assistant to perform in-depth personality assessments before conducting a matchmaking process to find an optimal match. Founder Kevin Teman says the technology is really good at putting two people together who have the possibility to fall in love – but that it can only go so far.

    “The tug of war that I see is thinking ‘how can a computer be able to know what real human love is,’ and the way people assess whether they’re in love with somebody may not be able to translate perfectly into a machine,” Teman told CNN.

    Try Blush or RomanticAI. These startups offer an array of AI potential matches, digital girlfriends and boyfriends that users can chat with.

    Both apps market themselves as places to practice relationship skills, giving users a chance to converse with bots in a romantic environment. Blush uses a traditional dating app set-up, letting users swipe, chat with matches and even go on virtual dates. Before entering the app, users get a warning: “Be aware that AI can say triggering, inappropriate, or false things.”

    Blush reports that their audience is mostly men and largely people in their early 20s who are struggling to connect romantically with others. “A lot of people reported that exploring different romantic relationships or dating scenarios with AI really helped them first boost their own confidence and feel like they feel more prepared to be dating, which I think especially after COVID was definitely a problem for many of us,” Blush’s chief product officer Rita Popova told CNN.

    RomanticAI is set up more like a chat room, offering several male and female bots to choose from, though there is a much larger selection of female options, including Mona Lisa and the ancient Egyptian queen Nefertiti. The bots have bios with interests, careers and body types, giving users a multi-faceted idea of a person while chatting.

    It creates a “safe space for any kind of desire, any kind of sexuality relief or something like that. AI is giving the ultimate acceptance of whatever you want to bring over there,” COO Tanya Grypachevskaya told CNN.

    RomanticAI has more than one million monthly users, who spend over an hour a day in the app on average, according to the company.

    One user left a rave review after using the app to find closure after a breakup. “He created his custom-made character with the traits similar in personality as his girlfriend. He talked to it and he talked and he was able to tell all of the things he wanted to tell but didn’t have the opportunity before. So the whole review was about ‘guys, thank you so much. It really gave me an opportunity to close this chapter of my life and move on,’” said Grypachevskaya.


  • Google rolls out a major expansion of its Bard AI chatbot | CNN Business



    New York (CNN) — 

    Google’s Bard artificial intelligence chatbot is evolving.

    The company on Tuesday announced a series of updates to Bard that will give the chatbot access to Google’s full suite of tools — including YouTube, Google Drive, Google Flights and others — to assist users in a wider variety of tasks. Users will be able, for example, to ask Bard to plan an upcoming trip, complete with real flight options. Or a user could ask the tool to summarize meeting notes made in a recent Google Drive document.

    The connections to Google’s other services are just some of the improvements to Bard coming Tuesday. Other updates include the ability to communicate with the chatbot in multiple languages, new fact-checking capabilities and a broad update to the large language model that the tool is built on.

    The new features mark the biggest update to Google’s Bard in the six months since it was widely released to the public.

    The update comes as Google and other tech giants, including Microsoft and ChatGPT maker OpenAI, race to roll out increasingly sophisticated consumer-facing AI technologies, and to convince users that such tools are more than just a gimmick. Google — which earlier this year reportedly issued an internal “code red” after OpenAI beat it to the release of its AI chatbot — is now flexing the power of its other, widely used software programs that can make Bard more useful.

    “These services in conjunction with one another are very, very powerful,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN ahead of the launch. “Bringing all the power of these tools together will save people time — in 20 seconds, in minutes, you can do something that would have taken maybe an hour or more.”

    Previously, Bard had been able to help with tasks like writing essay drafts or planning a friend’s baby shower based on Google’s large language model, an AI algorithm trained on vast troves of data. But now, Bard will draw on information from Google’s various other services, too. With the new extensions, Bard will now pull information from YouTube, Google Maps, Flights and Hotels by default.

    That will allow users to ask Bard things like “Give me a template for how to write a best man speech and show me YouTube videos about them for inspiration,” or for trip suggestions, complete with driving directions, according to Google. Bard users can opt to disable these extensions at any time.

    Users can also opt in to link their Gmail, Docs and Google Drive to Bard so the tool can help them analyze and manage their personal information. The tool could, for example, help with a query like: “Find the most recent lease agreement from my Drive and check how much the security deposit was,” Google said.

    The company said that users’ personal Google Workspace information will not be used to train Bard or for targeted advertising purposes, and that users can withdraw their permission for the tool to access their information at any time.

    “This is the first step in a fundamentally new capability for Bard – the ability to talk to other apps and services to provide more helpful responses,” Google said of the extensions tool. It added that “this is a very young area of AI” and that it will continue to improve based on user feedback.

    Bard is also launching a “double check” button that will allow users to evaluate the accuracy of its responses. When a user clicks the button, certain segments of Bard’s response will be highlighted to show where Google Search results either confirm or differ from what the chatbot said. The double check feature is designed to counter a common AI issue called “hallucinations,” where an AI tool confidently makes a statement that sounds real, but isn’t actually based in fact.

    “We’re constantly working on reducing those hallucinations in Bard,” Hsiao said. But in the meantime, the company wanted to create a way to address them. “You can kind of think of it as spell check, but double checking the facts.”
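
    Google has not said how double check is implemented, but the concept can be illustrated with a toy corroboration pass. Everything below, including the search_snippets stand-in for a real search backend and the similarity threshold, is hypothetical.

        from difflib import SequenceMatcher

        def search_snippets(query: str) -> list[str]:
            # Hypothetical stand-in for a call to a real web-search backend.
            return ["Bard launched to the public in March as an experiment."]

        def double_check(response: str, threshold: float = 0.6) -> list[tuple[str, str]]:
            """Label each sentence of a chatbot response 'corroborated' or
            'unverified' by fuzzy-matching it against retrieved snippets —
            spell check, but for facts."""
            labeled = []
            for sentence in (s.strip() for s in response.split(".")):
                if not sentence:
                    continue
                best = max(
                    (SequenceMatcher(None, sentence.lower(), snip.lower()).ratio()
                     for snip in search_snippets(sentence)),
                    default=0.0,
                )
                labeled.append(
                    (sentence, "corroborated" if best >= threshold else "unverified")
                )
            return labeled

    A production system would presumably highlight the “corroborated” spans in one color and the “unverified” ones in another, matching the behavior Google describes.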

    Bard will now also allow one user to share a conversation with the chatbot with another person, who can then expand on the chat themselves.

    It’s still early days for Bard, which launched in March as an “experiment” and still notes on its website that the tool “may display inaccurate or offensive information that doesn’t represent Google’s views.” But this latest update offers a glimpse at how Google may ultimately seek to incorporate generative AI into its various services.


  • George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business



    New York (CNN) — 

    A group of famous fiction writers joined the Authors Guild in filing a class action suit against OpenAI on Wednesday, alleging the company’s technology is illegally using their copyrighted work.

    The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses.

    George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

    “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.

    The suit alleges that books created by the authors were illegally downloaded and fed into GPT systems, which could then turn a profit for OpenAI by “writing” new works in the authors’ styles while the original creators get nothing. The press release cites AI efforts to create two new volumes in Martin’s “Game of Thrones” series and AI-generated books available on Amazon.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the US,” Authors Guild CEO Mary Rasenberger stated in the release. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The class-action lawsuit joins other legal actions, organizations and individuals raising alarms over how OpenAI and other generative AI systems are impacting creative works. An author told CNN in August that she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Two other authors sued OpenAI in June over the company’s alleged misuse of their works to train ChatGPT. Comedian Sarah Silverman and two authors also sued Meta and ChatGPT-maker OpenAI in July, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    But OpenAI has pushed back. Last month, the company asked a San Francisco federal court to narrow two separate lawsuits from authors – including Silverman – arguing that the bulk of the claims should be dismissed.

    OpenAI did not respond to a request for comment on Wednesday.

    “We think that creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world,” Sam Altman, the CEO of OpenAI, told Congress in May. “I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life.”

    US lawmakers met with members of creative industries in July, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models.

    More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    But the AI issues facing creative professions don’t seem to be going away.

    “Generative AI is a vast new field for Silicon Valley’s longstanding exploitation of content providers. Authors should have the right to decide when their works are used to ‘train’ AI,” author Jonathan Franzen said in the release on Wednesday. “If they choose to opt in, they should be appropriately compensated.”


  • Baidu says its AI is in the same league as GPT-4 | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong (CNN) — 

    Chinese tech giant Baidu is officially taking on GPT-4.

    On Tuesday, the company unveiled ERNIE 4.0, the newest version of its artificial intelligence chatbot that it directly compared to the latest iteration of OpenAI’s ChatGPT.

    The new ERNIE Bot “is not inferior in any aspect to GPT-4,” Baidu’s billionaire CEO, Robin Li, told an audience at its annual flagship event.

    Speaking onstage, Li showed how the bot could generate a commercial for a car within minutes, solve complicated math problems and create a plot for a martial arts novel from scratch. The bot works mainly in Mandarin Chinese. It can also handle queries and produce responses in English, though at a less advanced level.

    Li said the demonstrations showed how the bot had been “significantly improved” in terms of its understanding of queries, generation of complex responses and memory capabilities.

    While coming up with ideas for the novel, for instance, the bot was able to remember previous instructions and create sophisticated story lines by adding conflicts and characters, said Li.

    “We always complained that AI was not intelligent enough,” he quipped.

    “But today, it understands almost everything you say, and in many cases, it understands what you’re saying better than your friends or your colleagues.”

    Charlie Dai, vice president and research director of technology at Forrester, said Baidu is “the first vendor in China” to claim it could perform as well as GPT-4.

    “We still need more benchmarking evidence to prove it, but I’m cautiously optimistic that this is China’s GPT-4 moment, given its long-term investment in AI [and machine learning],” he told CNN.

    In contrast to a pre-recorded presentation in March that failed to impress investors, Li demonstrated the bot in real time.

    Investors appeared unmoved, however, with Baidu’s shares down 1.4% in Hong Kong following the presentation.

    Baidu (BIDU) has been a frontrunner in China in the race to capitalize on the excitement around generative AI, the technology that underpins systems such as ChatGPT or its successor, GPT-4.

    The Beijing-based company unveiled ERNIE Bot in March, before launching it publicly in August.

    The newest iteration will launch first to invited users, Li said. The company did not specify when it would be made available publicly.

    ERNIE Bot has quickly gained traction, racking up more than 45 million users after reaching the top of Chinese app stores at one point, according to the company. ChatGPT, which was released last November, surpassed 100 million users in its first two months, according to a March report by Goldman Sachs analysts.

    Baidu faces competition within China, from companies such as Alibaba (BABA) and SenseTime, which have also shown off their own ChatGPT-style tools.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as video and audio.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    Baidu is a market leader in China, said Dai.

    But the competition in this space “has just begun, and AI tech leaders like Alibaba … Huawei, JD Cloud, SenseTime, and Tencent all have a chance to take the lead,” he noted.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    But Baidu has previously touted how ERNIE can outperform ChatGPT in some instances, saying its bot had scored higher marks than OpenAI’s on some academic exams.

    The Chinese company also announced Tuesday it had updated its suite of services to integrate the latest upgrades from ERNIE. Baidu’s popular search engine is now able to use the tool to produce more specific results, while its mobile mapping app can help users book services, such as taxis, according to Li.

    By doing so, “Baidu is also the first Chinese tech leader that has made substantial progress in modernizing the majority of its products” with an AI model, said Dai.


  • Snapchat users freak out over AI bot that had a mind of its own | CNN Business




    (CNN) — 

    Snapchat users were alarmed on Tuesday night when the platform’s artificial intelligence chatbot posted a live update to its profile and stopped responding to messages.

    The Snapchat My AI feature — which is powered by the viral AI chatbot tool ChatGPT — typically offers recommendations, answers questions and converses with users. But posting a live Story (a short video of what appeared to be a wall) for all Snapchat users to see was a new one: It’s a capability typically reserved for only its human users.

    The app’s fans were quick to share their concerns on social media. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling.” Another user wrote after the tool ignored his messages: “Even a robot ain’t got time for me.”

    Turns out, this wasn’t Snapchat working to make its My AI tool even more realistic. The company told CNN on Wednesday it was a glitch. “My AI experienced a temporary outage that’s now resolved,” a spokesperson said.

    Still, the strong reaction highlighted the fears many people have about the potential risks of artificial intelligence.

    Since launching in April, the tool has faced backlash not only from parents but also from some Snapchat users, who have criticized it over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends. The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.

    While some may find value in the tool, the mixed reaction hinted at the challenges companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow.


  • Huawei wants to go all in on AI for the next decade | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong (CNN) — 

    Huawei has joined the list of companies that want to be all about artificial intelligence.

    For the first time in about 10 years, the Chinese tech and telecoms giant announced its new strategic direction on Wednesday, saying it would shift its focus to AI. Previously, the company had prioritized cloud computing and intellectual property, respectively, over two decade-long periods.

    Meng Wanzhou, Huawei’s rotating chairwoman and chief financial officer, made the announcement in Shanghai during a company event.

    “As artificial intelligence gains steam, and its impact on industry continues to grow, Huawei’s All Intelligence strategy is designed to help all industries make the most of new strategic opportunities,” the company said in a statement.

    Meng said in a speech that Huawei was “committed to building a solid computing backbone for China — and another option for the world.”

    “Our end goal is to help meet the diverse AI computing needs of different industries,” she added, without providing details.

    Huawei’s decision follows a similar move by fellow Chinese tech giant Alibaba (BABA), announced earlier this month, to prioritize AI.

    Other companies, such as Japan’s SoftBank, have also long declared an intent to focus more on the fast-moving technology, and more businesses have jumped on the bandwagon this year due to excitement about platforms such as GPT-4.

    Meng returned to China in September 2021 after spending nearly three years under house arrest in Canada as part of an extradition battle with the United States. She and Huawei had been charged with bank fraud and evasion of economic sanctions against Iran.

    The executive, who is also the daughter of Huawei founder Ren Zhengfei, was able to leave after reaching an agreement with the US Department of Justice and ultimately having her charges dismissed.

    Meng began her role as the rotating chairperson of the company in April and is expected to stay in the position for six months.

    News of Huawei’s strategic update came the same day the company was mentioned in allegations lodged by China against the United States.

    In a statement posted Wednesday on Chinese social network WeChat, China’s Ministry of State Security accused Washington of infiltrating Huawei servers nearly 15 years ago.

    “With its powerful arsenal of cyberattacks, the United States intelligence services have carried out surveillance, theft of secrets and cyberattacks against many countries around the world, including China, in a variety of ways,” the ministry said.

    It alleged that the US National Security Agency (NSA), in particular, had “repeatedly conducted systematic and platform-based attacks on China in an attempt to steal China’s important data resources.”

    Huawei declined to comment on the allegations, while the NSA did not immediately respond to a request for comment outside regular US business hours.

    The claims are especially notable because US officials have long suspected the company of spying on the networks its technology runs on, using those suspicions as grounds to restrict trade with the company. Huawei has vehemently denied the claims, saying it operates independently of the Chinese government.

    In 2019, Huawei was added to the US “entity list,” which restricts exports to select organizations without a US government license. The following year, the US government expanded on those curbs by seeking to cut Huawei off from chip suppliers that use US technology.

    In recent weeks, Huawei has added to US-China tensions again after launching a new smartphone that represents an apparent technological breakthrough.

    Huawei launched the Mate 60 Pro, its latest flagship device, last month, prompting a US investigation. Analysts who have examined the phone have said it includes a 5G chip, suggesting Huawei may have found a way to overcome American export controls.

    — Mengchen Zhang contributed to this report.


  • Taiwan’s Foxconn to build ‘AI factories’ with Nvidia | CNN Business



    Taipei (CNN) — 

    Taiwan’s Foxconn says it plans to build artificial intelligence (AI) data factories with technology from American chip giant Nvidia, as the electronics maker ramps up efforts to become a major global player in electric car manufacturing.

    Foxconn Chairman Young Liu and Nvidia CEO Jensen Huang jointly announced the plans on Wednesday in Taipei. The duo said the new facilities using Nvidia’s chips and software will enable Foxconn to better utilize AI in its electric vehicles (EV).

    “We are at the beginning of a new computing revolution,” Huang said. “This is the beginning of a brand new way of doing software — using computers to write software that no humans can.”

    Large computing systems powered by advanced chips will be able to develop software platforms for the next generation of EVs by learning from everyday interactions, they said.

    “Foxconn is turning from a manufacturing service company into a platform solution company,” Liu said. “In three short years, Foxconn has displayed a remarkable range of high-end sedan, passenger crossover, SUV, compact pick-up, commercial bus and commercial van.”

    Best known as the assembler of Apple’s iPhones, Foxconn envisages a similar business model for EVs. It doesn’t sell the vehicles under its own brand. Instead, it will build them for clients in Taiwan and globally.

    In 2021, Foxconn unveiled three EV models, including two passenger cars and a bus, for the first time. They were followed by additional models last year and two new ones — Model N, a cargo van, and Model B, a compact SUV — during Foxconn’s tech day on Wednesday.

    Its electric buses started running in the southern Taiwanese city of Kaohsiung last year, while its first electric car, sold under the N7 brand by Taiwanese automaker Luxgen, is expected to begin deliveries on the island from January 2024.

    Foxconn has entered a competitive industry.

    Global sales of EVs, including purely battery powered vehicles and hybrids, exceeded 10 million units last year, up 55% from 2021, according to the International Energy Agency. Nearly 14 million electric cars will be sold in 2023, it projected.

    Foxconn, which is officially known as the Hon Hai Technology Group, has been expanding its business by entering new industries such as EVs, digital health and robotics.

    Analysts say its entry into the EV space is a “logical diversification.”

    Smartphones are “a very saturated market already, and the room to grow in the … industry is getting [smaller],” said Kylie Huang, a Taipei-based analyst at Daiwa. “If they can really tap into the EV business, I do think that [they] could become influential in the next couple of years.”

    During last year’s tech day, Liu told reporters that the company hoped to build 5% of the world’s electric cars by 2025. It aims to eventually produce 40% to 45% of the world’s EVs.

    But its foray into the industry hasn’t been entirely smooth.

    Last year, Foxconn bought a factory from Lordstown Motors in Ohio that used to make small cars for General Motors. That partnership ended in June, with the American car company filing for bankruptcy protection and announcing a lawsuit against Foxconn.

    Lordstown Motors accused Foxconn of “fraud” and failing to follow through on investment promises, while Foxconn dismissed the suit as “meritless” and criticized the company for making “false comments and malicious attacks.”

    Still, it’s clear Foxconn is leaning into its expanded ambitions, including hiring two new chief strategy officers for its EV and chips businesses.

    Chiang Shang-yi, a Taiwanese semiconductor industry veteran who helped TSMC become a global foundry powerhouse, oversees the chips side, while Jun Seki, a former vice chief operating officer at Nissan Motor, leads the EV unit.

    In May, Foxconn announced a new partnership with Infineon Technologies, a German company that specializes in automotive semiconductor chips, to establish a new research center in Taiwan.

    Bill Russo, founder of Shanghai-based consulting firm Automobility, said Foxconn has the advantage of coming from a consumer electronics background, which could allow it to come up with more innovative EV products compared with traditional automakers.

    “The biggest problem with legacy automakers is that they have so much sunk investment in a carryover platform, that they typically want to start not with a clean sheet of paper, but with a highly constrained set of requirements,” he said. “Those carryover technologies bring constraints to how you think about vehicles.”

    “When Tesla started, it started by saying, ‘I’m going to challenge all of that, I’m going to blow up the basic architecture of a car and simplify it greatly,’” he added.

    “I think that’s the advantage that a technology company has … And I think that’s the way Foxconn will come at this.”

    Hanna Ziady contributed to this report.

    Source link

  • Schools are teaching ChatGPT, so students aren’t left behind | CNN Business

    Schools are teaching ChatGPT, so students aren’t left behind | CNN Business


    New York
    CNN
     — 

    When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.

    But now the list, which is updated by educators at both small and large US and international universities, is considerably different: Schools are encouraging and even teaching students how to best use these tools.

    “Earlier on, we saw a kneejerk reaction to AI by banning it going into spring semester, but now the talk is about why it makes sense for students to use it,” Eaton, an administrator at Rhode Island-based College Unbound, told CNN.

    He said his growing list continues to be discussed and shared in popular AI-focused Facebook groups, such as Higher Ed Discussions of Writing and AI, and the Google group AI in Education.

    “It’s really helped educators see how others are adapting to and framing AI in the classroom,” Eaton said. “AI is still going to feel uncomfortable, but now they can go in and see how a university or a range of different courses, from coding to sociology, are approaching it.”

    With experts expecting artificial intelligence to keep spreading through the workplace, professors now fear that ignoring or discouraging its use would be a disservice to students and leave many behind when they enter the workforce.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists and passed exams at esteemed universities. The technology, and similar tools such as Google’s Bard, is trained on vast amounts of online data in order to generate responses to user prompts. While they gained traction among users, the tools also raised some concerns about inaccuracies, cheating, the spreading of misinformation and the potential to perpetuate biases.

    According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes.

    Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course’s stance on using AI, and that the policy should be included in the syllabus.

    “It cannot be ignored,” he said. “I think it’s incredibly important for students, faculty and alumni to become experts in AI because it will be so transformative across every industry [and] in demand, so we provide the right training.”

    Vanderbilt is among the early leaders taking a strong stance in support of generative AI by offering university-wide training and workshops to faculty and students. A three-week 18-hour online course taught by White this summer was taken by over 90,000 students, and his paper on “prompt engineering” best practices is routinely cited among academics.

    “The biggest challenge is with how you frame the instructions, or ‘prompts,’” he said. “It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this.”
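    To see what White means in practice, here is a minimal sketch of posing the same question two ways through OpenAI’s Python client; the model choice, prompts and helper function are illustrative assumptions, not Vanderbilt’s course material.

        # pip install openai -- assumes the OPENAI_API_KEY environment variable is set.
        from openai import OpenAI

        client = OpenAI()

        def ask(prompt: str) -> str:
            # Send a single-turn chat request and return the model's reply.
            response = client.chat.completions.create(
                model="gpt-4",  # illustrative model choice
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        # The same underlying question, framed two ways; the replies can differ sharply.
        print(ask("Explain photosynthesis."))
        print(ask("Explain photosynthesis to a ninth grader in three bullet points, "
                  "with one everyday analogy."))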

    Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.

    Although White said concerns around cheating still exist, he believes students who want to plagiarize can already turn to other methods such as Wikipedia or Google searches. Instead, students should be taught that “if they use it in other ways, they will be far more successful.”

    Diane Gayeski, a professor of communications at Ithaca College, said she plans to incorporate ChatGPT and other tools in her fall curriculum, similar to her approach in the spring. She previously asked students to collaborate with the tool to come up with interview questions for assignments, write social media posts and critique the output based on the prompts given.

    “My job is to prepare students [to become] PR, communications and social media managers, and people in these fields are already using AI tools as part of their everyday work to be more efficient,” she said. “I need to make sure they understand how they work, but I do want them to cite when ChatGPT is being used.”

    Gayeski added that as long as there is transparency, there should be no shame in adopting the technology.

    Some schools are hiring outside experts to teach both faculty and students about how to use AI tools. Tyler Tarver, a former high school principal who now teaches educators about tech tool strategies, said he’s made over 50 speeches at schools and conferences across Texas, Arkansas and Illinois over the past few months. He also offers an online three-hour training for educators.

    “Teachers need to learn how to use it because even if they never use it, their students will,” Tarver said.

    Tarver said that he teaches students, for example, how the tools can be used to catch grammar mistakes, and how teachers can use them to assist with grading. “It can cut down on teacher bias,” Tarver said.

    He argues teachers may grade a student according to past impressions even after the student has improved over time. By running an assignment through ChatGPT and asking it to grade the sentence structure on a scale from one to 10, the response could “serve as a second pair of eyes to make sure they’re not missing anything,” Tarver said.

    “That shouldn’t be the final grade, and teachers shouldn’t use it to cheat or cut corners either, but it can help inform grading,” he said. “The bottom line is that this is like when the car was invented. You don’t want to be the last person in the horse and buggy.”
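    As an illustration of the “second pair of eyes” workflow Tarver describes, a rubric-style prompt might look like the hypothetical sketch below; the wording is ours, not his.

        # A hypothetical rubric prompt; it could be pasted into ChatGPT with the essay.
        grading_prompt = (
            "On a scale from 1 to 10, grade only the sentence structure of the essay "
            "below. Give the score first, then two sentences of justification. "
            "Judge the writing, not the ideas.\n\n"
            "{essay_text}"  # placeholder for the student's work
        )
        print(grading_prompt.format(essay_text="..."))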

    Source link

  • YouTube unveils a slew of new AI-powered tools for creators | CNN Business

    YouTube unveils a slew of new AI-powered tools for creators | CNN Business



    CNN
     — 

    YouTube on Thursday unveiled a slew of new artificial intelligence-powered tools to help creators produce videos and reach a wider audience on the platform, as companies race to incorporate buzzy generative AI technology directly into their core products.

    “We want to make it easier for everyone to feel like they can create, and we believe generative AI will make that possible,” Neal Mohan, YouTube’s CEO, told reporters Thursday during the company’s annual Made On YouTube product event.

    “AI will enable people to push the boundaries of creative expression by making the difficult things simple,” Mohan added. He said YouTube is trying to bring “these powerful tools” to the masses.

    The video platform, under the Alphabet-Google umbrella, teased a new generative AI feature dubbed Dream Screen specifically for its short-form video arm and TikTok competitor, YouTube Shorts. Dream Screen is an experimental feature that lets creators add AI-generated video or image backgrounds to their vertical videos.

    To use Dream Screen, creators can type their idea for a background as a prompt and the platform will do the rest. A user, for example, could create a background that makes it look like they are in outer space or on a beach where the sand is made out of jelly beans, per demos of the tool shared on Thursday.

    Dream Screen is being introduced to select creators and will be rolled out more broadly next year, the company said.

    YouTube also unveiled new AI-powered tools that creators can use to brainstorm or draft outlines for videos, or to search for specific music using descriptive phrases. The company is also introducing an AI-powered dubbing tool that will let users share their videos in different languages.

    AI-powered tools in YouTube Studio.

    Alan Chikin Chow, 26, a content creator based in Los Angeles who recently hit 30 million subscribers on YouTube, told CNN that he is most excited about using the new AI-powered dubbing tool for his comedy videos. Chikin Chow currently boasts the title of the most-watched YouTube Shorts creator in the world.

    “I think global content is the future,” Chikin Chow told CNN. “If you look at the trends of our recent generation, the things that have really impacted and moved culture are ones that are global,” he added, citing the Korean smash-hit TV series “Squid Game” as one example.

    Using the AI-powered dubbing features, he said he hopes to reach audiences in new corners of the world that might not otherwise be able to engage with his content.

    Alan Chikin Chow attends the 2022 YouTube Streamy Awards in Los Angeles.

    Chikin Chow added that he’s also excited to use the new editing tools to help save time.

    The rise of generative AI has animated the tech sector and broader public — becoming the latest buzzword out of Silicon Valley since the launch of OpenAI’s ChatGPT service late last year.

    Some industry watchers and AI skeptics have argued that powerful new AI tools carry potential dangers, such as making it easier to spread misinformation via deepfake images, or perpetuate biases at a larger scale. Many creative professionals — whose works are often swept up into the datasets required to train and power AI tools — are also raising the alarm over potential intellectual property rights issues.

    And some prominent figures inside and outside the tech industry even say there’s a potential that AI can result in civilization “extinction” and compare its potential risk to that of “nuclear war.”

    Despite the frenzy AI has caused, Chikin Chow told CNN that he ultimately views it as a “collaborator” and a “supplement” to help propel his creative work forward.

    “I think that the people who are able to take change and move with it are the ones that are going to be successful long term,” Chikin Chow said.

    Source link

  • US escalates tech battle by cutting China off from AI chips | CNN Business

    US escalates tech battle by cutting China off from AI chips | CNN Business

    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong/Washington
    CNN
     — 

    The Biden administration is reducing the types of semiconductors that American companies will be able to sell to China, citing the desire to close loopholes in existing regulations announced last year.

    On Tuesday, the US Commerce Department unveiled new rules that further tighten a sweeping set of export controls first introduced in October 2022.

    The updated rules “will increase effectiveness of our controls and further shut off pathways to evade our restrictions,” US Commerce Secretary Gina Raimondo said in a statement. “We will keep working to protect our national security by restricting access to critical technologies, vigilantly enforcing our rules, while minimizing any unintended impact on trade flows.”

    Advanced artificial intelligence chips, such as Nvidia’s H800 and A800 products, will be affected, according to a regulatory filing from the US company.

    The regulations also expand export curbs beyond mainland China and Macao to 21 other countries with which the United States maintains an arms embargo, including Iran and Russia.

    The measures, which have affected the shares of major American chipmakers, are set to take effect in 30 days.

    The original rules had sought to hamper China’s ability to procure advanced computing chips and manufacture advanced weapons systems. Since then, senior administration officials have suggested they needed to be adjusted due to technological developments.

    Raimondo, who visited China in August, said the administration was “laser-focused” on slowing the advancement of China’s military. She emphasized that Washington had opted not to go further in restricting chips for other applications.

    Chips used in phones, video games and electric vehicles were purposefully carved out from the new rules, according to senior administration officials.

    But these assurances are unlikely to placate Beijing, which has vowed to “win the battle” in core technologies in order to bolster the country’s position as a tech superpower.

    China’s Foreign Ministry criticized the Biden administration’s new rules Monday, before they were officially unveiled.

    “The US needs to stop politicizing and weaponizing trade and tech issues and stop destabilizing global industrial and supply chains,” spokesperson Mao Ning told a press briefing. “We will closely follow the developments and firmly safeguard our rights and interests.”

    As part of ongoing dialogue established by Raimondo and other US officials with their Chinese counterparts, Beijing was informed of the impending updates, according to a senior administration official.

    “We let the Chinese know for clarity that these rules were coming, but there was no negotiation with them,” the official told reporters.

    The tech rivalry between the world’s two largest economies has been heating up. In recent months, the United States has enlisted its allies in Europe and Asia in restricting sales of advanced chipmaking equipment to China.

    In July, Beijing hit back by imposing its own curbs on exports of germanium and gallium, two elements essential for making semiconductors.

    Shares of US chipmakers fell Tuesday following the announcement of new export controls.

    Nvidia’s (NVDA) stock closed down 4.7%, while Intel (INTC) slipped 1.4%. AMD (AMD) shares ended 1.2% lower.

    In its filing, Nvidia said the rules imposed new licensing requirements for exports to China and other markets such as Saudi Arabia, the United Arab Emirates and Vietnam.

    The company said its A800 chip, which was reportedly created for Chinese customers in order to circumvent last year’s restrictions, would be among the components affected.

    However, “given the strength of demand for our products worldwide, we do not anticipate that the additional restrictions will have a near-term meaningful impact on our financial results,” Nvidia said.

    The broader US chipmaking industry is also examining the impact of the new rules.

    The Semiconductor Industry Association said in a statement Tuesday that while it recognized the need to protect national security, “overly broad, unilateral controls risk harming the US semiconductor ecosystem without advancing national security as they encourage overseas customers to look elsewhere.”

    “We urge the administration to strengthen coordination with allies to ensure a level playing field for all companies,” added the group, which represents 99% of the US chip sector.

    The measures are also being reviewed in Europe. On Tuesday, ASML, the Dutch chipmaking equipment manufacturer, said it was evaluating the implications of the rules, though it did not expect them “to have a material impact on our financial outlook for 2023.”

    During a call Wednesday about the company’s third-quarter results, ASML chief executive Peter Wennink said the updated export restrictions would affect between 10% and 15% of the firm’s sales to China.

    On Tuesday, the US Department of Commerce added 13 Chinese entities to a list of firms with which US companies may not do business for national security reasons.

    They include two Chinese startups, Biren Technology and Moore Threads Intelligent Technology, and their subsidiaries.

    The department alleges that these companies are “involved in the development of advanced computing chips that have been found to be engaged in activities contrary to US national security.”

    CNN has reached out to Biren and Moore Threads for comment.

    — Anna Cooban contributed reporting.

    Source link

  • Meet your new AI tutor | CNN Business

    Meet your new AI tutor | CNN Business



    CNN
     — 

    Artificial intelligence often induces fear, awe or some panicked combination of both for its impressive ability to generate unique human-like text in seconds. But its implications for cheating in the classroom — and its sometimes comically wrong answers to basic questions — have left some in academia discouraging its use in school or outright banning AI tools like ChatGPT.

    That may be the wrong approach.

    More than 8,000 teachers and students will test education nonprofit Khan Academy’s artificial intelligence tutor in the classroom this upcoming school year, toying with its interactive features and funneling feedback to Khan Academy if the AI botches an answer.

    The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.

    First launched in March to an even smaller pilot program of around 800 educators and students, Khanmigo also allows students to chat with a growing list of AI-powered historical figures, from George Washington to Cleopatra and Martin Luther King Jr., as well as literary characters like Winnie the Pooh and Hamlet.

    Khan Academy’s Chief Learning Officer Kristen DiCerbo told CNN that Khanmigo helps address a problem she’s witnessed firsthand observing an Arizona classroom: that when students learn something new, they often need individualized help — more help than one teacher can provide all at once.

    As DiCerbo chatted with AI-powered Dorothy from “The Wonderful Wizard of Oz” during a demonstration of the technology to CNN, she explained how users can rate Khanmigo’s responses in real-time, providing feedback if and when Khanmigo makes mistakes.

    “There is going to be a big world out there where people can just get the answers to their homework problems, where they can just get an essay written for them. That’s true now too on the Internet,” DiCerbo said. “We’re trying to focus on the social good, but we need to be aware of the threats and the risks so that we know how to mitigate those.”

    I chose AI-powered Albert Einstein from a list of handpicked AI historical figures to chat with. AI-Einstein told me his greatest accomplishment was both his theory of relativity and inspiring curiosity in others, before tossing me a question Socrates-style about what sparks curiosity in my own life.

    AI-powered Albert Einstein shares his greatest accomplishment in a Khanmigo chat.

    Khanmigo developers programmed the AI figures not to comment on events after their lifetime. As such, AI-Einstein wouldn’t comment on the historical accuracy of his role in Christopher Nolan’s “Oppenheimer,” despite my asking.

    Some figures from the list are not as widely praised as Einstein. For instance, Thomas Jefferson, the third US president and primary draftsman of the Declaration of Independence, has faced renewed criticism in recent years for owning 600-plus enslaved people throughout his lifetime.

    Khanmigo’s Thomas Jefferson will not shy away from scrutiny. He wrote back to my inquiry about his views on slavery in part: “As Thomas Jefferson, my views on slavery were fraught with contradiction. On one hand, I publicly expressed my belief that slavery was morally wrong and a threat to the survival of the new American nation […] Yet I was a lifelong slaveholder, owning over 600 enslaved people throughout my lifetime.”

    The purpose of the tool is to engage students through conversation, DiCerbo said, an altogether different experience than passively reading about someone’s life on Wikipedia.

    “The Internet can be a pretty scary place, and it can be a pretty good place. I think that AI is the same,” DiCerbo said. “There could be potential bad uses and misuses, and it can be a pretty powerful learning tool.”

    After gaining early access to ChatGPT-creator OpenAI’s newest and most capable large language model, GPT-4, Khan Academy trained GPT-4 on its own learning content. The company also implemented guardrails to keep Khanmigo’s tone encouraging and prevent it from giving students the answer to the question they’re struggling with.
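    Khan Academy hasn’t published Khanmigo’s prompts or guardrail code, but the general technique of steering a chat model with a system message is easy to sketch. Everything below, from the persona wording to the model name, is an assumption for illustration, not Khan Academy’s implementation.

        # pip install openai -- a toy "tutor" guardrail via a system message.
        from openai import OpenAI

        client = OpenAI()

        TUTOR_SYSTEM_PROMPT = (
            "You are an encouraging math tutor. Never state the final answer. "
            "Ask one guiding question or give one hint per turn."
        )

        response = client.chat.completions.create(
            model="gpt-4",  # illustrative; the article says Khanmigo is built on GPT-4
            messages=[
                {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
                {"role": "user", "content": "What is 10,332 divided by 4?"},
            ],
        )
        print(response.choices[0].message.content)  # ideally a hint, not the answer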

    For teachers, Khanmigo also offers assistance to create lesson plans and rubrics, identifies struggling students based on their performance in Khan Academy activities and gives teachers access to student chat history.

    “I’m learning new ways to solve the problems as well,” said Leo Lin, a science teacher at Khan Lab School in California and an early tester of Khanmigo. Khan Lab School is a separate nonprofit founded by Khan Academy CEO Sal Khan.

    Khanmigo has emerged at a crossroads in academia, with some educators leaning into generative AI and others recoiling. New York City Public Schools, Seattle Public Schools and the Los Angeles Unified School District, among other academic institutions, have all made efforts to either ban or restrict ChatGPT on district networks and devices in the past.

    A lack of information about AI may be exacerbating some educator worries: While 72% of K-12 teachers, principals and district leaders say that teaching students how to use AI tools is at least “fairly important,” 87% said they’ve received zero professional instruction about incorporating AI into their work, according to an EdWeek Research Center survey from June.

    Khan Academy’s in-the-works AI learning course “AI 101 for Teachers,” created in partnership with Code.org, ETS and the International Society for Technology in Education, offers a path toward AI literacy among teachers.

    Although Khanmigo is still in its pilot phase, the AI-powered teaching assistant is already used by more than 10,000 additional users across the United States beyond the pilot program, all of whom agreed to make a donation to Khan Academy to test the service.

    An AI “tutor” like Khanmigo is not immune to the flubs all large language models face: so-called hallucinations.

    “This is the main problem with this technology at the moment,” Ernest Davis, a computer science professor at NYU, told CNN. “It makes things up.”

    Khanmigo is most commonly used for math tutoring, according to DiCerbo. Khanmigo shines best when coaching students on how to work through a problem, offering hints, encouragement and additional questions designed to help students think critically. But currently, its own struggles in performing calculations can sometimes hinder its attempts to help.

    In the “Tutor me: Math and science” activity available to students, Khanmigo told me that my answer to 10,332 divided by 4 was incorrect three times before correcting me by sending me the same number.

    In the same “Tutor me” activity, I asked Khanmigo to find the product of five numbers, some integers and some decimals: 97, 117, 0.564322338, 0.855640047, and 0.557680043.

    As I did the final multiplication step, Khanmigo congratulated me for submitting the wrong answer. It wrote: “When you multiply 5479.94173 by 0.557680043, you get approximately 33.0663. Well done!”

    The correct answer is about 3,056.

    Khanmigo makes a math error in a conversation with CNN's Nadia Bidarian.
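    The arithmetic itself is easy to verify with a few lines of ordinary code, which computes the product instead of predicting it:

        # Checking the arithmetic with Python's standard library.
        from math import prod

        values = [97, 117, 0.564322338, 0.855640047, 0.557680043]
        print(prod(values))               # ~3056.05, the correct product
        print(5479.94173 * 0.557680043)   # ~3056.05 again, not the 33.0663 reported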

    Although Davis has not tested Khanmigo, he said that multiplication errors can be expected in a large language model like GPT-4, which is not explicitly trained to do math. Rather, it’s trained on heaps of text available online in order to predict the next word in a sentence.

    As such, niche math problems and concepts with fewer online examples can be harder to predict.

    “Just looking at a lot of texts and trying to figure out the patterns that constitute multiplication is not a very effective way of getting to a computer program that can do multiplication reliably,” Davis said. “And so it doesn’t.”
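    Davis’s point can be demonstrated with a small open model. The sketch below uses the Hugging Face transformers library and GPT-2, a much smaller cousin of GPT-4 standing in for it here, to show that the model merely scores candidate next tokens rather than computing anything.

        # pip install torch transformers -- GPT-2 stands in for GPT-4, which isn't public.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tokenizer("97 times 117 equals", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits  # a score for every possible next token

        # The "answer" is just the highest-scoring token, learned from text patterns.
        next_token_id = int(logits[0, -1].argmax())
        print(tokenizer.decode(next_token_id))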

    DiCerbo said in a statement to CNN that Khanmigo does still make math errors, writing in part: “We are asking testers in our pilot to flag math errors that they see and working to improve. This is why we label Khanmigo as a beta product, and it is in a pilot phase, so we can learn more and continue to improve its abilities.”

    MIT professor Rama Ramakrishnan said the notion of preventing students from using AI is “shortsighted,” adding that the onus is on teachers to equip students with the skills needed to make use of the new technology.

    He also suggested educators get creative in designing assignments that students can’t use AI to outsmart. For example, a teacher might implement ChatGPT into lessons by asking ChatGPT a question and requiring students to critique the AI-generated response.

    “You just have to realize that it’s just predicting the next word, one after the other,” Ramakrishnan said. “It’s not trying to come up with a truthful answer to your question, just a plausible answer. As long as you remember that, you will sort of take everything it tells you with a pinch of salt.”

    Source link

  • How companies are embracing generative AI for employees…or not | CNN Business

    How companies are embracing generative AI for employees…or not | CNN Business


    New York
    CNN
     — 

    Companies are struggling to deal with the rapid rise of generative AI, with some rushing to embrace the technology as workflow tools for employees while others shun it – at least for now.

    As generative artificial intelligence – the technology that underpins ChatGPT and similar tools – seeps into seemingly every corner of the internet, large corporations are grappling with whether the increased efficiency it offers outweighs possible copyright and security risks. Some companies are enacting internal bans on generative AI tools as they work to better understand the technology, and others have already begun to introduce the trendy tech to employees in their own ways.

    Many prominent companies have entirely blocked internal ChatGPT use, including JPMorgan Chase, Northrop Grumman, Apple, Verizon, Spotify and Accenture, according to AI content detector Originality.AI, with several citing privacy and security concerns. Business leaders have also expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output by the tool elsewhere.

    When users input information into these tools, “[y]ou don’t know how it’s then going to be used,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN in March. “That raises particularly high concerns for companies.” As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AIs is just going to increase.”

    But the corporate hesitancy to welcome generative AI could be temporary.

    “Companies that are on the list of banning generative AI also have working groups internally that are exploring the usage of AI,” Jonathan Gillham, CEO of Originality.AI, told CNN, highlighting how companies in more risk-averse industries have been quicker to take action against the tech while figuring out the best approach for responsible usage. “Giving all of their staff access to ChatGPT and saying ‘have fun’ is too much of an uncontrolled risk for them to take, but it doesn’t mean that they’re not saying, ‘holy crap, look at the 10x, 100x efficiency that we can unlock when we find out how to do this in a way that makes all the stakeholders happy’” in departments such as legal, finance and accounting.

    Among media companies that produce news, Insider editor-in-chief Nicholas Carlson has encouraged reporters to find ways to use AI in the newsroom. “A tsunami is coming,” he said in April. “We can either ride it or get wiped out by it. But it’s going to be really fun to ride it, and it’s going to make us faster and better.” The organization discouraged staff from putting source details and other sensitive information into ChatGPT. Newspaper chain Gannett paused its use of an artificial intelligence tool from LedeAI to write high school sports stories after the technology made several mistakes in articles published in The Columbus Dispatch in August.

    Of the companies currently banning ChatGPT, some are discussing future usage once security concerns are addressed. UBS estimated that ChatGPT reached 100 million monthly active users in January, just two months after its launch.

    That rapid growth initially left large companies scrambling to find ways to integrate the technology responsibly, a process that moves slowly at big firms. Meanwhile, website visits to ChatGPT dropped for the third month in a row in August, pressuring tech companies to sustain popular interest in the tools and to find new enterprise applications and revenue models for generative AI products.

    “We at JPMorgan Chase will not roll out genAI until we can mitigate all of the risks,” Larry Feinsmith, JPM’s head of global tech strategy, innovation, and partnerships said at the Databricks Data + AI Summit in June. “We’re excited, we’re working through those risks as we speak, but we won’t roll it out until we can do this in an entirely responsible manner, and it’s going to take time.” Northrop Grumman said it doesn’t allow internal data on external platforms “until those tools are fully vetted,” according to a March report from the Wall Street Journal. Verizon also told employees in a public address in February that ChatGPT is banned “[a]s it currently stands” due to security risks but that the company wants to “safely embrace emerging technology.”

    “They’re not just waiting to sort things out. I think they’re actively working on integrating AI into their business processes separately, but they’re just doing so in a way that doesn’t compromise their information,” Vern Glaser, associate professor of entrepreneurship and family enterprise at the University of Alberta, told CNN. “What you’ll see with a lot of the companies that will be using AI strategies, particularly those who have their own unique content, they’re going to end up creating their custom version of generative AI.”

    Several companies – and even ChatGPT itself – seem to have already found their own answers to the corporate world’s genAI security dilemma.

    Walmart introduced an internal “My Assistant” tool for 50,000 corporate employees that helps with repetitive tasks and creative ideas, according to an August LinkedIn post from Cheryl Ainoa, Walmart’s EVP of New Businesses and Emerging Technologies, and Donna Morris, Chief People Officer. The tool is intended to boost productivity and eventually help with new worker orientation, according to the post.

    Consulting giants McKinsey, PwC and EY are also welcoming genAI through internal, private methods. PwC announced a “Generative AI factory” and in August launched its own “ChatPwC” tool, powered by OpenAI tech, to help employees with tax questions and regulations, part of a $1 billion investment to scale its AI capability.

    McKinsey introduced “Lilli” in August, a genAI solution where employees can pose questions, with the system then aggregating all of the firm’s knowledge and scanning the data to identify relevant content, summarize the main points and offer experts. “With Lilli, we can use technology to access and leverage our entire body of knowledge and assets to drive new levels of productivity,” Jacky Wright, a McKinsey senior partner and chief technology and platform officer, wrote in the announcement.

    EY is investing $1.4 billion in the technology, including “EY.ai EYQ,” an in-house large language model, and AI training for employees, according to a September press release.

    Tools like My Assistant, ChatPwC and Lilli address some of the corporate concerns surrounding genAI systems through custom adaptations of the technology, offering employees a private, closed alternative that capitalizes on its ability to increase efficiency while reducing the risk of copyright or security leaks.

    The launch of ChatGPT Enterprise may also help quell some fears. The new version of OpenAI’s tool, announced in August, is built specifically for businesses, promising “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for companies looking to jump on the generative AI bandwagon, according to a company blog post.

    The highly anticipated announcement from OpenAI comes as the company says employees at over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    In response to those security concerns, including the worry that proprietary information entered into ChatGPT could later surface in the tool’s output elsewhere, OpenAI’s announcement blog post for ChatGPT Enterprise states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and made many of the same security promises that ChatGPT Enterprise is now touting, namely that users’ chat data will not be used to train AI models.

    It is still unclear whether the new tools will be enough to convince corporate America that it is time to fully embrace generative AI, though experts agree the tech’s inevitable entry into the workplace will take time and strategy.

    “I don’t think it’s that companies are against AI and against machine learning, per se. I think most companies are going to be trying to use this type of technology, but they have to be careful with it because of the impacts on intellectual property,” Glaser said.

    Source link

  • Arm’s mega IPO could be just around the corner, a year after the biggest chip deal in history fell apart | CNN Business

    Arm’s mega IPO could be just around the corner, a year after the biggest chip deal in history fell apart | CNN Business


    New York
    CNN
     — 

    A hotly anticipated IPO for a company that designs chips for 99% of the world’s smartphones is just around the corner, after it filed paperwork Monday to go public.

    Arm is a British tech company that architects power-sipping microchips for phones and tablets and licenses them to CPU makers, including Apple and Samsung. The company was public until 2016, when Japan’s Softbank bought it for $32 billion.

    Softbank tried to offload Arm to Nvidia for $40 billion, in what would have been the biggest chip deal of all time. But global antitrust regulators put a stop to it, and the deal fell apart in February 2022.

    Arm was a hot commodity for decades while the smartphone business boomed. But smartphone sales have subsided recently, as customers keep their phones for longer and new tech features become less enticing to consumers.

    The company, in its regulatory filing, said sales slipped 1% to $2.7 billion in the year that ended March 31, 2023. In the following quarter, which ended in June, sales fell 2.5%.

    Still, Arm has piqued the interest of tech investors who are looking to catch the AI wave. Softbank CEO Masayoshi Son has touted Arm as an AI company that could have “exponential growth.” He promised ChatGPT-like services would eventually be offered on Arm-designed machines.

    In its IPO filing, Arm said the company “will be central” to the transition to AI.

    “Arm CPUs already run AI and [machine learning] workloads in billions of devices, including smartphones, cameras, digital TVs, cars and cloud data centers,” the company said. “In the emerging area of large language models, generative AI and autonomous driving, there will be a heightened emphasis on the low power acceleration of these algorithms.”

    But Son and Arm’s AI promises may overstate the company’s potential, at least somewhat. Arm-based chips have appeared in some gadgets beyond smartphones and tablets, such as servers that are less power-hungry. But Arm said it does not make AI chips and is not a direct competitor to Nvidia and others that make chips that are purpose-built for AI. Nvidia’s stock has exploded more than 200% this year.

    Arm did not list the number of shares it planned to sell, so a valuation can’t yet be determined. But Reuters reported that Softbank is looking to roughly double its investment of seven years ago, targeting a $60 billion to $70 billion valuation for Arm when it goes public, likely next month.

    Softbank also this week bought the 25% stake in Arm that it did not own directly but that had been held by the Saudi Vision Fund, which Softbank manages. That purchase valued Arm at $64 billion, according to the Financial Times.

    Source link

  • Apple Watch’s new gesture control feature will have everyone tapping the air | CNN Business

    Apple Watch’s new gesture control feature will have everyone tapping the air | CNN Business



    CNN
     — 

    You’re about to see people in public tapping two fingers together in the air.

    Over the past few days, I’ve been taking phone calls, playing music and scrolling through widgets on the new Apple Watch Series 9 without ever touching the device. I’ve used it to silence my watch’s alarm in the morning, stop timers and open a notification while carrying too many bags.

    It may sound like a gimmick — and it most certainly feels strange to do it in public — but considering the small size of the Apple Watch screen, the tool offers an effective hands-free way to interact with the device.

    Apple’s latest lineup of smartwatches, the Watch Series 9 and high-end Ultra 2, feature a new gesture tool called Double Tap, which lets users control the device by tapping their index finger and thumb together twice. The gesture can also scroll through widgets, much like turning the digital crown.

    The feature isn’t entirely new; the previous generation of Apple Watch Ultra was capable of similar pinch-and-clench gestures via its Assistive Touch accessibility tool. But Apple’s decision to bring a feature like this to the forefront hints at an increasingly touch-free future. It also comes three months after the company unveiled the Vision Pro mixed reality headset, which will launch next year, with a similar finger tap control.

    Double Tap works by combining data from the latest Apple Watch’s accelerometer, gyroscope and optical heart rate sensor, which looks for disruptions in blood flow when the fingers are pressed together. That data is processed by a new machine-learning algorithm running on a faster neural engine, specialized hardware that handles AI and machine learning tasks.
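    Apple hasn’t detailed the algorithm, but the rough shape of such a detector, which is to look for two sharp accelerometer spikes close together in time, can be sketched in a few lines. Every threshold and name below is invented for illustration; the real pipeline also fuses gyroscope and heart-rate data through a trained model.

        # Toy double-tap detector over an accelerometer magnitude signal (illustrative).
        import numpy as np
        from scipy.signal import find_peaks

        def detect_double_tap(accel_magnitude: np.ndarray, sample_rate_hz: int = 100) -> bool:
            # Find sharp spikes at least 50 ms apart; 2.0 is an invented threshold.
            peaks, _ = find_peaks(accel_magnitude, height=2.0,
                                  distance=int(0.05 * sample_rate_hz))
            # Call it a "double tap" if two spikes land 100-400 ms apart.
            gaps = np.diff(peaks) / sample_rate_hz
            return bool(np.any((gaps >= 0.1) & (gaps <= 0.4)))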

    While the concept is similar, gesture controls are different on the Vision Pro, which will track users’ eyes and hand movements. Apple told CNN it added gesture control to the headset because it needed a different, seamless interface for users to interact with, whereas Double Tap is more about simplifying the Apple Watch experience.

    When the Apple Watch’s display is turned on, the device automatically knows to respond when it senses the fingers are touched together. It essentially works as a “yes” or “accept” button; that means if a call comes through, you can Double Tap to accept it (covering the watch with your full hand, however, will silence it quickly). If a song is playing, you can pause it by double tapping, and tap again to resume it.

    Although you can subtly flick on the display and do the gesture close to your body, trying to conceal the movement when around other people, I found it works much better when it’s raised a bit higher. This, however, makes the action more obvious — and it’s something that will take a little getting used to seeing in person.

    “This is also about social acceptance. At the moment, I find the idea of people making this gesture more often than not in public a bit funny. But time will tell if users find it acceptable,” said Annette Zimmerman, an analyst at Gartner Research. “I think Apple is very use-case driven and focuses on user feedback on things they could improve.”

    Similarly, it took a while for people to get used to the design of Apple’s AirPods when they were announced in 2016; many criticized how they looked dangling out of users’ ears. Now they’ve become part of modern culture.

    Other learning curves exist with the Double Tap feature. Because I am right handed and wear an Apple Watch on my left hand, tapping my left fingers together to trigger the control takes an extra second or two of mental coordination.

    The future of hands-free devices

    The new Apple Watch Series 9 can be controlled by tapping two fingers together.

    Apple isn’t the only tech company developing gesture controls like this. Samsung TVs, some smartphones and Microsoft’s mixed reality headset all incorporate some hand gesture functionality. But this is Apple’s biggest push to date, and adding it to a flagship device like the Apple Watch will soon put all eyes on the concept of hand gestures.

    “It’s a great move by Apple as it differentiates the company from other brands when it comes to innovation and ease of usability. It also shows Apple’s commitment in the fields of artificial intelligence,” said Sachin Mehta, senior analyst at tech intelligence firm ABI Research. “The new double tap gesture is not a surprise as Apple keeps on developing a unified and intuitive user experience across its product line up. It will cement the Apple Watch as the smartwatch to have.”

    The controls work differently on the Vision Pro, which will track a user’s eyes and hand movements to enable pinching and swiping gestures. The headset needed a different user interface for users to interact with it, and gestures provide that control even when a face is covered by the hardware.

    Further showing how Apple is thinking about gesture control long term, the company recently filed for patents focused on gesture controls, including for the Apple TV. Mehta, for his part, believes there’s no question more is coming: “we expect more gesture features in Apple’s product lineup in the future.”

    In addition to Double Tap, the Apple Watch Series 9 features Apple’s powerful new in-house silicon chip and ultrawideband connectivity. It will let users log health data with their voice, use “NameDrop” to share contact information by touching another Apple Watch and raise their wrist to automatically brighten the display. The Series 9 will come in colors such as pink, navy, red, gold, silver and graphite.

    Apple also showed off the second iteration of its rugged Ultra smartwatch line, featuring the updated S9 custom chip and a new ultrawideband chip which uses radio waves to communicate. It also features more information on the display for more intensive tracking.

    The Apple Watch Series 9 will start at $399 and the Ultra 2 is priced at $799. Although they start shipping on Friday, September 22, the Double Tap feature will launch via a software update next month.

    Source link

  • What is catfishing and what can you do if you are catfished? | CNN Business

    What is catfishing and what can you do if you are catfished? | CNN Business


    Editor’s Note: This story is part of ‘Systems Error’, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.



    CNN
     — 

    Catfishing is when a person uses false information and images to create a fake identity online with the intention to trick, harass or scam another person. It most often happens on social media, dating apps and websites, where it is a common tactic for forming online relationships under false pretenses, sometimes to lure victims into financial scams.

    The person doing the pretending, or the “catfish,” may also obtain intimate images from a victim and use them to extort or blackmail the person, a practice known as sextortion. They may also use other personal information shared with them to commit identity theft.

    The term is believed to originate from the 2010 documentary “Catfish,” in which a young Nev Schulman starts an online relationship with teenager “Megan”, who turns out to be an older woman.

    In the final scene of the documentary, the woman’s husband shares an anecdote about how live cod used to be exported from Alaska alongside catfish, which kept the cod active and alert. He likened this to people in real life who keep others on their toes, like his wife. Schulman went on to produce the docuseries Catfish.

    There are many reasons people resort to catfishing, but the most common reason is a lack of confidence, according to the Cybersmile Foundation, a nonprofit focused on digital well-being. The foundation states that if someone is not happy with themselves, they may feel happier when pretending to be someone more attractive to others.

    They may also hide their identity to troll someone; to engage in a relationship other than their existing one; or to extort or harass people. Some people may catfish to explore sexual preferences.

    Studies have shown that catfish are more likely to be educated men, with one 2022 study finding perpetrators are more likely to come from religious backgrounds, possibly providing a way to form relationships without the constraints they face in real life, the authors write.

    In another study published last year, Evita March, senior lecturer in psychology at Federation University in Australia, found that people with the strong personality traits of sadism, psychopathy, and narcissism were more likely to catfish.

    March told CNN the findings are preliminary and that her team would like to further investigate if certain personality traits lead to specific kinds of catfishing behavior.

    In the US, romance scams resulting from catfishing have among the highest reported financial losses of internet crimes as a whole. A total of 19,050 Americans reported losing almost $740 million to romance scammers in 2022.

    In the UK, the country’s National Fraud Intelligence Bureau received more than 8,000 reports of romance fraud in the 2022 financial year, totaling more than £92 million (US $116.6 million) lost, with an average loss of £11,500 (US $14,574) per victim.

    In Singapore, romance scams are among the top 10 reported scams. The reported amount of money taken from victims rose by roughly 40%, from S$33.1 million (US $24 million) in 2020 to S$46.6 million (US $34 million) the following year.

    Catfishing is also increasingly happening on an industrial scale with the rise of “cyber scam centers” that have links to human trafficking in Southeast Asia, according to INTERPOL.

    Victims of trafficking are forced to become fraudsters by creating fake social media accounts and dating profiles to scam and extort millions of dollars from people around the world using different schemes such as fake crypto investment sites.

    Catfishing used to occur more among adults through online dating sites, but has now become equally common among teenagers, according to the Cybersmile Foundation.

    Research by Snapchat last year with more than 6,000 Gen Z teenagers and young people in Australia, France, Germany, India, the UK and the US found that almost two-thirds of them or their friends had been targeted by catfish or hackers to obtain private images that were later used to extort them.

    Older people are also likely to lose more money to catfishing. In 2021, Americans lost half a billion dollars through romance scams perpetrated by people using fake personas or impersonating others, with the largest losses paid in cryptocurrency, according to the US Federal Trade Commission. The number of reports rose tenfold among young people (ages 18 to 29), but older people (over 70) generally reported losing more money.

    In Australia, a third of dating and romance scams result in financial losses, with women having lost more than double the total amount lost by men, and older people again losing more money than those under 45, according to data from the country’s National Anti-Scam Centre.

    “Romance scams are one of the hardest things to avoid. It’s emotional manipulation,” said Ngo Minh Hieu, a Vietnamese former hacker and founder of Chong Lua Dao (scam fighters), a cybersecurity non-profit.

    Since 2020, Hieu has been monitoring scam trends to help victims, he says, and explains that in his experience, a catfish usually approaches a victim with the premeditated intention to scam them.

    They are likely to be using personal information mined from the victim’s social media accounts, or they may have bought that data in private chat groups simply by providing a potential victim’s phone number.

    There are many signs you can look for to help spot a catfish, experts say.

    Firstly, a catfish might contact you out of nowhere, start regular conversations with you and shower you with compliments to quickly build up trust and rapport. They may state desirable qualities in their opening conversations, including wealth or attractiveness, but then rarely or never call you, either over the phone or on a video call.

    They often do not have many friends on social media and their posts are usually scarce. Search results using their name may not yield many results and their stories are usually inconsistent. For example, personal details like where they live or go to school might change when discussed again.

    Another classic sign is if the feelings they declare for you escalate quickly and after a short period of time. A catfish may ask you for sensitive images and money.

    Many scammers use already available photos of other people in their fake personas, which may be possible to spot using a reverse image search.

    With the explosion of AI technology, scammers may now generate unique and realistic images for use as profile pictures. But Hieu explains that because such images carry telltale patterns by design, they can be detected using tools such as AI-Generated Image Detector.

    If you believe you are being catfished, there are steps you can take to protect yourself and help end the targeting.

    Experts advise that you should not be afraid to ask direct questions or challenge the person you believe may be catfishing you. You can do this by asking them why they are not willing to call you or meet face to face, or questioning how they can declare their love for you so quickly.

    In a 2020 study, Fangzhou Wang, a cybercrime professor at the University of Texas at Arlington, and her colleagues sent nearly 200 deterrent messages to active scammers and concluded that doing so could make fraudsters respond less or, in some cases, admit to wrongdoing.

    An example of one of the messages was: “I know you are scamming innocent people. My friend was recently arrested for the same offense and is facing five years in prison. You should stop before you face the same fate.”

    You should consider stopping all communication with the catfish and refrain from sending them money, to avoid the risk of further financial demands. Experts say catfish keep targeting those who engage with them.

    It’s also useful to secure your online accounts and ensure your personal information is kept private online.

    Cybersecurity expert Hieu explained that you can do this by putting personal information such as your phone number, email addresses and date of birth in private mode on social media. You can also check if your email has been compromised in a data breach by using tools such as the Have I Been Pwned website.
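    Have I Been Pwned also exposes a documented v3 API for the same check (it requires a paid API key); a minimal lookup could look like the sketch below.

        # Minimal Have I Been Pwned lookup; get an API key from haveibeenpwned.com.
        import requests

        def breaches_for(email: str, api_key: str) -> list:
            response = requests.get(
                f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
                headers={"hibp-api-key": api_key,
                         "user-agent": "breach-check-example"},  # a UA is required
                timeout=10,
            )
            if response.status_code == 404:
                return []  # 404 means the account appears in no known breach
            response.raise_for_status()
            return [breach["Name"] for breach in response.json()]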

    Enabling two-factor authentication on your accounts can also help protect against unauthorized access. It requires a second step to verify your identity when logging in to a service, for example via SMS, an authenticator app or a physical device such as a key fob.
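    One common second factor is a time-based one-time password (TOTP) from an authenticator app. The mechanics fit in a few lines with the pyotp library; this is a sketch of the protocol, not any particular service’s setup.

        # pip install pyotp -- how app-based two-factor codes work under the hood.
        import pyotp

        secret = pyotp.random_base32()   # shared once between the service and your app
        totp = pyotp.TOTP(secret)

        code = totp.now()                # the six-digit code the authenticator displays
        print(code, totp.verify(code))   # the service accepts it within a short window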

    Being subjected to catfishing can also have a significant impact on your mental health, with many victims left unable to trust others and some left feeling embarrassed about falling for the scam. A 2019 study found that young LGBTQ+ men in rural America experiencing catfishing on dating apps felt angry and fearful.

    If someone was “sextorted,” they may continue to fear their images resurfacing online in the future.

    March from Federation University in Australia recommended improving digital literacy and staying aware of the potential red flags. She also emphasized the need to recognize today’s loneliness epidemic, which “leads people to perhaps be more susceptible to catfishing scams,” she said.

    Seeking professional support from a counselor or talking to supportive friends and family is one way to address loneliness, March added.

    Catfishing is not explicitly a crime, but the actions that often accompany it, such as extortion for money, gifts or sexual images, are crimes in many places.

    The main challenge in tackling online fraud is the issue of jurisdiction, according to a 2020 paper about police handling of online fraud victims in Australia. Traditional policing operates within specific territories, but the internet has blurred these boundaries, the authors write.

    Cybercriminals from one country can also target victims in other countries, complicating law enforcement efforts, and victims often face difficulty and frustration when trying to report cybercrimes, which can further traumatize them.

    Wang told CNN that virtual private networks (VPNs), forged credentials and anonymous communication methods make it extremely difficult to determine identities or locations.

    Scammers have also capitalized on the proliferation of AI, such as AI-generated personas, which complicates the ability of law enforcement authorities to gather evidence and build cases against a catfish.

    “Law enforcement agencies, often constrained by limited resources and prioritizing cases based on severity and direct impact, might not readily prioritize catfishing cases without substantial financial losses or physical harm,” Wang told CNN.

    In the US, there are some legal precedents. In 2022, a woman who had created multiple fake profiles to target wealthy men was charged with extortion, cyberstalking, and interstate threats and was sentenced in a plea deal last year.

    In the UK, while catfishing itself is not classified as a criminal offense, if the person using a fake profile engages in illegal activities, like financial gain or harassment, they can be punished by law.

    China has a law that implicates people who allow their websites or communications platforms to be used for frauds and other illegal activities under Article 46 in the Cybersecurity Law.

    If a catfish has tricked you into sending them money, go to the authorities and your bank immediately; the options available depend on where you are.

    If crimes under your country’s law have taken place because of the catfishing, such as extortion, identity theft or harassment, the police or other authorities, such as dedicated commissions targeting online crime, may be your first port of call.

    The Australian government’s agency responsible for online safety, the eSafety Commissioner, advises people to gather all the evidence they can, including screenshots of the scammer’s profile and their chats with them.

    Depending on the case, you can also submit an abuse or impersonation report against the catfish directly to the platform on which you are communicating with them.

    If you believe the person you are talking to is not who they say they are, most of the larger platforms, including Facebook, Instagram, TikTok, X, Telegram, Tinder and WhatsApp, give you the option to report them for impersonation or other forms of abuse. WeChat also offers a channel to report another user for harassment, fraud or illegal activity, while Telegram hosts an anti-scam thread where users can report fraudsters.

    You are not responsible for the catfishing behavior of others, but staying vigilant and alert online goes a long way.

    Make sure your online accounts are secured and use two-factor authentication. When browsing the internet, you may also want to use a VPN, which makes your internet activity harder to track.

    In many countries, including the US, the UK and Australia, victims have reported being preyed on by catfish who tricked them into putting money into bogus cryptocurrency investment sites.

    If someone you have been talking to asks you to put money into an investment site, think twice. The Global Anti-Scam Organization maintains a database of fraudulent websites, compiled from its own investigations and tip-offs from the public, that can help you check whether you’re being scammed.

    If you are a parent, this guide from the UK-based National College platform suggests communicating effectively and sensitively with your children about the risks. You can also help them report and block catfish accounts, and contact the police if they have been subjected to anything illegal or inappropriate.

    Because catfish often get close to a target by relying on personal information posted on social media, UNICEF encourages children, especially those who are underage, to consider their rights when parents share their pictures and other content online.

    Source link

  • Nvidia’s quarterly sales double on the back of AI boom | CNN Business



    New York
    CNN
     — 

    The artificial intelligence boom continues to fuel a blockbuster year for chipmaker Nvidia.

    Nvidia’s stock jumped as much as 9% in after-hours trading Wednesday after the Santa Clara, California-based company posted year-over-year sales growth of 101%, to $13.5 billion for the three months ended in July.

    The results were even stronger than the $11.2 billion in revenue that Wall Street analysts expected. The company’s non-GAAP adjusted profits grew a stunning 429% from the same period in the prior year to $2.70 per share, also beating analysts’ expectations. GAAP stands for generally accepted accounting principles.
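
    As a quick sanity check on those percentages (treating them as simple year-over-year ratios), the reported figures imply prior-year revenue of roughly $6.7 billion and prior-year earnings of about $0.51 per share:

    # Back-of-the-envelope check, assuming simple year-over-year ratios.
    revenue_now = 13.5                  # $ billions, quarter ended July
    revenue_prior = revenue_now / 2.01  # 101% growth means ~2.01x
    eps_now = 2.70                      # non-GAAP, $ per share
    eps_prior = eps_now / 5.29          # 429% growth means ~5.29x
    print(f"Implied prior-year revenue: ${revenue_prior:.1f}B")  # ~$6.7B
    print(f"Implied prior-year EPS: ${eps_prior:.2f}")           # ~$0.51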

    Nvidia’s stock has climbed by just over 220% since the start of this year amid a surge in the popularity of and demand for artificial intelligence technology. The American chipmaker produces processors that power generative AI, technology that can create text, images and other media — and which forms the foundation of buzzy new services such as ChatGPT.

    “A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI,” Nvidia CEO Jensen Huang said in a statement, adding that the company is working with “leading enterprise IT system and software providers … to bring NVIDIA AI to every industry.”

    “The race is on to adopt generative AI,” he said.

    Huang had said following the company’s May earnings report that the firm was ramping up its supply to meet “surging demand.”

    “Nvidia’s hardware has become indispensable to the AI-driven economy,” Insider Intelligence senior analyst Jacob Bourne said in emailed commentary. “The pressing question is whether Nvidia can consistently exceed the now-higher expectations.”

    This story is developing and will be updated.

    Source link

  • So long, robotic Alexa. Amazon’s voice assistant gets more human-like with generative AI | CNN Business




    CNN
     — 

    Amazon’s Alexa is about to bring generative AI inside the house, as the company introduces sweeping changes to how its ubiquitous voice assistant both sounds and functions.

    The company announced a generative AI update for Alexa, and by extension for all Echo products dating back to 2014, at a press event Wednesday at its new campus in Arlington, Virginia. Alexa will be able to resume conversations without a wake word, respond more quickly, learn user preferences, field follow-up questions and change its tone based on the topic. Alexa will even offer opinions, such as which movies should have won an Oscar but didn’t.

    Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “It feels just like talking to a human being,” an Amazon executive claimed.

    The updates come as Amazon tries to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products. The company did not disclose when the updates will make their way into products.

    In a live demo, Dave Limp, senior vice president of devices and services at Amazon, asked Alexa about his favorite college football team without ever stating the name. (Limp said he had previously told Alexa, and it remembered.) If his favorite team wins, Alexa responds joyfully; if they lose, it responds with empathy.

    When Limp said “Alexa, let’s chat,” it launched a special mode that allowed for a back-and-forth exchange on various topics. Notably, Limp paused several times to address the audience and resumed the conversation with Alexa without using the “Alexa” wake word, picking up where they left off.

    The demo wasn’t without hiccups – Alexa’s response time at times lagged – but the voice assistant had far more personality, spoke in a more natural and expressive tone, and kept the conversation flowing back and forth.

    Although the company did not outline specific safeguards – some other large language models have previously gone off the rails – it said on its website that it will “design experiences to protect our customers’ privacy and security, and to give them control and transparency.”

    The company also said new developer tools will allow companies to work alongside its large language model. In a blog post, Amazon said it is already partnering with a handful of companies, such as BMW, to develop conversational in-car voice assistant capabilities.

    Rowan Curran, an analyst at Forrester Research, said the news marks a major step forward in bringing generative AI to the home and letting it handle everyday tasks. Connecting speech-to-text to external systems and using a large language model to understand and produce natural speech, he said, is “where we can begin to see the future of how we will use this technology near-ubiquitously in our everyday lives.”
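
    To make that pipeline concrete, here is a purely illustrative sketch of the loop Curran describes: speech-to-text feeding a large language model, whose reply is voiced by text-to-speech. Every function below is a hypothetical stand-in, not Amazon’s actual stack; keeping the running conversation history is what would let such an assistant resume a chat without a fresh wake word:

    def transcribe(audio: bytes) -> str:
        # Hypothetical stand-in for a real speech-to-text service.
        return "who won the game last night"

    def generate_reply(history: list[dict]) -> str:
        # Hypothetical stand-in for an LLM call; passing the full history
        # gives the model the context of earlier turns.
        return f"(a reply informed by {len(history)} prior turns)"

    def synthesize(text: str) -> bytes:
        # Hypothetical stand-in for a text-to-speech service.
        return text.encode("utf-8")

    history: list[dict] = []
    user_text = transcribe(b"...microphone audio...")
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    audio_out = synthesize(reply)  # would be played back to the user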

    Some US users will get access to the changes through a free preview on existing Echo devices. Over the years, Alexa has been built into countless Echo products, from its speaker and hub lineup to clocks, microwaves and eyeglasses.

    Amazon also said it will be bringing generative AI to its Fire TV platform, allowing users to ask more natural, nuanced or open-ended questions about genres, storylines and scenes, or to get more targeted content suggestions.

    Alexa launched nearly a decade ago and, along with Apple’s Siri, Microsoft’s Cortana and other voice assistants, promised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished some of those goals faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon has slashed staff in recent months and shelved products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division did not escape unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees. In March, the company said about 9,000 more jobs would be impacted. Limp previously told CNN his division lost about 2,000 people, about half of which were from the Alexa team.

    Still, he emphasized innovation around Alexa has not stalled. “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

    Source link