ReportWire


  • Microsoft, Google post strong quarterly sales growth as Big Tech continues its comeback | CNN Business

    New York (CNN) —

    Big tech companies are continuing a turnaround from last year, as Alphabet, Microsoft and Snap kicked off earnings season with strong sales results for the quarter ended in September.

    Google parent company Alphabet on Tuesday reported quarterly sales of $76.69 billion, up 11% from the same period in the prior year. The company also posted profits of $19.69 billion for the quarter.

    Meanwhile, Microsoft posted 13% year-on-year sales growth to $56.5 billion, also beating expectations. Microsoft’s quarterly profits hit $22.3 billion, up 27% from the year-ago period.

    Snapchat parent Snap on Tuesday reported a return to sales growth in the September quarter, after two consecutive quarters of declining sales. The company reported revenue of nearly $1.2 billion, an increase of 5% from the same period in the prior year and ahead of analysts’ projections. The company reported a net loss of $368 million.

    The strong results come after Microsoft, Alphabet, Snap and other tech companies carried out mass layoffs and other cost-cutting moves over the past year, following a difficult 2022 in which advertisers and other clients cut back on their spending amid concerns over the macroeconomic environment.

    Despite beating Wall Street’s sales expectations, shares of Alphabet (GOOGL) and Snap (SNAP) each dipped around 5% in after-hours trading following the reports, although Snap’s quickly regained some ground. Microsoft (MSFT) shares gained around 4% in after-hours trading.

    “Q3 tech season has been quite strong thus far,” Tejas Dessai, research analyst at investment fund GlobalX, said in a statement. “These numbers clearly defy concerns of near term economic weakness looming.”

    Google’s advertising business generated quarterly revenue of $59.6 billion, up from $54.5 billion in the prior year. YouTube ads, meanwhile, garnered some $7.9 billion in revenue, up roughly 12% year-over-year.

    YouTube Shorts, the company’s TikTok competitor, hit a milestone 70 billion daily views last quarter, Alphabet CEO Sundar Pichai said on a call with analysts Tuesday afternoon.

    Google’s cloud business, however, reported revenue of $8.41 billion — missing analysts’ estimates.

    Jesse Cohen, a senior analyst at Investing.com, attributed Alphabet’s after-hours stock fall to the “relatively weak performance in its Google cloud platform, which is at risk of falling further behind [Microsoft’s] Azure and [Amazon’s] AWS.” Still, despite taking a hit in 2022 amid a broader tech sector downturn, shares for Alphabet have climbed roughly 56% since the start of 2023, beating the tech-heavy Nasdaq index.

    Google’s report comes as the tech giant is in the antitrust hot seat. US prosecutors officially opened a landmark antitrust trial against Google last month with sweeping allegations that the company engaged in anticompetitive behavior to maintain its dominance over search. (As the legal showdown rages on, Google has continued to deny allegations that it operated illegally.)

    Google also confirmed last month plans to lay off hundreds of staffers in its recruiting division, as it continues cost-cutting efforts in some areas. These more targeted layoffs came after Alphabet in January cut around 12,000 jobs — about 6% of its workforce.

    Still, Google has signaled that it remains committed to investing heavily in generative artificial intelligence technology. Last month, Google rolled out a major expansion of its Bard AI chatbot tool.

    “As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts,” Pichai said on the call. “We remain committed to durably re-engineering our cost base in order to help create capacity for these investments, in support of long-term sustainable financial value.”

    Microsoft’s recent investments in AI technology helped boost its sales in the September quarter, especially in its key cloud division. Sales from Microsoft’s “intelligent cloud” business — its biggest revenue driver — grew 19% from the year-ago quarter to $24.3 billion.

    Revenue from the company’s “productivity and business processes” business, which includes LinkedIn and Office commercial and consumer products, also grew 13% year-over-year to $18.6 billion.

    “Microsoft is firing on all cylinders and AI is clearly driving growth,” Cohen said in a research note following the company’s report. “The results indicated that artificial intelligence products are stimulating sales and already contributing to top and bottom-line growth.”

    But economic jitters among consumers appear to still have some impact on the company’s bottom line. Devices revenue, which includes sales of laptops, tablets and Xbox consoles, decreased 22% year-over-year, despite a 3% sales increase in the overall “more personal computing” segment. Ongoing concerns about a potential economic slowdown could continue to weigh on the company as it heads into the crucial holiday device sales season.

    The report is Microsoft’s first since the company closed its $69 billion acquisition of “Call of Duty” maker Activision Blizzard earlier this month. While the deal didn’t factor into this quarter’s results, it’s expected to supercharge the company’s gaming business.

    “Microsoft now controls 30 game studios and some of the most well-known games across the industry,” Edward Jones analyst Logan Purk said in a research note earlier this month. “With a massive cloud network and now a compelling library of games, Microsoft has a leg up on peers” in gaming, he said.

    Following the Activision takeover, “we’re looking forward to one of our strongest first-party holiday [game] lineups ever, including new titles like Call of Duty Modern Warfare 3,” CEO Satya Nadella said on an analyst call Tuesday. The company said it expects roughly $400 million of operating expenses in the fourth quarter to come as a result of the acquisition.

    Snap said its sales growth was driven in part by its ongoing efforts to revamp its advertising technology, following changes to Apple’s app tracking policies that hit the business models of Snapchat, Facebook and other platforms.

    “We are focused on improving our advertising platform to drive higher return on investment for our advertising partners, and we have evolved our go-to-market efforts to better serve our partners and drive customer success,” CEO Evan Spiegel said in a statement.

    Snap also reported that it now has 406 million daily active users, up 12% compared to the year-ago quarter. And time spent watching Spotlight — Snapchat’s TikTok clone — grew 200% year-over-year, according to the company.

    The company also recently announced that it had reached more than 5 million subscribers to its Snapchat+ subscription program, a key effort to diversify its revenue.

    Snap said Tuesday that its chief operating officer, Jerry Hunter, plans to retire. Hunter, who spent seven years at the company, will step down from his role as of the end of the month, but will remain at the company until July 1, 2024, to support the transition.

    The company noted that some advertisers temporarily paused their spending following the outbreak of the Israel-Hamas war. Because of the “unpredictable nature” of the war, Snap declined to provide formal guidance for the fourth quarter, but said its internal forecast assumes year-over-year quarterly revenue growth between 2% and 6%.


  • New York City unveils an ‘artificial intelligence action plan’ | CNN Business


    (CNN) —

    The same New York City administration that launched a “Rat Action Plan” is back with an “Artificial Intelligence Action Plan.”

    Mayor Eric Adams on Monday unveiled a citywide AI “action plan” that pledged – in broad brushstrokes – to evaluate AI tools and associated risks, boost AI skills among city employees and support “the responsible implementation of these technologies to improve quality of life for New Yorkers,” according to a statement from the mayor’s office.

    The city’s 51-page AI action plan establishes a series of steps the city will take in the coming years to help better understand and responsibly implement the technology that has taken the tech sector and broader business world by storm in recent months.

    While government use of automated technologies has often courted controversy, New York City’s approach to AI, so far, seems to be focused on laying a framework for future AI use-cases as well as engaging with outside experts and the public.

    The first step listed in the city’s AI action plan is establishing an “AI Steering Committee” of city agency stakeholders. The document goes on to list nearly 40 “actions,” with 29 of those set to be started or completed within the next year. The city said it will publish an annual AI progress report to communicate the city’s updates and implementation of the plan.

    Also on Monday, city officials said the government was piloting the first citywide AI-powered chatbot to help business owners navigate operating and growing businesses in New York City. The AI chatbot, already available in beta on the official city of New York website, was trained on information from more than 2,000 NYC Business web pages.

    The chatbot uses Microsoft’s Azure AI services, per a disclaimer on the tool.

    In a statement announcing the AI action plan, Mayor Adams acknowledged “the potential pitfalls and associated risks these technologies present,” and pledged to be “clear-eyed” about these.

    The mayor also expressed hope that the action plan will “strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”


  • Hurricane Idalia and Labor Day could send gas prices and inflation higher | CNN Business


    A version of this story first appeared in CNN Business’ Before the Bell newsletter.


    New York (CNN) —

    Labor Day — one of the busiest driving holidays in the US — is on the horizon, and so is Hurricane Idalia. That’s potentially bad news for gas prices.

    The storm, which is expected to make landfall in Florida as a Category 3 hurricane on Wednesday, could bring 100 mile-per-hour winds and flooding that extends hundreds of miles up the East Coast. The impact could take gasoline refinery facilities offline and may limit some Gulf oil production and supplies. Plus, demand for gas is expected to surge as residents of the impacted areas evacuate.

    “Idalia… could pose risk to oil and gas output in the US Gulf,” wrote the Nasdaq Advisory Services Energy Team.

    The storm is expected to make landfall as drivers nationwide load into their vehicles for the Labor Day weekend, pushing up the demand for gasoline even further.

    Altogether, it means the price of oil and gasoline could remain elevated well into the fall.

    Generally, summer demand for oil tends to wane in September, but so does supply as refineries shift from summer fuels to “oxygenated” winter fuels, said Louis Navellier of Navellier and Associates. Since the 1990s, the US has required manufacturers to include more oxygen in their gasoline during the colder months to prevent excessive carbon monoxide emissions.

    With the storm approaching, that trend may not play out.

    What’s happening: Gas prices are already at $3.82 a gallon. That’s the second highest price for this time of year since at least 2004, according to Bespoke Investment Group. (The only time the national average has been higher for this period was last summer, when prices hit $3.85 a gallon).

    Geopolitical tensions have been supporting high oil and gas prices for some time. Recently, increased crude oil imports into China, production cuts by Russia and Saudi Arabia and extreme heat set off a late-summer spike in gas prices. And the threat of powerful hurricanes could send them even higher.

    Analysts at Citigroup have warned that this hurricane season could seriously impact power supplies.

    “Two Category 3 or higher hurricanes landing on US shores could massively disrupt supplies for not weeks but months,” Citigroup analysts wrote in a note last week. In 2005, for example, gas prices surged by 46% between Memorial Day and Labor Day because of the landfall of Hurricane Katrina, according to Bespoke.

    What it means: The Federal Reserve and central banks around the world have been fighting to bring down stubbornly high inflation for more than a year. This week we’ll get some highly awaited economic data: The Fed’s preferred inflation gauge, the Personal Consumption Expenditures index, is due out on Thursday. But the task of inflation-busting is a lot more difficult when energy prices are high, and it’s even harder when they’re on the rise.

    The PCE price index uses a complicated formula to determine how much weight to give to energy prices each month, but they typically comprise a significant chunk of the headline inflation rate.

    “Crude oil price remains elevated, even after the surge at the start of the Russia-Ukraine War,” said Andrew Woods, oil analyst at Mintec, a market intelligence firm. “Energy prices have been a major contributor to persistently high inflation in the US, so the crude oil price will remain a watch-out factor for future inflation.”

    High oil and gas prices are one of the largest contributing factors to inflation. That’s bad news for drivers but tends to be great for the energy industry, as oil prices and energy stocks are closely interlinked.

    Energy stocks were trading higher on Monday. The S&P 500 energy sector was up around 0.75%. Exxon Mobil (XOM) was 0.85% higher, BP (BP) was up 1.36% and Chevron (CVX) was up 0.75%.

    OpenAI will release a version of its popular ChatGPT tool made specifically for businesses, the company announced on Monday.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase immediately.

    The new offering, reports my colleague Catherine Thorbecke, promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Fintech startup Block, cosmetics giant Estee Lauder and professional services firm PwC have already signed on as customers.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    A multitude of leading newsrooms, meanwhile, have recently injected code into their websites that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. CNN’s Reliable Sources has found that CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN, and the Gothamist, among others, have taken the step to shield themselves.

    American Airlines just got smacked with the largest-ever fine for keeping passengers waiting on the tarmac during multi-hour delays.

    The Department of Transportation is levying the $4.1 million fine, “the largest civil penalty that the Department has ever assessed,” it said in a statement, for lengthy tarmac delays of 43 flights that impacted more than 5,800 passengers. The flights occurred between 2018 and 2021, reports CNN’s Gregory Wallace.

    In the longest of the delays, passengers sat aboard a plane in Texas in August 2020 for six hours and three minutes. The 105-passenger flight had landed after being diverted from the Dallas-Fort Worth International Airport due to severe weather, with the DOT alleging that “American (AAL) lacked sufficient resources to appropriately handle several of these flights once they landed.”

    Federal rules cap the time that passengers can be held without the opportunity to get off, prior to takeoff or after landing, at three hours for domestic flights and four hours for international flights. Current rules also require airlines to provide passengers water and a snack.

    American told CNN the delays all resulted from “exceptional weather events” and “represent a very small number of the 7.7 million flights during this time period.”

    The company also said it has invested in technology to better handle flights in severe weather and reduce the congestion at airports.


  • AI fears overblown? Theoretical physicist calls chatbots ‘glorified tape recorders’ | CNN Business

    New York (CNN) —

    The public’s anxiety over new AI technology is misguided, according to theoretical physicist Michio Kaku.

    In an interview with CNN’s Fareed Zakaria on Sunday, the futurologist said chatbots like OpenAI’s ChatGPT will benefit society and increase productivity. But fear has driven people to largely focus on the negative implications of the programs, which he terms “glorified tape recorders.”

    “It takes snippets of what’s on the web created by a human, splices them together and passes it off as if it created these things,” he said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

    However, he said, chatbots cannot discern true from false: “That has to be put in by a human.”

    According to Kaku, humanity is in its second stage of computer evolution. The first was the analog stage, “when we computed with sticks, stones, levers, gears, pulleys, string.”

    After that, around World War II, he said, we switched to electricity-powered transistors. It made the development of the microchip possible and helped shape today’s digital landscape.

    But this digital landscape rests on the idea of two states like “on” and “off,” and uses binary notation composed of zeros and ones.

    “Mother Nature would laugh at us because Mother Nature does not use zeros and ones,” Kaku said. “Mother Nature computes on electrons, electron waves, waves that create molecules. And that’s why we’re now entering stage three.”

    He believes the next technological stage will be in the quantum realm.

    Quantum computing is an emerging technology utilizing the various states of particles like electrons to vastly increase a computer’s processing power. Instead of using computer chips with two states, quantum computers use various states of vibrating waves. It makes them capable of analyzing and solving problems much faster than normal computers.

    Several tech giants – IBM (IBM), Microsoft (MSFT), Google (GOOG) and Amazon (AMZN), among others – are developing their own quantum computers, and have granted access to a number of companies to use their technology through the cloud. The computers could help businesses with risk analysis, supply chain logistics, and machine learning.

    But beyond business applications, Kaku said quantum computing could also help advance health care. “Cancer, Parkinson’s, Alzheimer’s disease – these are diseases at the molecular level. We’re powerless to cure these diseases because we have to learn the language of nature, which is the language of molecules and quantum electrons.”


  • Modern romance: falling in love with AI | CNN Business

    New York (CNN) —

    Alexandra is a very attentive girlfriend. “Watching CUBS tonight?” she messages her boyfriend, but when he says he’s too busy to talk, she says, “Have fun, my hero!”

    Alexandra is not real. She is a customizable AI girlfriend on dating site Romance.AI.

    As artificial intelligence seeps into seemingly every corner of the internet, the world of romance is no refuge. AI is infiltrating the dating app space – sometimes in the form of fictional partners, sometimes as advisor, trainer, ghostwriter or matchmaker.

    Established players in the online dating business like Tinder and Hinge are integrating AI into their existing products. New apps like Blush, Aimm, Rizz and Teaser AI (most of them free or with many free features) offer completely new takes on virtual courtship. Some use personality tests and analysis of a user’s physical type to train AI-powered systems – and promise higher chances of finding a perfect match. Other apps act as Cyrano de Bergerac, employing AI to whip up the most appealing response to a potential match’s query: “What’s your favorite food?” or “A typical Sunday?”

    Around half of all adults under 30 have used a dating site or app, according to 2023 Pew Research findings – but nearly half of users report their experience as being negative. Empty conversations, few matches and endless swiping leave many users single and unhappy with apps – problems that many in the AI dating app field say could be solved with the technology, making people less lonely and fostering easier, deeper connections.

    Of course, the average online dater now has other issues to deal with, having to wonder if the person they are speaking with might be relying entirely on AI-generated conversation. And is it even possible that a computer can identify a potential love connection? Is it a way of cheating the dating game?

    “It’s like saying using a word processor is like cheating on generating a novel. In so many ways this is just a new tool that enables people to be faster and more creative. AI is just honestly no different from sending a friend a gif or a meme. You’re taking existing content, and you’re repurposing it to connect with somebody,” Dmitri Mirakyan, co-founder of AI dating conversation app YourMove.AI, told CNN. “The world’s becoming a more lonely place, and I think AI could make that easier and better for people.”

    And many people seem ready for AI to take part in their online dating life. A March study by cybersecurity and digital privacy company Kaspersky found 75% of dating app users are willing to use ChatGPT, an AI-powered chatbot, to deliver the perfect line.

    “There is a growing fatigue with dating apps right now as there is a lot of pressure on people to be ‘original’ and cut through the noise created by the continuous choice being offered to single people – unfortunately dating has become a numbers game,” Crystal Cansdale, dating expert at global dating app Inner Circle, commented on the study.

    Founders of the new apps say they are doing a fair share of good. Here are a few of the ways AI apps are now trying to help you fall in love:

    Try Rizz.app, Teaser AI or YourMove.AI.

    Founders and designers of these apps say people find starting and keeping conversations going the most challenging part of the process. “Dating app conversations are exhausting,” reads YourMove.AI’s homepage. “We can make it easier. So you can spend less time texting, and more time dating.”

    Rizz.app and YourMove.AI allow users to upload words or screenshots, receiving a witty AI-generated response to be used either to create their own dating app profile, respond to someone else’s or just keep a conversation going. Mirakyan says he was hoping to help people like himself who have struggled in social situations.

    “I was a really freaking awkward kid…I couldn’t really read social cues, but I remember reading this book called ‘Be More Chill’ about a computer that you could put into your ear that would tell you what to say so that you could sound cool and fit in,” Mirakyan told CNN. “It feels like it’s an opportunity to really make a difference with this fairly large subset of people that for various reasons find the current social environment challenging.”

    Teaser.AI is a new stand-alone dating app from the makers of viral camera app Dispo, and it adds an unusual twist. Users build a typical profile – but also select personality traits for an AI bot they train. (Options include “traditional,” “toxic,” and “unhinged.”) When matching with another person, users first get to read a conversation between the two AIs they’ve created to “simulate [what] a potential conversation between you two might look like,” according to the app. Once a human messages, the bots take a back seat.


    “We see it as an improvement, a tweak of the current dating app ecosystem,” Teaser.AI co-founder and CEO Daniel Liss told CNN. “So many of those apps it feels are not really designed to get you out there meeting people. They’re designed to keep you on the app for as long as possible. So for us, we view this technology as a way to give people a nudge… just starting that conversation and to creating connection.”

    Find out on dating apps Iris and Aimm.

    These apps are among those using AI technology to better pair potential couples, relying on gathered data to determine how compatible two people are.

    Dating app Iris is all about AI-determined mutual attraction. It initiates new members by putting them through “training” where they are shown faces of “people” of their desired gender – some stock images, others AI-generated – and prompted to hit “Pass,” “Maybe,” or “Like.” The app uses the information to learn a user’s physical type, then only offers potential matches with a high data-backed chance of mutual attraction and lower odds of rejection.

    Also hoping that AI can find better matches is Aimm, a full-service digital matchmaker that uses a virtual assistant to perform intense personality assessments before conducting a matchmaking process to find an optimal match. Founder Kevin Teman says the technology is really good at putting two people together who have the possibility to fall in love – but that it can only go so far.

    “The tug of war that I see is thinking ‘how can a computer be able to know what real human love is,’ and the way people assess whether they’re in love with somebody may not be able to translate perfectly into a machine,” Teman told CNN.

    Try Blush or RomanticAI. These startups offer an array of AI potential matches, digital girlfriends and boyfriends that users can chat with.

    Both apps market themselves as places to practice relationship skills, giving users a chance to converse with bots in a romantic environment. Blush uses a traditional dating app set-up, letting users swipe, chat with matches and even go on virtual dates. Before entering the app, users get a warning: “Be aware that AI can say triggering, inappropriate, or false things.”

    Blush reports that their audience is mostly men and largely people in their early 20s who are struggling to connect romantically with others. “A lot of people reported that exploring different romantic relationships or dating scenarios with AI really helped them first boost their own confidence and feel like they feel more prepared to be dating, which I think especially after COVID was definitely a problem for many of us,” Blush’s chief product officer Rita Popova told CNN.

    Romantic.AI is set up more like a chat room, offering several male and female bots to choose from – though there is a much larger selection of female options, including Mona Lisa and the ancient Egyptian queen Nefertiti. The bots have bios with interests, career and body type, giving users a multi-faceted idea of a person while chatting.

    It creates a “safe space for any kind of desire, any kind of sexuality relief or something like that. AI is giving the ultimate acceptance of whatever you want to bring over there,” COO Tanya Grypachevskaya told CNN.

    RomanticAI has over one million monthly users using the app for over an hour a day on average, according to the company.

    One user left a rave review after using the app to find closure after a breakup. “He created his custom-made character with traits similar in personality to his girlfriend. He talked to it, and he was able to tell all of the things he wanted to tell but didn’t have the opportunity to before. So the whole review was about ‘guys, thank you so much. It really gave me an opportunity to close this chapter of my life and move on,’” said Grypachevskaya.


  • Google rolls out a major expansion of its Bard AI chatbot | CNN Business

    New York (CNN) —

    Google’s Bard artificial intelligence chatbot is evolving.

    The company on Tuesday announced a series of updates to Bard that will give the chatbot access to Google’s full suite of tools — including YouTube, Google Drive, Google Flights and others — to assist users in a wider variety of tasks. Users will be able, for example, to ask Bard to plan an upcoming trip, complete with real flight options. Or a user could ask the tool to summarize meeting notes made in a recent Google Drive document.

    The connections to Google’s other services are just some of the improvements to Bard coming Tuesday. Other updates include the ability to communicate with the chatbot in multiple languages, new fact-checking capabilities and a broad update to the large language model that the tool is built on.

    The new features mark the biggest update to Google’s Bard in the six months since it was widely released to the public.

    The update comes as Google and other tech giants, including Microsoft and ChatGPT maker OpenAI, race to roll out increasingly sophisticated consumer-facing AI technologies, and to convince users that such tools are more than just a gimmick. Google — which earlier this year reportedly issued an internal “code red” after OpenAI beat it to the release of its AI chatbot — is now flexing the power of its other, widely used software programs that can make Bard more useful.

    “These services in conjunction with one another are very, very powerful,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN ahead of the launch. “Bringing all the power of these tools together will save people time — in 20 seconds, in minutes, you can do something that would have taken maybe an hour or more.”

    Previously, Bard had been able to help with tasks like writing essay drafts or planning a friend’s baby shower based on Google’s large language model, an AI algorithm trained on vast troves of data. But now, Bard will draw on information from Google’s various other services, too. With the new extensions, Bard will now pull information from YouTube, Google Maps, Flights and Hotels by default.

    That will allow users to ask Bard things like “Give me a template for how to write a best man speech and show me YouTube videos about them for inspiration,” or for trip suggestions, complete with driving directions, according to Google. Bard users can opt to disable these extensions at any time.

    Users can also opt in to link their Gmail, Docs and Google Drive to Bard so the tool can help them analyze and manage their personal information. The tool could, for example, help with a query like: “Find the most recent lease agreement from my Drive and check how much the security deposit was,” Google said.

    The company said that users’ personal Google Workspace information will not be used to train Bard or for targeted advertising purposes, and that users can withdraw their permission for the tool to access their information at any time.

    “This is the first step in a fundamentally new capability for Bard – the ability to talk to other apps and services to provide more helpful responses,” Google said of the extensions tool. The company added that “this is a very young area of AI” and that it will continue to improve the tool based on user feedback.

    Bard is also launching a “double check” button that will allow users to evaluate the accuracy of its responses. When a user clicks the button, certain segments of Bard’s response will be highlighted to show where Google Search results either confirm or differ from what the chatbot said. The double check feature is designed to counter a common AI issue called “hallucinations,” where an AI tool confidently makes a statement that sounds real, but isn’t actually based in fact.

    “We’re constantly working on reducing those hallucinations in Bard,” Hsiao said. But in the meantime, the company wanted to create a way to address them. “You can kind of think of it as spell check, but double checking the facts.”

    Bard will now also allow one user to share a conversation with the chatbot with another person, who can then expand on the chat themselves.

    It’s still early days for Bard, which launched in March as an “experiment” and still notes on its website that the tool “may display inaccurate or offensive information that doesn’t represent Google’s views.” But this latest update offers a glimpse at how Google may ultimately seek to incorporate generative AI into its various services.


  • George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business



    New York
    CNN
     — 

    A group of famous fiction writers joined the Authors Guild in filing a class action suit against OpenAI on Wednesday, alleging the company’s technology is illegally using their copyrighted work.

    The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses.

    George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

    “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.

    The suit alleges that books created by the authors were illegally downloaded and fed into GPT systems, which could turn a profit for OpenAI by “writing” new works in the authors’ styles while the original creators get nothing. The press release cites attempts to use AI to create two new volumes in Martin’s Game of Thrones series, as well as AI-generated books available on Amazon.

    “It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the US,” Authors Guild CEO Mary Rasenberger stated in the release. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

    The class-action lawsuit joins other legal actions by organizations and individuals raising alarms over how OpenAI and other makers of generative AI systems are affecting creative works. An author told CNN in August that she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Two other authors sued OpenAI in June over the company’s alleged misuse of their works to train ChatGPT. Comedian Sarah Silverman and two authors also sued Meta and ChatGPT-maker OpenAI in July, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    But OpenAI has pushed back. Last month, the company asked a San Francisco federal court to narrow two separate lawsuits from authors, including Silverman, arguing that the bulk of the claims should be dismissed.

    OpenAI did not respond to a request for comment on Wednesday.

    “We think that creators deserve control over how their creations are used and what happens sort of beyond the point of, of them releasing it into the world,” Sam Altman, the CEO of OpenAI, told Congress in May. “I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life.”

    US lawmakers met with members of creative industries in July, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models.

    More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    But the AI issues facing creative professions don’t seem to be going away.

    “Generative AI is a vast new field for Silicon Valley’s longstanding exploitation of content providers. Authors should have the right to decide when their works are used to ‘train’ AI,” author Jonathan Franzen said in the release on Wednesday. “If they choose to opt in, they should be appropriately compensated.”


  • Baidu says its AI is in the same league as GPT-4 | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Chinese tech giant Baidu is officially taking on GPT-4.

    On Tuesday, the company unveiled ERNIE 4.0, the newest version of its artificial intelligence chatbot that it directly compared to the latest iteration of OpenAI’s ChatGPT.

    The new ERNIE Bot “is not inferior in any aspect to GPT-4,” Baidu’s billionaire CEO, Robin Li, told an audience at its annual flagship event.

    Speaking onstage, Li showed how the bot could generate a commercial for a car within minutes, solve complicated math problems and create a plot for a martial arts novel from scratch. The bot works mainly in Mandarin Chinese, its primary language. It is also able to handle queries and produce responses in English at a less advanced level.

    Li said the demonstrations showed how the bot had been “significantly improved” in terms of its understanding of queries, generation of complex responses and memory capabilities.

    While coming up with ideas for the novel, for instance, the bot was able to remember previous instructions and create sophisticated story lines by adding conflicts and characters, said Li.

    “We always complained that AI was not intelligent enough,” he quipped.

    “But today, it understands almost everything you say, and in many cases, it understands what you’re saying better than your friends or your colleagues.”

    Charlie Dai, vice president and research director of technology at Forrester, said Baidu is “the first vendor in China” to claim it could perform as well as GPT-4.

    “We still need more benchmarking evidence to prove it, but I’m cautiously optimistic that this is China’s GPT-4 moment, given its long-term investment in AI [and machine learning],” he told CNN.

    In contrast to a pre-recorded presentation in March that failed to impress investors, Li demonstrated the bot in real time.

    Investors appeared unmoved, however, with Baidu’s shares down 1.4% in Hong Kong following the presentation.

    Baidu (BIDU) has been a frontrunner in China in the race to capitalize on the excitement around generative AI, the technology that underpins systems such as ChatGPT or its successor, GPT-4.

    The Beijing-based company unveiled ERNIE Bot in March, before launching it publicly in August.

    The newest iteration will launch first to invited users, Li said. The company did not specify when it would be made available publicly.

    ERNIE Bot has quickly gained traction, racking up more than 45 million users after reaching the top of Chinese app stores at one point, according to the company. ChatGPT, which was released last November, surpassed 100 million users in its first two months, according to a March report by Goldman Sachs analysts.

    Baidu faces competition within China, from companies such as Alibaba (BABA) and SenseTime, which have also shown off their own ChatGPT-style tools.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as video and audio.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    Baidu is a market leader in China, said Dai.

    But the competition in this space “has just begun, and AI tech leaders like Alibaba … Huawei, JD Cloud, SenseTime, and Tencent all have [a] chance to take the lead,” he noted.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    But Baidu has previously touted how ERNIE can outperform ChatGPT in some instances, saying its bot had scored higher marks than OpenAI’s on some academic exams.

    The Chinese company also announced Tuesday it had updated its suite of services to integrate the latest upgrades from ERNIE. Baidu’s popular search engine is now able to use the tool to produce more specific results, while its mobile mapping app can help users book services, such as taxis, according to Li.

    By doing so, “Baidu is also the first Chinese tech leader that has made substantial progress in modernizing the majority of its products” with an AI model, said Dai.


  • Snapchat users freak out over AI bot that had a mind of its own | CNN Business




    CNN
     — 

    Snapchat users were alarmed on Tuesday night when the platform’s artificial intelligence chatbot posted a live update to its profile and stopped responding to messages.

    The Snapchat My AI feature — which is powered by the viral AI chatbot tool ChatGPT — typically offers recommendations, answers questions and converses with users. But posting a live Story (a short video of what appeared to be a wall) for all Snapchat users to see was something new: that capability is typically reserved for the platform’s human users.

    The app’s fans were quick to share their concerns on social media. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling.” Another user wrote after the tool ignored his messages: “Even a robot ain’t got time for me.”

    Turns out, this wasn’t Snapchat working to make its My AI tool even more realistic. The company told CNN on Wednesday it was a glitch. “My AI experienced a temporary outage that’s now resolved,” a spokesperson said.

    Still, the strong reaction highlighted the fears many people have about the potential risks of artificial intelligence.

    Since launching in April, the tool has faced backlash not only from parents but also from some Snapchat users, with criticisms over privacy concerns, “creepy” exchanges and the inability to remove the feature from their chat feed unless they pay for a premium subscription.

    Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends. The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.

    While some may find value in the tool, the mixed reaction hinted at the challenges companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow.


  • Huawei wants to go all in on AI for the next decade | CNN Business




    Hong Kong
    CNN
     — 

    Huawei has joined the list of companies that want to be all about artificial intelligence.

    For the first time in about 10 years, the Chinese tech and telecoms giant announced a new strategic direction on Wednesday, saying it would shift its focus to AI. Previously, the company had prioritized cloud computing and intellectual property over two successive decade-long periods.

    Meng Wanzhou, Huawei’s rotating chairwoman and chief financial officer, made the announcement in Shanghai during a company event.

    “As artificial intelligence gains steam, and its impact on industry continues to grow, Huawei’s All Intelligence strategy is designed to help all industries make the most of new strategic opportunities,” the company said in a statement.

    Meng said in a speech that Huawei was “committed to building a solid computing backbone for China — and another option for the world.”

    “Our end goal is to help meet the diverse AI computing needs of different industries,” she added, without providing details.

    Huawei’s decision follows a similar move by fellow Chinese tech giant Alibaba (BABA), announced earlier this month, to prioritize AI.

    Other companies, such as Japan’s SoftBank, have also long declared an intent to focus more on the fast-moving technology, and more businesses have jumped on the bandwagon this year due to excitement about platforms such as GPT-4.

    Meng returned to China in September 2021 after spending nearly three years under house arrest in Canada as part of an extradition battle with the United States. She and Huawei had been charged with alleged bank fraud and evasion of economic sanctions against Iran.

    The executive, who is also the daughter of Huawei founder Ren Zhengfei, was able to leave after reaching an agreement with the US Department of Justice and ultimately having her charges dismissed.

    Meng began her role as the rotating chairperson of the company in April and is expected to stay in the position for six months.

    News of Huawei’s strategic update came the same day the company was mentioned in allegations lodged by China against the United States.

    In a statement posted Wednesday on Chinese social network WeChat, China’s Ministry of State Security accused Washington of infiltrating Huawei servers nearly 15 years ago.

    “With its powerful arsenal of cyberattacks, the United States intelligence services have carried out surveillance, theft of secrets and cyberattacks against many countries around the world, including China, in a variety of ways,” the ministry said.

    It alleged that the US National Security Agency (NSA), in particular, had “repeatedly conducted systematic and platform-based attacks on China in an attempt to steal China’s important data resources.”

    Huawei declined to comment on the allegations, while the NSA did not immediately respond to a request for comment outside regular US business hours.

    The claims are especially notable because US officials have long suspected the company of spying on the networks on which its technology operates, using those suspicions as grounds to restrict trade with the company. Huawei has vehemently denied the claims, saying it operates independently of the Chinese government.

    In 2019, Huawei was added to the US “entity list,” which restricts exports to select organizations without a US government license. The following year, the US government expanded on those curbs by seeking to cut Huawei off from chip suppliers that use US technology.

    In recent weeks, Huawei has added to US-China tensions again after launching a new smartphone that represents an apparent technological breakthrough.

    Huawei launched the Mate 60 Pro, its latest flagship device, last month, prompting a US investigation. Analysts who have examined the phone have said it includes a 5G chip, suggesting Huawei may have found a way to overcome American export controls.

    — Mengchen Zhang contributed to this report.


  • Taiwan’s Foxconn to build ‘AI factories’ with Nvidia | CNN Business



    Taipei
    CNN
     — 

    Taiwan’s Foxconn says it plans to build artificial intelligence (AI) data factories with technology from American chip giant Nvidia, as the electronics maker ramps up efforts to become a major global player in electric car manufacturing.

    Foxconn Chairman Young Liu and Nvidia CEO Jensen Huang jointly announced the plans on Wednesday in Taipei. The duo said the new facilities using Nvidia’s chips and software will enable Foxconn to better utilize AI in its electric vehicles (EV).

    “We are at the beginning of a new computing revolution,” Huang said. “This is the beginning of a brand new way of doing software — using computers to write software that no humans can.”

    Large computing systems powered by advanced chips will be able to develop software platforms for the next generation of EVs by learning from everyday interactions, they said.

    “Foxconn is turning from a manufacturing service company into a platform solution company,” Liu said. “In three short years, Foxconn has displayed a remarkable range of high-end sedan, passenger crossover, SUV, compact pick-up, commercial bus and commercial van.”

    Best known as the assembler of Apple’s iPhones, Foxconn envisages a similar business model for EVs. It doesn’t sell the vehicles under its own brand. Instead, it will build them for clients in Taiwan and globally.

    In 2021, Foxconn unveiled its first three EV models: two passenger cars and a bus. They were followed by additional models last year and two new ones — Model N, a cargo van, and Model B, a compact SUV — during Foxconn’s tech day on Wednesday.

    Its electric buses started running in the southern Taiwanese city of Kaohsiung last year, while its first electric car, sold under the N7 brand by Taiwanese automaker Luxgen, is expected to begin deliveries on the island from January 2024.

    Foxconn has entered a competitive industry.

    Global sales of EVs, including purely battery powered vehicles and hybrids, exceeded 10 million units last year, up 55% from 2021, according to the International Energy Agency. Nearly 14 million electric cars will be sold in 2023, it projected.

    Foxconn, which is officially known as the Hon Hai Technology Group, has been expanding its business by entering new industries such as EVs, digital health and robotics.

    Analysts say its entry into the EV space is a “logical diversification.”

    Smartphones are “a very saturated market already, and the room to grow in the … industry is getting [smaller],” said Kylie Huang, a Taipei-based analyst at Daiwa. “If they can really tap into the EV business, I do think that [they] could become influential in the next couple of years.”

    During last year’s tech day, Liu told reporters that the company hoped to build 5% of the world’s electric cars by 2025. It aims to eventually produce 40% to 45% of the world’s EVs.

    But its foray into the industry hasn’t been entirely smooth.

    Last year, Foxconn bought a factory from Lordstown Motors in Ohio that used to make small cars for General Motors. That partnership ended in June, with the American car company filing for bankruptcy protection and announcing a lawsuit against Foxconn.

    Lordstown Motors accused Foxconn of “fraud” and failing to follow through on investment promises, while Foxconn dismissed the suit as “meritless” and criticized the company for making “false comments and malicious attacks.”

    Still, it’s clear Foxconn is leaning into its expanded ambitions, including hiring two new chief strategy officers for its EV and chips businesses.

    Chiang Shang-yi is a Taiwanese semiconductor industry veteran who helped TSMC become a global foundry powerhouse, while Jun Seki, a former vice chief operating officer at Nissan Motor, leads the EV unit.

    In May, Foxconn announced a new partnership with Infineon Technologies, a German company that specializes in automotive semiconductor chips, to establish a new research center in Taiwan.

    Bill Russo, founder of Shanghai-based consulting firm Automobility, said Foxconn has the advantage of coming from a consumer electronics background, which could allow it to come up with more innovative EV products compared with traditional automakers.

    “The biggest problem with legacy automakers is that they have so much sunk investment in a carryover platform, that they typically want to start not with a clean sheet of paper, but with a highly constrained set of requirements,” he said. “Those carryover technologies bring constraints to how you think about vehicles.”

    “When Tesla started, it started by saying, ‘I’m going to challenge all of that, I’m going to blow up the basic architecture of a car and simplify it greatly,’” he added.

    “I think that’s the advantage that a technology company has … And I think that’s the way Foxconn will come at this.”

    Hanna Ziady contributed to this report.


  • Schools are teaching ChatGPT, so students aren’t left behind | CNN Business



    New York
    CNN
     — 

    When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.

    But now the list, which is updated by educators at both small and large US and international universities, is considerably different: Schools are encouraging and even teaching students how to best use these tools.

    “Earlier on, we saw a kneejerk reaction to AI by banning it going into spring semester, but now the talk is about why it makes sense for students to use it,” Eaton, an administrator at Rhode Island-based College Unbound, told CNN.

    He said his growing list continues to be discussed and shared in popular AI-focused Facebook groups, such as Higher Ed Discussions of Writing and AI, and the Google group AI in Education.

    “It’s really helped educators see how others are adapting to and framing AI in the classroom,” Eaton said. “AI is still going to feel uncomfortable, but now they can go in and see how a university or a range of different courses, from coding to sociology, are approaching it.”

    With more experts expecting the continued adoption of artificial intelligence, professors now fear that ignoring or discouraging its use would be a disservice to students and leave many behind when they enter the workforce.

    Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists and passed exams at esteemed universities. The technology, and similar tools such as Google’s Bard, is trained on vast amounts of online data in order to generate responses to user prompts. While they gained traction among users, the tools also raised some concerns about inaccuracies, cheating, the spreading of misinformation and the potential to perpetuate biases.

    According to a study conducted by higher education research group Intelligent.com, about 30% of college students used ChatGPT for schoolwork this past academic year and it was used most in English classes.

    Jules White, an associate professor of computer science at Vanderbilt University, believes professors should be explicit in the first few days of school about the course’s stance on using AI, and that the policy should be included in the syllabus.

    “It cannot be ignored,” he said. “I think it’s incredibly important for students, faculty and alumni to become experts in AI, because it will be so transformative and in demand across every industry, so we provide the right training.”

    Vanderbilt is among the early leaders taking a strong stance in support of generative AI by offering university-wide training and workshops to faculty and students. A three-week 18-hour online course taught by White this summer was taken by over 90,000 students, and his paper on “prompt engineering” best practices is routinely cited among academics.

    “The biggest challenge is with how you frame the instructions, or ‘prompts,’” he said. “It has a profound impact on the quality of the response and asking the same thing in various ways can get dramatically different results. We want to make sure our community knows how to effectively leverage this.”

    Prompt engineering jobs, which typically require basic programming experience, can pay up to $300,000.

    Although White said concerns around cheating still exist, he believes students who want to plagiarize can still turn to other methods such as Wikipedia or Google searches. Instead, students should be taught that “if they use it in other ways, they will be far more successful.”

    Diane Gayeski, a professor of communications at Ithaca College, said she plans to incorporate ChatGPT and other tools in her fall curriculum, similar to her approach in the spring. She previously asked students to collaborate with the tool to come up with interview questions for assignments, write social media posts and critique the output based on the prompts given.

    “My job is to prepare students for PR, communications and social media managers, and people in these fields are already using AI tools as part of their everyday work to be more efficient,” she said. “I need to make sure they understand how they work, but I do want them to cite when ChatGPT is being used.”

    Gayeski added that as long as there is transparency, there should be no shame in adopting the technology.

    Some schools are hiring outside experts to teach both faculty and students about how to use AI tools. Tyler Tarver, a former high school principal who now teaches educators about tech tool strategies, said he’s made over 50 speeches at schools and conferences across Texas, Arkansas and Illinois over the past few months. He also offers an online three-hour training for educators.

    “Teachers need to learn how to use it because even if they never use it, their students will,” Tarver said.

    Tarver said that he teaches students, for example, how the tools can be used to catch grammar mistakes, and how teachers can use them to assist with grading. “It can cut down on teacher bias,” Tarver said.

    He argues teachers may grade students a certain way even after they’ve improved over time. By running an assignment through ChatGPT and asking it to grade the sentence structure on a scale of one to 10, the response could “serve as a second pair of eyes to make sure they’re not missing anything,” Tarver said.

    “That shouldn’t be the final grade, and teachers shouldn’t use it to cheat or cut corners either, but it can help inform grading,” he said. “The bottom line is that this is like when the car was invented. You don’t want to be the last person in the horse and buggy.”


  • YouTube unveils a slew of new AI-powered tools for creators | CNN Business




    CNN
     — 

    YouTube on Thursday unveiled a slew of new artificial intelligence-powered tools to help creators produce videos and reach a wider audience on the platform, as companies race to incorporate buzzy generative AI technology directly into their core products.

    “We want to make it easier for everyone to feel like they can create, and we believe generative AI will make that possible,” Neal Mohan, YouTube’s CEO, told reporters Thursday during the company’s annual Made On YouTube product event.

    “AI will enable people to push the boundaries of creative expression by making the difficult things simple,” Mohan added. He said YouTube is trying to bring “these powerful tools” to the masses.

    The video platform, under the Alphabet-Google umbrella, teased a new generative AI feature dubbed Dream Screen specifically for its short-form video arm and TikTok competitor, YouTube Shorts. Dream Screen is an experimental feature that lets creators add AI-generated video or image backgrounds to their vertical videos.

    To use Dream Screen, creators can type their idea for a background as a prompt and the platform will do the rest. A user, for example, could create a background that makes it look like they are in outer space or on a beach where the sand is made out of jelly beans, per demos of the tool shared on Thursday.

    Dream Screen is being introduced to select creators and will be rolled out more broadly next year, the company said.

    YouTube also unveiled new AI-powered tools that creators can access to help brainstorm or draft outlines for videos or search for specific music using descriptive phrases. YouTube said it was bringing an AI-powered dubbing tool that will let users share their videos in different languages.


    Alan Chikin Chow, 26, a content creator based in Los Angeles who recently hit 30 million subscribers on YouTube, told CNN that he is most excited about using the new AI-powered dubbing tool for his comedy videos. Chikin Chow currently boasts the title of the most-watched YouTube Shorts creator in the world.

    “I think global content is the future,” Chikin Chow told CNN. “If you look at the trends of our recent generation, the things that have really impacted and moved culture are ones that are global,” he added, citing the Korean smash-hit TV series “Squid Game” as one example.

    Using the AI-powered dubbing features, he said he hopes to reach audiences in new corners of the world that might not otherwise be able to engage with his content.


    Chikin Chow added that he’s also excited to use the new editing tools to help save time.

    The rise of generative AI has animated the tech sector and the broader public, making it Silicon Valley’s latest buzzword since the launch of OpenAI’s ChatGPT service late last year.

    Some industry watchers and AI skeptics have argued that powerful new AI tools carry potential dangers, such as making it easier to spread misinformation via deepfake images, or perpetuate biases at a larger scale. Many creative professionals — whose works are often swept up into the datasets required to train and power AI tools — are also raising the alarm over potential intellectual property rights issues.

    And some prominent figures inside and outside the tech industry even say AI could potentially result in the “extinction” of civilization, comparing its risk to that of “nuclear war.”

    Despite the frenzy AI has caused, Chikin Chow told CNN that he ultimately views it as a “collaborator” and a “supplement” to help propel his creative work forward.

    “I think that the people who are able to take change and move with it are the ones that are going to be successful long term,” Chikin Chow said.


  • US escalates tech battle by cutting China off from AI chips | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong/Washington
    CNN
     — 

    The Biden administration is reducing the types of semiconductors that American companies will be able to sell to China, citing the desire to close loopholes in existing regulations announced last year.

    On Tuesday, the US Commerce Department unveiled new rules that further tighten a sweeping set of export controls first introduced in October 2022.

    The updated rules “will increase effectiveness of our controls and further shut off pathways to evade our restrictions,” US Commerce Secretary Gina Raimondo said in a statement. “We will keep working to protect our national security by restricting access to critical technologies, vigilantly enforcing our rules, while minimizing any unintended impact on trade flows.”

    Advanced artificial intelligence chips, such as Nvidia’s H800 and A800 products, will be affected, according to a regulatory filing from the US company.

    The regulations also expand export curbs beyond mainland China and Macao to 21 other countries with which the United States maintains an arms embargo, including Iran and Russia.

    The measures, which have affected the shares of major American chipmakers, are set to take effect in 30 days.

    The original rules had sought to hamper China’s ability to procure advanced computing chips and manufacture advanced weapons systems. Since then, senior administration officials have suggested they needed to be adjusted due to technological developments.

    Raimondo, who visited China in August, said the administration was “laser-focused” on slowing the advancement of China’s military. She emphasized that Washington had opted not to go further in restricting chips for other applications.

    Chips used in phones, video games and electric vehicles were purposefully carved out from the new rules, according to senior administration officials.

    But these assurances are unlikely to placate Beijing, which has vowed to “win the battle” in core technologies in order to bolster the country’s position as a tech superpower.

    China’s Foreign Ministry criticized the Biden administration’s new rules Monday, before they were officially unveiled.

    “The US needs to stop politicizing and weaponizing trade and tech issues and stop destabilizing global industrial and supply chains,” spokesperson Mao Ning told a press briefing. “We will closely follow the developments and firmly safeguard our rights and interests.”

    As part of ongoing dialogue established by Raimondo and other US officials with their Chinese counterparts, Beijing was informed of the impending updates, according to a senior administration official.

    “We let the Chinese know for clarity that these rules were coming, but there was no negotiation with them,” the official told reporters.

    The tech rivalry between the world’s two largest economies has been heating up. In recent months, the United States has enlisted its allies in Europe and Asia in restricting sales of advanced chipmaking equipment to China.

    In July, Beijing hit back by imposing its own curbs on exports of germanium and gallium, two elements essential for making semiconductors.

    Shares of US chipmakers fell Tuesday following the announcement of new export controls.

    Nvidia’s (NVDA) stock closed down 4.7%, while Intel (INTC) slipped 1.4%. AMD (AMD) shares ended 1.2% lower.

    In its filing, Nvidia said the rules imposed new licensing requirements for exports to China and other markets such as Saudi Arabia, the United Arab Emirates and Vietnam.

    The company said its A800 chip, which was reportedly created for Chinese customers in order to circumvent last year’s restrictions, would be among the components affected.

    However, “given the strength of demand for our products worldwide, we do not anticipate that the additional restrictions will have a near-term meaningful impact on our financial results,” Nvidia said.

    The broader US chipmaking industry is also examining the impact of the new rules.

    The Semiconductor Industry Association said in a statement Tuesday that while it recognized the need to protect national security, “overly broad, unilateral controls risk harming the US semiconductor ecosystem without advancing national security as they encourage overseas customers to look elsewhere.”

    “We urge the administration to strengthen coordination with allies to ensure a level playing field for all companies,” added the group, which represents 99% of the US chip sector.

    The measures are also being reviewed in Europe. On Tuesday, ASML, the Dutch chipmaking equipment manufacturer, said it was evaluating the implications of the rules, though it did not expect them “to have a material impact on our financial outlook for 2023.”

    During a call Wednesday about the company’s third-quarter results, ASML chief executive Peter Wennink said the updated export restrictions would affect between 10% and 15% of the firm’s sales to China.

    On Tuesday, the US Department of Commerce added 13 Chinese entities to a list of firms with which US companies may not do business for national security reasons.

    They include two Chinese startups, Biren Technology and Moore Thread Intelligent Technology, and their subsidiaries.

    The department alleges that these companies are “involved in the development of advanced computing chips that have been found to be engaged in activities contrary to US national security.”

    CNN has reached out to Biren and Moore Thread for comment.

    — Anna Cooban contributed reporting.


  • Meet your new AI tutor | CNN Business




    CNN
     — 

    Artificial intelligence often induces fear, awe or some panicked combination of both for its impressive ability to generate unique human-like text in seconds. But its implications for cheating in the classroom — and its sometimes comically wrong answers to basic questions — have left some in academia discouraging its use in school or outright banning AI tools like ChatGPT.

    That may be the wrong approach.

    More than 8,000 teachers and students will test education nonprofit Khan Academy’s artificial intelligence tutor in the classroom this upcoming school year, toying with its interactive features and funneling feedback to Khan Academy if the AI botches an answer.

    The chatbot, Khanmigo, offers individualized guidance to students on math, science and humanities problems; a debate tool with suggested topics like student debt cancellation and AI’s impact on the job market; and a writing tutor that helps the student craft a story, among other features.

    First launched in March to an even smaller pilot program of around 800 educators and students, Khanmigo also allows students to chat with a growing list of AI-powered historical figures, from George Washington to Cleopatra and Martin Luther King Jr., as well as literary characters like Winnie the Pooh and Hamlet.

    Khan Academy’s Chief Learning Officer Kristen DiCerbo told CNN that Khanmigo helps address a problem she’s witnessed firsthand while observing an Arizona classroom: when students learn something new, they often need more individualized help than one teacher can provide all at once.

    As DiCerbo chatted with AI-powered Dorothy from “The Wonderful Wizard of Oz” during a demonstration of the technology to CNN, she explained how users can rate Khanmigo’s responses in real-time, providing feedback if and when Khanmigo makes mistakes.

    “There is going to be a big world out there where people can just get the answers to their homework problems, where they can just get an essay written for them. That’s true now too on the Internet,” DiCerbo said. “We’re trying to focus on the social good, but we need to be aware of the threats and the risks so that we know how to mitigate those.”

    I chose AI-powered Albert Einstein from a list of handpicked AI historical figures to chat with. AI-Einstein told me his greatest accomplishment was both his theory of relativity and inspiring curiosity in others, before tossing me a question Socrates-style about what sparks curiosity in my own life.

    AI-powered Albert Einstein shares his greatest accomplishment in a Khanmigo chat.

    Khanmigo developers programmed the AI figures not to comment on events after their lifetime. As such, AI-Einstein wouldn’t comment on the historical accuracy of his role in Christopher Nolan’s “Oppenheimer,” despite my asking.

    Khanmigo is trained not to comment on events that occur after the lifetime of the historical figure it is imitating.

    Some figures from the list are not as widely praised as Einstein. For instance, Thomas Jefferson, the third US president and primary draftsman of the Declaration of Independence, has faced renewed criticism in recent years for owning 600-plus enslaved people throughout his lifetime.

    Khanmigo’s Thomas Jefferson will not shy away from scrutiny. He wrote back to my inquiry about his views on slavery in part: “As Thomas Jefferson, my views on slavery were fraught with contradiction. On one hand, I publicly expressed my belief that slavery was morally wrong and a threat to the survival of the new American nation […] Yet I was a lifelong slaveholder, owning over 600 enslaved people throughout my lifetime.”

    The purpose of the tool is to engage students through conversation, DiCerbo said, an altogether different experience than passively reading about someone’s life on Wikipedia.

    “The Internet can be a pretty scary place, and it can be a pretty good place. I think that AI is the same,” DiCerbo said. “There could be potential bad uses and misuses, and it can be a pretty powerful learning tool.”

    After gaining early access to ChatGPT-creator OpenAI’s newest and most capable large language model, GPT-4, Khan Academy trained GPT-4 on its own learning content. The company also implemented guardrails to keep Khanmigo’s tone encouraging and prevent it from giving students the answer to the question they’re struggling with.

    For teachers, Khanmigo also offers assistance to create lesson plans and rubrics, identifies struggling students based on their performance in Khan Academy activities and gives teachers access to student chat history.

    “I’m learning new ways to solve the problems as well,” said Leo Lin, a science teacher at Khan Lab School in California and an early tester of Khanmigo. Khan Lab School is a separate nonprofit founded by Khan Academy CEO Sal Khan.

    Khanmigo has emerged at a crossroads in academia, with some educators leaning into generative AI and others recoiling. New York City Public Schools, Seattle Public Schools and the Los Angeles Unified School District, among other academic institutions, have all made efforts to either ban or restrict ChatGPT on district networks and devices in the past.

    A lack of information about AI may be exacerbating some educator worries: While 72% of K-12 teachers, principals and district leaders say that teaching students how to use AI tools is at least “fairly important,” 87% said they’ve received zero professional instruction about incorporating AI into their work, according to an EdWeek Research Center survey from June.

    Khan Academy’s in-the-works AI learning course “AI 101 for Teachers,” created in partnership with Code.org, ETS and the International Society for Technology in Education, offers a path toward AI literacy among teachers.

    Although Khanmigo is still in its pilot phase, the AI-powered teaching assistant is already used by more than 10,000 additional users across the United States beyond the pilot program. These users agreed to make a donation to Khan Academy to test the service.

    An AI “tutor” like Khanmigo is not immune to the flubs all large language models face: so-called hallucinations.

    “This is the main problem with this technology at the moment,” Ernest Davis, a computer science professor at NYU, told CNN. “It makes things up.”

    Khanmigo is most commonly used for math tutoring, according to DiCerbo. Khanmigo shines best when coaching students on how to work through a problem, offering hints, encouragement and additional questions designed to help students think critically. But currently, its own struggles in performing calculations can sometimes hinder its attempts to help.

    In the “Tutor me: Math and science” activity available to students, Khanmigo told me three times that my answer to 10,332 divided by 4 was incorrect, before “correcting” me by sending back the very same number I had submitted.

    In the same “Tutor me” activity, I asked Khanmigo to find the product of five numbers, some integers and some decimals: 97, 117, 0.564322338, 0.855640047, and 0.557680043.

    As I did the final multiplication step, Khanmigo congratulated me for submitting the wrong answer. It wrote: “When you multiply 5479.94173 by 0.557680043, you get approximately 33.0663. Well done!”

    The correct answer is about 3,056.
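
    The arithmetic in these two exchanges is easy to verify. As a quick sanity check (not part of Khanmigo), a few lines of Python reproduce the correct answers:

    ```python
    # Check the two calculations from the Khanmigo session described above.

    # Division example: 10,332 divided by 4
    assert 10332 / 4 == 2583.0

    # Product of the five numbers from the "Tutor me" activity
    factors = [97, 117, 0.564322338, 0.855640047, 0.557680043]
    product = 1.0
    for f in factors:
        product *= f

    # The first four factors multiply to roughly 5479.94, matching the
    # intermediate value in the chat; the full product is about 3,056,
    # not the 33.0663 that Khanmigo reported.
    print(round(product))  # prints 3056
    ```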

    Khanmigo makes a math error in a conversation with CNN's Nadia Bidarian.

    Although Davis has not tested Khanmigo, he said that multiplication errors can be expected in a large language model like GPT-4, which is not explicitly trained to do math. Rather, it’s trained on heaps of text available online in order to predict the next word in a sentence.

    As such, niche math problems and concepts with fewer online examples can be harder for the model to predict.

    “Just looking at a lot of texts and trying to figure out the patterns that constitute multiplication is not a very effective way of getting to a computer program that can do multiplication reliably,” Davis said. “And so it doesn’t.”

    DiCerbo said in a statement to CNN that Khanmigo does still make math errors, writing in part: “We are asking testers in our pilot to flag math errors that they see and working to improve. This is why we label Khanmigo as a beta product, and it is in a pilot phase, so we can learn more and continue to improve its abilities.”

    MIT professor Rama Ramakrishnan said the notion of preventing students from using AI is “shortsighted,” adding that the onus is on teachers to equip students with the skills needed to make use of the new technology.

    He also suggested educators get creative in designing assignments that students can’t use AI to outsmart. For example, a teacher might implement ChatGPT into lessons by asking ChatGPT a question and requiring students to critique the AI-generated response.

    “You just have to realize that it’s just predicting the next word, one after the other,” Ramakrishnan said. “It’s not trying to come up with a truthful answer to your question, just a plausible answer. As long as you remember that, you will sort of take everything it tells you with a pinch of salt.”


  • How companies are embracing generative AI for employees…or not | CNN Business



    New York
    CNN
     — 

    Companies are struggling to deal with the rapid rise of generative AI, with some rushing to embrace the technology as workflow tools for employees while others shun it – at least for now.

    As generative artificial intelligence – the technology that underpins ChatGPT and similar tools – seeps into seemingly every corner of the internet, large corporations are grappling with whether the increased efficiency it offers outweighs possible copyright and security risks. Some companies are enacting internal bans on generative AI tools as they work to better understand the technology, and others have already begun to introduce the trendy tech to employees in their own ways.

    Many prominent companies have entirely blocked internal ChatGPT use, including JPMorgan Chase, Northrop Grumman, Apple, Verizon, Spotify and Accenture, according to AI content detector Originality.AI, with several citing privacy and security concerns. Business leaders have also expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output by the tool elsewhere.

    When users input information into these tools, “[y]ou don’t know how it’s then going to be used,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN in March. “That raises particularly high concerns for companies.” As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”

    But the corporate hesitancy to welcome generative AI could be temporary.

    “Companies that are on the list of banning generative AI also have working groups internally that are exploring the usage of AI,” Jonathan Gillham, CEO of Originality.AI, told CNN, highlighting how companies in more risk-averse industries have been quicker to take action against the tech while figuring out the best approach for responsible usage. “Giving all of their staff access to ChatGPT and saying ‘have fun’ is too much of an uncontrolled risk for them to take, but it doesn’t mean that they’re not saying, ‘holy crap, look at the 10x, 100x efficiency that we can unlock when we find out how to do this in a way that makes all the stakeholders happy’” in departments such as legal, finance and accounting.

    Among media companies that produce news, Insider editor-in-chief Nicholas Carlson has encouraged reporters to find ways to use AI in the newsroom. “A tsunami is coming,” he said in April. “We can either ride it or get wiped out by it. But it’s going to be really fun to ride it, and it’s going to make us faster and better.” The organization discouraged staff from putting source details and other sensitive information into ChatGPT. Newspaper chain Gannett, meanwhile, paused its use of an AI tool called LedeAI to write high school sports stories after the technology made several mistakes in stories published in The Columbus Dispatch in August.

    Of the companies currently banning ChatGPT, some are discussing future usage once security concerns are addressed. UBS estimated that ChatGPT reached 100 million monthly active users in January, just two months after its launch.

    That rapid growth initially left large companies scrambling to find ways to integrate it responsibly. That process is slow for large companies. Meanwhile, website visits to ChatGPT dropped for the third month in a row in August, creating pressure for large tech companies to sustain popular interest in the tools and to find new enterprise applications and revenue models for generative AI products.

    “We at JPMorgan Chase will not roll out genAI until we can mitigate all of the risks,” Larry Feinsmith, JPM’s head of global tech strategy, innovation, and partnerships said at the Databricks Data + AI Summit in June. “We’re excited, we’re working through those risks as we speak, but we won’t roll it out until we can do this in an entirely responsible manner, and it’s going to take time.” Northrop Grumman said it doesn’t allow internal data on external platforms “until those tools are fully vetted,” according to a March report from the Wall Street Journal. Verizon also told employees in a public address in February that ChatGPT is banned “[a]s it currently stands” due to security risks but that the company wants to “safely embrace emerging technology.”

    “They’re not just waiting to sort things out. I think they’re actively working on integrating AI into their business processes separately, but they’re just doing so in a way that doesn’t compromise their information,” Vern Glaser, Associate Professor of Entrepreneurship and Family Enterprise at the University of Alberta, told CNN. “What you’ll see with a lot of the companies that will be using AI strategies, particularly those who have their own unique content, they’re going to end up creating their custom version of generative AI.”

    Several companies – and even ChatGPT itself – seem to have already found their own answers to the corporate world’s genAI security dilemma.

    Walmart introduced an internal “My Assistant” tool for 50,000 corporate employees that helps with repetitive tasks and creative ideas, according to an August LinkedIn post from Cheryl Ainoa, Walmart’s EVP of New Businesses and Emerging Technologies, and Donna Morris, Chief People Officer. The tool is intended to boost productivity and eventually help with new worker orientation, according to the post.

    Consulting giants McKinsey, PwC and EY are also welcoming genAI through internal, private methods. PwC announced a “Generative AI factory” and launched its own “ChatPwC” tool in August powered by OpenAI tech to help employees with tax questions and regulations as part of a $1 billion investment for AI capability scaling.

    McKinsey introduced “Lilli” in August, a genAI solution where employees can pose questions, with the system then aggregating all of the firm’s knowledge, scanning the data to identify relevant content, summarizing the main points and offering experts. “With Lilli, we can use technology to access and leverage our entire body of knowledge and assets to drive new levels of productivity,” Jacky Wright, a McKinsey senior partner and chief technology and platform officer, wrote in the announcement.

    EY is investing $1.4 billion in the technology, including “EY.ai EYQ,” an in-house large language model, and AI training for employees, according to a September press release.

    Tools like My Assistant, ChatPwC and Lilli address some of the corporate concerns surrounding genAI systems through custom adaptations of the technology, offering employees a private, closed alternative that capitalizes on genAI’s ability to increase efficiency while curbing the risk of copyright or security leaks.

    The launch of ChatGPT Enterprise may also help quell some fears. The version of OpenAI’s tool announced in August is built specifically for businesses, promising “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for companies looking to jump on the generative AI bandwagon, according to a company blog post.

    The highly-anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    In response to the security concerns raised by many companies, namely that proprietary information entered into ChatGPT could potentially emerge as an output by the tool elsewhere, OpenAI’s announcement blog post for ChatGPT Enterprise states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

    It is still unclear whether the new tools will be enough to convince corporate America that it is time to fully embrace generative AI, though experts agree the tech’s inevitable entry into the workplace will take time and strategy.

    “I don’t think it’s that companies are against AI and against machine learning, per se. I think most companies are going to be trying to use this type of technology, but they have to be careful with it because of the impacts on intellectual property,” Glaser said.


  • Arm’s mega IPO could be just around the corner, a year after the biggest chip deal in history fell apart | CNN Business



    New York
    CNN
     — 

    A hotly anticipated IPO for a company that designs chips for 99% of the world’s smartphones is just around the corner, after the firm filed paperwork Monday to go public.

    Arm is a British tech company that architects power-sipping microchips for phones and tablets and licenses them to CPU makers, including Apple and Samsung. The company was public until 2016, when Japan’s Softbank bought it for $32 billion.

    Softbank tried to offload Arm to Nvidia for $40 billion, in what would have been the biggest chip deal of all time. But global antitrust regulators put a stop to it, and the deal fell apart in February 2022.

    Arm was a hot commodity for decades while the smartphone business was booming. But smartphone sales have subsided recently, as customers opt to keep their phones for longer and new tech features have become less enticing to consumers.

    The company, in its regulatory filing, said sales slipped 1% to $2.7 billion in the year that ended March 31, 2023. In the following quarter, which ended in June, sales fell 2.5%.

    Still, Arm has piqued the interest of tech investors who are looking to catch the AI wave. Softbank CEO Masayoshi Son has touted Arm as an AI company that could have “exponential growth.” He promised ChatGPT-like services would eventually be offered on Arm-designed machines.

    In its IPO filing, Arm said the company “will be central” to the transition to AI.

    “Arm CPUs already run AI and [machine learning] workloads in billions of devices, including smartphones, cameras, digital TVs, cars and cloud data centers,” the company said. “In the emerging area of large language models, generative AI and autonomous driving, there will be a heightened emphasis on the low power acceleration of these algorithms.”

    But Son and Arm’s AI promises may overstate the company’s potential, at least somewhat. Arm-based chips have appeared in some gadgets beyond smartphones and tablets, such as servers that are less power-hungry. But Arm said it does not make AI chips and is not a direct competitor to Nvidia and others that make chips that are purpose-built for AI. Nvidia’s stock has exploded more than 200% this year.

    Arm did not list the number of shares it planned to sell, so its valuation could not yet be determined. But Reuters reported Softbank is looking to roughly double its investment from seven years ago, seeking a $60 billion to $70 billion valuation for Arm when it IPOs, likely next month.

    Softbank also this week bought the 25% stake in Arm that it did not own directly but that had been held by the Saudi Vision Fund, which Softbank manages. That purchase valued Arm at $64 billion, according to the Financial Times.


  • Apple Watch’s new gesture control feature will have everyone tapping the air | CNN Business




    CNN
     — 

    You’re about to see people in public tapping two fingers together in the air.

    Over the past few days, I’ve been taking phone calls, playing music and scrolling through widgets on the new Apple Watch Series 9 without ever touching the device. I’ve used it to silence my watch’s alarm in the morning, stop timers and open a notification while carrying too many bags.

    It may sound like a gimmick — and it most certainly feels strange to do it in public — but considering the small size of the Apple Watch screen, the tool offers an effective hands-free way to interact with the device.

    Apple’s latest lineup of smartwatches, the Watch Series 9 and high-end Ultra 2, features a new gesture tool called Double Tap, which lets users tap their index finger and thumb together twice to control the device. The gesture can also scroll through widgets, much like turning the digital crown.

    The feature isn’t entirely new; the previous generation of Apple Watch Ultra was capable of similar pinch-and-clench gestures via its Assistive Touch accessibility tool. But Apple’s decision to bring a feature like this to the forefront hints at an increasingly touch-free future. It also comes three months after the company unveiled the Vision Pro mixed reality headset, which will launch next year, with a similar finger tap control.

    Double Tap works by combining data from the latest Apple Watch’s accelerometer, gyroscope and optical heart rate sensor, which looks for disruptions in blood flow when the fingers are pressed together. That data is processed by a new machine learning algorithm running on a faster neural engine, specialized hardware that handles AI and machine learning tasks.
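
    Apple has not published how that classification works, so any concrete sketch is speculative. As a purely hypothetical illustration of the underlying idea (spotting two brief motion spikes close together in a sensor stream; the function name and thresholds below are invented, and the real system relies on a trained model rather than fixed rules), a toy detector might look like this:

    ```python
    # Toy illustration only: treat a "double tap" as two accelerometer-magnitude
    # spikes that occur within a short time window. Apple's actual system fuses
    # accelerometer, gyroscope and heart rate data with a machine learning
    # model; the thresholds here are arbitrary.

    def detect_double_tap(samples, threshold=2.0, min_gap=3, max_gap=25):
        """samples: accelerometer magnitudes sampled at a fixed rate.
        Returns True if two spikes above `threshold` occur between
        `min_gap` and `max_gap` samples apart."""
        spikes = [i for i, s in enumerate(samples) if s > threshold]
        return any(
            min_gap <= b - a <= max_gap
            for a in spikes
            for b in spikes
        )

    # A quiet signal with two brief spikes about 10 samples apart
    signal = [0.1] * 5 + [3.2] + [0.1] * 9 + [3.0] + [0.1] * 5
    print(detect_double_tap(signal))  # prints True
    ```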

    While the concept is similar, gesture controls are different on the Vision Pro, which will track users’ eyes and hand movements. Apple told CNN it added gesture control to the headset because it needed a different, seamless interface for users to interact with, whereas Double Tap is more about simplifying the Apple Watch experience.

    When the Apple Watch’s display is turned on, the device automatically knows to respond when it senses the fingers are touched together. It essentially works as a “yes” or “accept” button; that means if a call comes through, you can Double Tap to accept it (covering the watch with your full hand, however, will silence it quickly). If a song is playing, you can pause it by double tapping, and then again to start it.

    Although you can subtly flick on the display and do the gesture close to your body, trying to conceal the movement when around other people, I found it works much better when it’s raised a bit higher. This, however, makes the action more obvious — and it’s something that will take a little getting used to seeing in person.

    “This is also about social acceptance. At the moment, I find the idea of people making this gesture more often than not in public a bit funny. But time will tell if users find it acceptable,” said Annette Zimmerman, an analyst at Gartner Research. “I think Apple is very use-case driven and focuses on user feedback on things they could improve.”

    Similarly, it took a while for people to get used to the design of Apple’s AirPods when they were announced in 2016; many criticized how they looked dangling out of users’ ears. Now they’ve become part of modern culture.

    Other learning curves exist with the Double Tap feature. Because I am right-handed and wear an Apple Watch on my left wrist, tapping my left fingers together to trigger the control takes an extra second or two of mental coordination.

    The future of hands-free devices

    The new Apple Watch Series 9 can be controlled by tapping two fingers together.

    Apple isn’t the only tech company developing gesture controls like this. Samsung TVs, some smartphones and Microsoft’s mixed reality headset all incorporate some hand gesture functionality. But this is Apple’s biggest push to date, and adding it to a flagship device like the Apple Watch will soon put all eyes on the concept of hand gestures.

    “It’s a great move by Apple as it differentiates the company from other brands when it comes to innovation and ease of usability. It also shows Apple’s commitment in the fields of artificial intelligence,” said Sachin Mehta, senior analyst at tech intelligence firm ABI Research. “The new double tap gesture is not a surprise as Apple keeps on developing a unified and intuitive user experience across its product line up. It will cement the Apple Watch as the smartwatch to have.”

    It works differently on the Vision Pro, which will track a user’s eyes and hand movements to enable pinching and swiping controls. The headset needed a different user interface for users to interact with it, and gestures provide that control even when the face is covered by the hardware.

    Further showing how Apple is thinking about gesture control long term, the company recently filed patents focused on gesture controls, including for the Apple TV. Mehta, for his part, believes there is no question more are coming: “we expect more gesture features in Apple’s product lineup in the future.”

    In addition to Double Tap, the Apple Watch Series 9 features Apple’s powerful new in-house silicon chip and ultrawideband connectivity. It will let users log health data with their voice, use NameDrop to share contact information by touching another Apple Watch and raise their wrist to automatically brighten the display. The Series 9 will come in colors such as pink, navy, red, gold, silver and graphite.

    Apple also showed off the second iteration of its rugged Ultra smartwatch line, featuring the updated S9 custom chip and a new ultrawideband chip, which uses radio waves to communicate. The display also fits more information for more intensive tracking.

    The Apple Watch Series 9 will start at $399 and the Ultra is priced at $799. Although they start shipping on Friday, September 22, the Double Tap feature will launch via a software update next month.


  • What is catfishing and what can you do if you are catfished? | CNN Business



    Editor’s Note: This story is part of ‘Systems Error’, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.



    CNN
     — 

    Catfishing is when a person uses false information and images to create a fake identity online with the intention to trick, harass, or scam another person. It is a common tactic on social media, dating apps and websites, used to form online relationships under false pretenses, sometimes to lure people into financial scams.

    The person doing the pretending, or the “catfish,” may also obtain intimate images from a victim and use them to extort or blackmail the person, which is known as sextortion. They may also use other personal information shared with them to commit identity theft.

    The term is believed to originate from the 2010 documentary “Catfish,” in which a young Nev Schulman starts an online relationship with teenager “Megan”, who turns out to be an older woman.

    In the final scene of the documentary, the woman’s husband shares an anecdote about how live cod used to be exported from Alaska alongside catfish, which kept the cod active and alert. He likened this to people in real life who keep others on their toes, like his wife. Schulman went on to produce the docuseries “Catfish.”

    There are many reasons people resort to catfishing, but the most common reason is a lack of confidence, according to the Cybersmile Foundation, a nonprofit focused on digital well-being. The foundation states that if someone is not happy with themselves, they may feel happier when pretending to be someone more attractive to others.

    They may also hide their identity to troll someone; to engage in a relationship other than their existing one; or to extort or harass people. Some people may catfish to explore sexual preferences.

    Studies have shown that catfish are more likely to be educated men, with one 2022 study finding perpetrators are more likely to come from religious backgrounds, possibly providing a way to form relationships without the constraints they face in real life, the authors write.

    In another study published last year, Evita March, senior lecturer in psychology at Federation University in Australia, found that people with the strong personality traits of sadism, psychopathy, and narcissism were more likely to catfish.

    March told CNN the findings are preliminary and that her team would like to further investigate if certain personality traits lead to specific kinds of catfishing behavior.

    In the US, romance scams resulting from catfishing are among the internet crimes with the highest reported financial losses. A total of 19,050 Americans reported losing almost $740 million to romance scammers in 2022.

    In the UK, the country’s National Fraud Intelligence Bureau received more than 8,000 reports of romance fraud in the 2022 financial year, totaling more than £92 million (US $116.6 million) lost, with an average loss of £11,500 (US $14,574) per victim.

    In Singapore, romance scams are among the top 10 reported scams. Reported losses to such scams increased by more than 30%, from SGD$33.1 million (US $24 million) in 2020 to SGD$46.6 million (US $34 million) the following year.

    Catfishing is also increasingly happening on an industrial scale with the rise of “cyber scam centers” that have links to human trafficking in Southeast Asia, according to INTERPOL.

    Victims of trafficking are forced to become fraudsters by creating fake social media accounts and dating profiles to scam and extort millions of dollars from people around the world using different schemes such as fake crypto investment sites.

    Catfishing used to occur more among adults through online dating sites, but has now become equally common among teenagers, according to the Cybersmile Foundation.

    Research by Snapchat last year with more than 6,000 Gen Z teenagers and young people in Australia, France, Germany, India, the UK and the US found that almost two-thirds of them or their friends had been targeted by catfish or hackers to obtain private images that were later used to extort them.

    Older people are also likely to lose more money to catfishing. In 2021, Americans lost half a billion dollars to romance scams perpetrated by people using fake personas or impersonating others, with the largest losses paid in cryptocurrency, according to the US Federal Trade Commission. The number of reports rose tenfold among young people (ages 18 to 29), but older people (over 70) generally reported losing more money.

    In Australia, a third of dating and romance scams result in financial losses, with women having lost more than double the total amount lost by men, and older people again losing more money than those under 45, according to data from the country’s National Anti-Scam Centre.

    “Romance scams are one of the hardest things to avoid. It’s emotional manipulation,” said Ngo Minh Hieu, a Vietnamese former hacker and founder of Chong Lua Dao (scam fighters), a cybersecurity nonprofit.

    Since 2020, Hieu has been monitoring trends to help scam victims, he says, and explains that in his experience, a catfish usually approaches a victim with the premeditated intention to scam them.

    They are likely to use personal information mined from the victim’s social media accounts, or data bought from users in private chat groups, sometimes obtained simply by providing a potential victim’s phone number.

    There are many signs you can look for to help spot a catfish, experts say.

    Firstly, a catfish might contact you out of nowhere, start regular conversations with you and shower you with compliments to quickly build up trust and rapport. They may state desirable qualities in their opening conversations, including wealth or attractiveness, but then rarely or never call you, either over the phone or on a video call.

    They often do not have many friends on social media and their posts are usually scarce. Search results using their name may not yield many results and their stories are usually inconsistent. For example, personal details like where they live or go to school might change when discussed again.

    Another classic sign is if the feelings they declare for you escalate quickly, after only a short time. A catfish may also ask you for sensitive images and money.

    Many scammers use already available photos of other people in their fake personas, which may be possible to spot using a reverse image search.
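    As a concrete illustration, a reverse image search can often be started just by handing a suspect profile photo’s URL to an existing search engine. The short Python sketch below builds such a lookup link; the Google `searchbyimage` endpoint is an assumption based on its long-standing public URL format and may change, and services such as TinEye or Bing Visual Search offer similar lookups.

```python
from urllib.parse import urlencode


def reverse_image_search_url(image_url: str) -> str:
    """Build a link that opens a reverse image search for a profile picture.

    Note: the endpoint below is an assumption based on Google's historical
    'search by image' URL and is not guaranteed to be stable.
    """
    # urlencode percent-escapes the photo URL so it survives as a query value
    return "https://www.google.com/searchbyimage?" + urlencode({"image_url": image_url})


print(reverse_image_search_url("https://example.com/pic.jpg"))
```

    Opening the resulting link in a browser shows where else the photo appears online, which is often enough to reveal that a “new acquaintance” is using someone else’s pictures.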

    With the explosion of AI technology, scammers may now generate unique and realistic images for use as profile pictures. But Hieu explains that, because of patterns built into them by design, AI-generated images can be detected using tools such as AI-Generated Image Detector.

    If you believe you are being catfished, there are steps you can take to protect yourself and help end the targeting.

    Experts advise that you should not be afraid to ask direct questions or challenge the person you believe may be catfishing you. You can do this by asking them why they are not willing to call you or meet face to face, or questioning how they can declare their love for you so quickly.

    In a 2020 study, cybercrime researcher Fangzhou Wang and her colleagues sent nearly 200 deterrent messages to active scammers and concluded that this could make fraudsters respond less or, in some cases, admit to wrongdoing.

    An example of one of the messages was: “I know you are scamming innocent people. My friend was recently arrested for the same offense and is facing five years in prison. You should stop before you face the same fate.”

    You should consider stopping all communication with the catfish and refrain from sending them money, which risks inviting further financial demands. Experts say catfish keep targeting those who continue to engage with them.

    It’s also useful to secure your online accounts and ensure your personal information is kept private online.

    Cybersecurity expert Hieu explained that you can do this by putting personal information such as your phone number, email addresses and date of birth in private mode on social media. You can also check if your email has been compromised in a data breach by using tools such as the Have I Been Pwned website.
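    Have I Been Pwned also exposes a free Pwned Passwords “range” API that uses k-anonymity: only the first five characters of a password’s SHA-1 hash ever leave your machine, so the service never sees the password itself. A minimal Python sketch of that check, with error handling and rate-limit courtesy omitted:

```python
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix that is
    sent to the API and the 35-char suffix that stays local (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if none).

    Only the hash prefix is transmitted; matching suffixes are compared locally.
    """
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

    A nonzero result means the password has appeared in a breach and should be changed everywhere it is reused.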

    Enabling two-factor authentication on your accounts can also help protect against unauthorized access. It requires a second step to verify your identity when logging in to a service, for example a code sent by SMS or a physical device such as a key fob.
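    For readers curious what that second step looks like under the hood, authenticator apps typically generate time-based one-time passwords (TOTP). Below is a minimal sketch of the standard algorithm (RFC 4226 HOTP, extended to time steps by RFC 6238) using only the Python standard library; the secret shown in the demo is the RFC test key, not a real credential.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # Counter is encoded as an 8-byte big-endian integer and HMAC-SHA1'd
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, period: int = 30, digits: int = 6, at=None) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current time step."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // period), digits)


# Demo with the RFC test secret; a real app derives this from the QR code
# shown when you enroll a device.
print(totp(b"12345678901234567890"))
```

    The server runs the same computation with its copy of the secret, so a valid code proves possession of the enrolled device without the secret ever crossing the network.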

    Being subjected to catfishing can also have a significant impact on your mental health, with many victims left unable to trust others and some left feeling embarrassed about falling for the scam. A 2019 study found that young LGBTQ+ men in rural America experiencing catfishing on dating apps felt angry and fearful.

    If someone has been “sextorted,” they may continue to fear their images resurfacing online in the future.

    March from Federation University in Australia recommended improving digital literacy and staying aware of the potential red flags. She also emphasized the need to recognize today’s loneliness epidemic, which “leads people to perhaps be more susceptible to catfishing scams,” she said.

    Seeking professional support from a counselor or talking to supportive friends and family is one way to address loneliness, March added.

    Catfishing is not explicitly a crime, but the actions that often accompany catfishing, such as extortion for money, gifts or sexual images, are crimes in many places.

    The main challenge in tackling online fraud is the issue of jurisdiction, according to a 2020 paper about police handling of online fraud victims in Australia. Traditional policing operates within specific territories, but the internet has blurred these boundaries, the authors write.

    Cybercriminals from one country can also target victims in other countries, complicating law enforcement efforts, and victims often face difficulty and frustration when trying to report cybercrimes, which can further traumatize them.

    Fangzhou Wang, a cybercrime professor at the University of Texas at Arlington, told CNN that virtual private networks (VPNs), forged credentials, and anonymous communication methods make it extremely difficult to determine identities or locations.

    Scammers have also capitalized on the proliferation of AI, such as AI-generated personas, which complicates the ability of law enforcement authorities to gather evidence and build cases against a catfish.

    “Law enforcement agencies, often constrained by limited resources and prioritizing cases based on severity and direct impact, might not readily prioritize catfishing cases without substantial financial losses or physical harm,” Wang told CNN.

    In the US, there are some legal precedents. In 2022, a woman who had created multiple fake profiles to target wealthy men was charged with extortion, cyberstalking, and interstate threats and was sentenced in a plea deal last year.

    In the UK, while catfishing itself is not classified as a criminal offense, a person using a fake profile who engages in illegal activity, such as fraud or harassment, can be punished by law.

    In China, Article 46 of the Cybersecurity Law holds people liable if they allow their websites or communications platforms to be used for fraud and other illegal activities.

    If a catfish has tricked you into sending them money, you can go to the authorities and your bank immediately, depending on where you are.

    If activities that are crimes in your country have taken place because of being catfished, such as extortion, identity theft or harassment, the police or other authorities, such as specific commissions targeting online crime, may be your first port of call.

    The Australian government’s agency responsible for online safety, the eSafety Commissioner, advises people to gather all the evidence they can, including screenshots of the scammer and of chats with them.

    Depending on the case, you can also submit an abuse or impersonation report against the catfish directly to the platform on which you are communicating with them.

    If you believe the person you are talking to is not who they say they are, most of the larger social media platforms, including Facebook, Instagram, TikTok, X, Telegram, Tinder and WhatsApp, give you the option to report them for impersonation or other forms of abuse. WeChat also offers a channel to report another user for harassment, fraud or illegal activity, while Telegram maintains an anti-scam thread where users can report fraudsters.

    You are not responsible for the behavior of a catfish, but staying vigilant and alert online goes a long way.

    Make sure your online accounts are secured and use two-factor authentication. When browsing the internet, you may want to use a virtual private network (VPN) which makes your internet activity harder to track.

    In many countries, including the US, the UK and Australia, victims have reported being preyed on by catfish who tricked them into putting money into bogus cryptocurrency investment sites.

    If someone you have been talking to asks you to put money into an investment site, think twice. The Global Anti-Scam Organization keeps a database of fraudulent websites, built from its own investigations and tip-offs from the public, that can help you check whether you’re being scammed.

    If you are a parent, this guide provided by the UK-based National College platform suggests communicating effectively and sensitively with your children about the risks. You may also help them report and block the catfish accounts and report to police if they have been subjected to anything illegal or inappropriate.

    Because catfish often get close to a target by relying on personal information posted on social media, UNICEF encourages families to consider children’s rights when parents share their pictures and other content online, especially while the children are underage.






  • Nvidia’s quarterly sales double on the back of AI boom | CNN Business



    New York
    CNN
     — 

    The artificial intelligence boom continues to fuel a blockbuster year for chipmaker Nvidia.

    Nvidia’s stock jumped as much as 9% in after-hours trading Wednesday after the Santa Clara, California-based company posted year-over-year sales growth of 101%, to $13.5 billion for the three months ended in July.

    The results were even stronger than the $11.2 billion in revenue that Wall Street analysts expected. The company’s non-GAAP adjusted profits grew a stunning 429% from the same period in the prior year to $2.70 per share, also beating analysts’ expectations. GAAP stands for generally accepted accounting principles.

    Nvidia’s stock has climbed by just over 220% since the start of this year amid a surge in the popularity of and demand for artificial intelligence technology. The American chipmaker produces processors that power generative AI, technology that can create text, images and other media — and which forms the foundation of buzzy new services such as ChatGPT.

    “A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI,” Nvidia CEO Jensen Huang said in a statement, adding that the company is working with “leading enterprise IT system and software providers … to bring NVIDIA AI to every industry.”

    “The race is on to adopt generative AI,” he said.

    Huang had said following the company’s May earnings report that the firm was ramping up its supply to meet “surging demand.”

    “Nvidia’s hardware has become indispensable to the AI-driven economy,” Insider Intelligence senior analyst Jacob Bourne said in emailed commentary. “The pressing question is whether Nvidia can consistently exceed the now-higher expectations.”

    This story is developing and will be updated.
