ReportWire

Tag: artificial intelligence

  • Microsoft, Google post strong quarterly sales growth as Big Tech continues its comeback | CNN Business

    New York (CNN) —

    Big tech companies are continuing a turnaround from last year, as Alphabet, Microsoft and Snap kicked off earnings season with strong sales results for the quarter ended in September.

    Google parent company Alphabet on Tuesday reported quarterly sales of $76.69 billion, up 11% from the same period in the prior year. The company also posted profits of $19.69 billion for the quarter.

    Meanwhile, Microsoft posted 13% year-on-year sales growth to $56.5 billion, also beating expectations. Microsoft’s quarterly profits hit $22.3 billion, up 27% from the year-ago period.

    Snapchat parent Snap on Tuesday reported a return to sales growth in the September quarter, after two consecutive quarters of declining sales. The company reported revenue of nearly $1.2 billion, an increase of 5% from the same period in the prior year and ahead of analysts’ projections. The company reported a net loss of $368 million.

    The strong results come after Microsoft, Alphabet, Snap and other tech companies carried out mass layoffs and other cost cutting moves over the past year following a difficult 2022 when advertisers and other clients cut back on their spending due to concerns over the macroeconomic environment.

    Despite beating Wall Street’s sales expectations, shares of Alphabet (GOOGL) and Snap (SNAP) each dipped around 5% in after-hours trading following the reports, although Snap’s quickly regained some ground. Microsoft (MSFT) shares gained around 4% in after-hours trading.

    “Q3 tech season has been quite strong thus far,” Tejas Dessai, research analyst at investment fund GlobalX, said in a statement. “These numbers clearly defy concerns of near term economic weakness looming.”

    Google’s advertising business generated quarterly revenue of $59.6 billion, up from $54.5 billion in the prior year. YouTube ads, meanwhile, garnered some $7.9 billion in revenue, up roughly 12% year-over-year.

    YouTube Shorts, the company’s TikTok competitor, hit a milestone 70 billion daily views last quarter, Alphabet CEO Sundar Pichai said on a call with analysts Tuesday afternoon.

    Google’s cloud business, however, reported revenue of $8.41 billion — missing analysts’ estimates.

    Jesse Cohen, a senior analyst at Investing.com, attributed Alphabet’s after-hours stock fall to the “relatively weak performance in its Google cloud platform, which is at risk of falling further behind [Microsoft’s] Azure and [Amazon’s] AWS.” Still, despite taking a hit in 2022 amid a broader tech sector downturn, shares for Alphabet have climbed roughly 56% since the start of 2023, beating the tech-heavy Nasdaq index.

    Google’s report comes as the tech giant is in the antitrust hot seat. US prosecutors officially opened a landmark antitrust trial against Google last month with sweeping allegations that the company engaged in anticompetitive behavior to maintain its dominance over search. (As the legal showdown rages on, Google has continued to deny allegations that it operated illegally.)

    Google also confirmed last month plans to lay off hundreds of staffers in its recruiting division, as it continues cost cutting efforts in some areas. These more targeted layoffs came after Alphabet in January cut around 12,000 jobs — about 6% of its workforce.

    Still, Google has signaled that it remains committed to investing heavily in generative artificial intelligence technology. Last month, Google rolled out a major expansion of its Bard AI chatbot tool.

    “As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts,” Pichai said on the call. “We remain committed to durably re-engineering our cost base in order to help create capacity for these investments, in support of long-term sustainable financial value.”

    Microsoft’s recent investments in AI technology helped boost its sales in the September quarter, especially in its key cloud division. Sales from Microsoft’s “intelligent cloud” business — its biggest revenue driver — grew 19% from the year-ago quarter to $24.3 billion.

    Revenue from the company’s “productivity and business processes” business, which includes LinkedIn and Office commercial and consumer products, also grew 13% year-over-year to $18.6 billion.

    “Microsoft is firing on all cylinders and AI is clearly driving growth,” Cohen said in a research note following the company’s report. “The results indicated that artificial intelligence products are stimulating sales and already contributing to top and bottom-line growth.”

    But economic jitters among consumers appear to still have some impact on the company’s bottom line. Devices revenue, which includes sales of laptops, tablets and Xbox consoles, decreased 22% year-over-year, despite a 3% sales increase in the overall “more personal computing” segment. Ongoing concerns about a potential economic slowdown could continue to weigh on the company as it heads into the crucial holiday device sales season.

    The report is Microsoft’s first since the company closed its $69 billion acquisition of “Call of Duty” maker Activision Blizzard earlier this month. While the deal didn’t factor into this quarter’s results, it’s expected to supercharge the company’s gaming business.

    “Microsoft now controls 30 game studios and some of the most well-known games across the industry,” Edward Jones analyst Logan Purk said in a research note earlier this month. “With a massive cloud network and now a compelling library of games, Microsoft has a leg up on peers” in gaming, he said.

    Following the Activision takeover, “we’re looking forward to one of our strongest first-party holiday [game] lineups ever, including new titles like Call of Duty Modern Warfare 3,” CEO Satya Nadella said on an analyst call Tuesday. The company said it expects roughly $400 million of operating expenses in the fourth quarter to come as a result of the acquisition.

    Snap said its sales growth was driven in part by its ongoing efforts to revamp its advertising technology, following changes to Apple’s app tracking policies that dealt a blow to the business models of Snapchat, Facebook and other platforms.

    “We are focused on improving our advertising platform to drive higher return on investment for our advertising partners, and we have evolved our go-to-market efforts to better serve our partners and drive customer success,” CEO Evan Spiegel said in a statement.

    Snap also reported that it now has 406 million daily active users, up 12% compared to the year-ago quarter. And time spent watching Spotlight — Snapchat’s TikTok clone — grew 200% year-over-year, according to the company.

    The company also recently announced that it had reached more than 5 million subscribers to its Snapchat+ subscription program, a key effort to diversify its revenue.

    Snap said Tuesday that its chief operating officer, Jerry Hunter, plans to retire. Hunter, who spent seven years at the company, will step down from his role as of the end of the month, but will remain at the company until July 1, 2024, to support the transition.

    The company noted that some advertisers temporarily paused their spending following the outbreak of the Israel-Hamas war. Because of the “unpredictable nature” of the war, Snap declined to provide formal guidance for the fourth quarter, but said its internal forecast assumes year-over-year quarterly revenue growth between 2% and 6%.


  • New York City unveils an ‘artificial intelligence action plan’ | CNN Business

    (CNN) —

    The same New York City administration that launched a “Rat Action Plan” is back with an “Artificial Intelligence Action Plan.”

    Mayor Eric Adams on Monday unveiled a citywide AI “action plan” that pledged – in broad brushstrokes – to evaluate AI tools and associated risks, boost AI skills among city employees and support “the responsible implementation of these technologies to improve quality of life for New Yorkers,” according to a statement from the mayor’s office.

    The city’s 51-page AI action plan establishes a series of steps the city will take in the coming years to help better understand and responsibly implement the technology that has taken the tech sector and broader business world by storm in recent months.

    While government use of automated technologies has often courted controversy, New York City’s approach to AI, so far, seems to be focused on laying a framework for future AI use-cases as well as engaging with outside experts and the public.

    The first step listed in the city’s AI action plan is establishing an “AI Steering Committee” of city agency stakeholders. The document goes on to list nearly 40 “actions,” with 29 of those set to be started or completed within the next year. The city said it will publish an annual AI progress report to communicate the city’s updates and implementation of the plan.

    Also on Monday, city officials said the government was piloting the first citywide AI-powered chatbot to help business owners navigate operating and growing businesses in New York City. The AI chatbot, already available in beta on the official city of New York website, was trained on information from more than 2,000 NYC Business web pages.

    The chatbot uses Microsoft’s Azure AI services, per a disclaimer on the tool.
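
    The city hasn’t published the bot’s architecture, but a chatbot “trained on” a fixed set of web pages is typically built with retrieval-augmented generation: index the pages, retrieve the passages most relevant to a question, and pass them to a language model as grounding context. The Python sketch below shows a toy version of the retrieval step, with made-up page snippets standing in for the real NYC Business content.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Stand-ins for the ~2,000 NYC Business pages the bot draws on.
        pages = [
            "How to apply for a food service establishment permit in NYC.",
            "Sidewalk cafe licenses: application steps and fees.",
            "Registering a new LLC with New York State.",
        ]

        vectorizer = TfidfVectorizer()
        page_vectors = vectorizer.fit_transform(pages)

        def retrieve(question, k=2):
            """Return the k pages most similar to the question."""
            scores = cosine_similarity(vectorizer.transform([question]), page_vectors)[0]
            return [pages[i] for i in scores.argsort()[::-1][:k]]

        # The retrieved text would then be placed into the prompt of a hosted
        # model (Azure-hosted, per the city's disclaimer) to ground its answer.
        print(retrieve("What do I need to open a cafe with outdoor seating?"))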

    In a statement announcing the AI action plan, Mayor Adams acknowledged “the potential pitfalls and associated risks these technologies present,” and pledged to be “clear-eyed” about these.

    The mayor also expressed hope that the action plan will “strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”


  • Hurricane Idalia and Labor Day could send gas prices and inflation higher | CNN Business

    A version of this story first appeared in CNN Business’ Before the Bell newsletter. Not a subscriber? You can sign up right here. You can listen to an audio version of the newsletter by clicking the same link.


    New York (CNN) —

    Labor Day — one of the busiest driving holidays in the US — is on the horizon, and so is Hurricane Idalia. That’s potentially bad news for gas prices.

    The storm, which is expected to make landfall in Florida as a Category 3 hurricane on Wednesday, could bring 100 mile-per-hour winds and flooding that extends hundreds of miles up the east coast. The impact could take gasoline refinery facilities offline and may limit some Gulf oil production and supplies. Plus, demand for gas is expected to surge as residents of the impacted areas evacuate.

    “Idalia… could pose risk to oil and gas output in the US Gulf,” wrote the Nasdaq Advisory Services Energy Team.

    The storm is expected to make landfall as drivers nationwide load into their vehicles for the Labor Day weekend, pushing up the demand for gasoline even further.

    Altogether, it means the price of oil and gasoline could remain elevated well into the fall.

    Generally, summer demand for oil tends to wane in September, but so does supply as refineries shift from summer fuels to “oxygenated” winter fuels, said Louis Navellier of Navellier and Associates. Since the 1990s, the US has required manufacturers to include more oxygen in their gasoline during the colder months to prevent excessive carbon monoxide emissions.

    With the storm approaching, that trend may not play out.

    What’s happening: Gas prices are already at $3.82 a gallon. That’s the second highest price for this time of year since at least 2004, according to Bespoke Investment Group. (The only time the national average has been higher for this period was last summer, when prices hit $3.85 a gallon).

    Geopolitical tensions have been supporting high oil and gas prices for some time. Recently, increased crude oil imports into China, production cuts by Russia and Saudi Arabia and extreme heat set off a late-summer spike in gas prices. And the threat of powerful hurricanes could send them even higher.

    Analysts at Citigroup have warned that this hurricane season could seriously impact power supplies.

    “Two Category 3 or higher hurricanes landing on US shores could massively disrupt supplies for not weeks but months,” Citigroup analysts wrote in a note last week. In 2005, for example, gas prices surged by 46% between Memorial Day and Labor Day because of the landfall of Hurricane Katrina, according to Bespoke.

    What it means: The Federal Reserve and central banks around the world have been fighting to bring down stubbornly high inflation for more than a year. This week we’ll get some highly awaited economic data: The Fed’s preferred inflation gauge, the Personal Consumption Expenditures index, is due out on Thursday. But the task of inflation-busting is a lot more difficult when energy prices are high, and it’s even harder when they’re on the rise.

    The PCE price index uses a complicated formula to determine how much weight to give to energy prices each month, but they typically comprise a significant chunk of the headline inflation rate.

    “Crude oil price remains elevated, even after the surge at the start of the Russia-Ukraine War,” said Andrew Woods, oil analyst at Mintec, a market intelligence firm. “Energy prices have been a major contributor to persistently high inflation in the US, so the crude oil price will remain a watch-out factor for future inflation.”

    High oil and gas prices are one of the largest contributing factors to inflation. That’s bad news for drivers but tends to be great for the energy industry, as oil prices and energy stocks are closely interlinked.

    Energy stocks were trading higher on Monday. The S&P 500 energy sector was up around 0.75%. Exxon Mobil (XOM) was 0.85% higher, BP (BP) was up 1.36% and Chevron (CVX) was up 0.75%.

    OpenAI will release a version of its popular ChatGPT tool made specifically for businesses, the company announced on Monday.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase immediately.

    The new offering, reports my colleague Catherine Thorbecke, promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Fintech startup Block, cosmetics giant Estee Lauder and professional services firm PwC have already signed on as customers.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    A multitude of leading newsrooms, meanwhile, have recently added code to their websites that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. CNN’s Reliable Sources has found that CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN and the Gothamist, among others, have taken the step to shield themselves.
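
    The blocking mechanism is usually not injected code at all but a two-line rule in a site’s robots.txt file, which OpenAI says GPTBot respects. A quick way to check whether a given site opts out, sketched with Python’s standard library (the domain is a placeholder):

        from urllib.robotparser import RobotFileParser

        # A site opting out of OpenAI's crawler typically serves:
        #   User-agent: GPTBot
        #   Disallow: /
        def blocks_gptbot(domain):
            """Return True if the site's robots.txt forbids GPTBot from the root."""
            parser = RobotFileParser(f"https://{domain}/robots.txt")
            parser.read()
            return not parser.can_fetch("GPTBot", f"https://{domain}/")

        print(blocks_gptbot("example.com"))  # placeholder domain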

    American Airlines just got smacked with the largest-ever fine for keeping passengers waiting on the tarmac during multi-hour delays.

    The Department of Transportation is levying the $4.1 million fine, “the largest civil penalty that the Department has ever assessed,” it said in a statement, for lengthy tarmac delays of 43 flights that impacted more than 5,800 passengers. The flights occurred between 2018 and 2021, reports CNN’s Gregory Wallace.

    In the longest of the delays, passengers sat aboard a plane in Texas in August 2020 for six hours and three minutes. The 105-passenger flight had landed after being diverted from the Dallas-Fort Worth International Airport due to severe weather, with the DOT alleging that “American (AAL) lacked sufficient resources to appropriately handle several of these flights once they landed.”

    Federal rules set the maximum time that passengers can be held without the opportunity to get off, whether prior to takeoff or after landing, at three hours for domestic flights and four hours for international flights. Current rules also require airlines to provide passengers water and a snack.

    American told CNN the delays all resulted from “exceptional weather events” and “represent a very small number of the 7.7 million flights during this time period.”

    The company also said it has invested in technology to better handle flights in severe weather and reduce the congestion at airports.


  • AI fears overblown? Theoretical physicist calls chatbots ‘glorified tape recorders’ | CNN Business

    New York (CNN) —

    The public’s anxiety over new AI technology is misguided, according to theoretical physicist Michio Kaku.

    In an interview with CNN’s Fareed Zakaria on Sunday, the futurologist said chatbots like OpenAI’s ChatGPT will benefit society and increase productivity. But fear has driven people to largely focus on the negative implications of the programs, which he terms “glorified tape recorders.”

    “It takes snippets of what’s on the web created by a human, splices them together and passes it off as if it created these things,” he said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

    However, he said, chatbots cannot discern true from false: “That has to be put in by a human.”

    According to Kaku, humanity is in its second stage of computer evolution. The first was the analog stage, “when we computed with sticks, stones, levers, gears, pulleys, string.”

    After that, around World War II, he said, we switched to electricity-powered transistors. It made the development of the microchip possible and helped shape today’s digital landscape.

    But this digital landscape rests on the idea of two states like “on” and “off,” and uses binary notation composed of zeros and ones.

    “Mother Nature would laugh at us because Mother Nature does not use zeros and ones,” Kaku said. “Mother Nature computes on electrons, electron waves, waves that create molecules. And that’s why we’re now entering stage three.”

    He believes the next technological stage will be in the quantum realm.

    Quantum computing is an emerging technology that uses the various states of particles like electrons to vastly increase a computer’s processing power. Instead of using computer chips with two states, quantum computers use various states of vibrating waves. That makes them capable of analyzing and solving certain problems much faster than conventional computers.
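
    To make the contrast concrete: a classical bit is always exactly 0 or 1, while a qubit’s state is described by two complex amplitudes whose squared magnitudes give the measurement probabilities, and n qubits require 2^n amplitudes to describe. The short Python sketch below illustrates that bookkeeping; it simulates the math, not actual quantum hardware.

        import numpy as np

        # A qubit in an equal superposition: amplitudes for |0> and |1>,
        # with |alpha|^2 + |beta|^2 = 1.
        qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

        # Measurement collapses it to 0 or 1 with these probabilities.
        probabilities = np.abs(qubit) ** 2
        print("P(0), P(1):", probabilities)                     # [0.5, 0.5]
        print("measured:", np.random.choice([0, 1], p=probabilities))

        # The state of n qubits needs 2**n amplitudes, which is why even
        # modest qubit counts are hard to simulate classically.
        n = 30
        print(f"{n} qubits -> {2**n:,} amplitudes")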

    Several tech giants – IBM (IBM), Microsoft (MSFT), Google (GOOG) and Amazon (AMZN), among others – are developing their own quantum computers, and have granted access to a number of companies to use their technology through the cloud. The computers could help businesses with risk analysis, supply chain logistics, and machine learning.

    But beyond business applications, Kaku said quantum computing could also help advance health care. “Cancer, Parkinson’s, Alzheimer’s disease – these are diseases at the molecular level. We’re powerless to cure these diseases because we have to learn the language of nature, which is the language of molecules and quantum electrons.”


  • Arm’s mega IPO could be just around the corner, a year after the biggest chip deal in history fell apart | CNN Business

    New York (CNN) —

    A hotly anticipated IPO for a company that designs chips for 99% of the world’s smartphones is just around the corner, after it filed paperwork Monday to go public.

    Arm is a British tech company that designs power-sipping microchips for phones and tablets and licenses those designs to chipmakers, including Apple and Samsung. The company was public until 2016, when Japan’s Softbank bought it for $32 billion.

    Softbank tried to offload Arm to Nvidia for $40 billion, in what would have been the biggest chip deal of all time. But global antitrust regulators put a stop to it, and the deal fell apart in February 2022.

    Arm had been a hot commodity for decades as the smartphone business boomed. But sales of smartphones have subsided recently, as customers opt to keep their phones for longer and new tech features have become less enticing to consumers.

    The company, in its regulatory filing, said sales slipped 1% to $2.7 billion in the year that ended March 31, 2023. In the following quarter, which ended in June, sales fell 2.5%.

    Still, Arm has piqued the interest of tech investors who are looking to catch the AI wave. Softbank CEO Masayoshi Son has touted Arm as an AI company that could have “exponential growth.” He promised ChatGPT-like services would eventually be offered on Arm-designed machines.

    In its IPO filing, Arm said the company “will be central” to the transition to AI.

    “Arm CPUs already run AI and [machine learning] workloads in billions of devices, including smartphones, cameras, digital TVs, cars and cloud data centers,” the company said. “In the emerging area of large language models, generative AI and autonomous driving, there will be a heightened emphasis on the low power acceleration of these algorithms.”

    But Son and Arm’s AI promises may overstate the company’s potential, at least somewhat. Arm-based chips have appeared in some gadgets beyond smartphones and tablets, such as servers that are less power-hungry. But Arm said it does not make AI chips and is not a direct competitor to Nvidia and others that make chips that are purpose-built for AI. Nvidia’s stock has exploded more than 200% this year.

    Arm did not list the number of shares it planned to sell, so its valuation couldn’t yet be determined. But Reuters reported that Softbank is looking to roughly double its investment from seven years ago, with a valuation of $60 billion to $70 billion for Arm when it goes public, likely next month.

    Softbank also this week bought the 25% stake in Arm that it did not own directly but that had been held by the Saudi Vision Fund, which Softbank manages. That purchase valued Arm at $64 billion, according to the Financial Times.


  • Apple Watch’s new gesture control feature will have everyone tapping the air | CNN Business

    (CNN) —

    You’re about to see people in public tapping two fingers together in the air.

    Over the past few days, I’ve been taking phone calls, playing music and scrolling through widgets on the new Apple Watch Series 9 without ever touching the device. I’ve used it to silence my watch’s alarm in the morning, stop timers and open a notification while carrying too many bags.

    It may sound like a gimmick — and it most certainly feels strange to do it in public — but considering the small size of the Apple Watch screen, the tool offers an effective hands-free way to interact with the device.

    Apple’s latest lineup of smartwatches, the Watch Series 9 and high-end Ultra 2, feature a new gesture tool called Double Tap, which lets users tap their index finger and thumb together twice to control the device. It can also scroll through widgets, much like turning the digital crown.

    The feature isn’t entirely new; the previous generation of Apple Watch Ultra was capable of similar pinch-and-clench gestures via its Assistive Touch accessibility tool. But Apple’s decision to bring a feature like this to the forefront hints at an increasingly touch-free future. It also comes three months after the company unveiled the Vision Pro mixed reality headset, which will launch next year, with a similar finger tap control.

    Double Tap works by combining the latest Apple Watch’s accelerometer, gyroscope and optical heart rate sensor, the last of which looks for disruptions in blood flow when the fingers are pressed together. That data is processed by a new machine learning algorithm running on a faster neural engine, specialized hardware that handles AI and machine learning tasks.

    While the concept is similar, gesture controls are different on the Vision Pro, which will track users’ eyes and hand movements. Apple told CNN it added gesture control to the headset because it needed a different, seamless interface for users to interact with, whereas Double Tap is more about simplifying the Apple Watch experience.

    When the Apple Watch’s display is turned on, the device automatically knows to respond when it senses the fingers tapped together. It essentially works as a “yes” or “accept” button; that means if a call comes through, you can double tap to accept it (covering the watch with your full hand, however, will silence it quickly). If a song is playing, you can pause it by double tapping, and double tap again to resume it.

    Although you can subtly flick on the display and do the gesture close to your body, trying to conceal the movement when around other people, I found it works much better when it’s raised a bit higher. This, however, makes the action more obvious — and it’s something that will take a little getting used to seeing in person.

    “This is also about social acceptance. At the moment, I find the idea of people making this gesture more often than not in public a bit funny. But time will tell if users find it acceptable,” said Annette Zimmerman, an analyst at Gartner Research. “I think Apple is very use-case driven and focuses on user feedback on things they could improve.”

    Similarly, it took a while for people to get used to the design of Apple’s AirPods when they were announced in 2016; many criticized how they looked dangling out of users’ ears. Now they’ve become part of modern culture.

    Other learning curves exist with the Double Tap feature. Because I am right handed and wear an Apple Watch on my left hand, tapping my left fingers together to trigger the control takes an extra second or two of mental coordination.

    The future of hands-free devices

    The new Apple Watch Series 9 can be controlled by tapping two fingers together.

    Apple isn’t the only tech company developing gesture controls like this. Samsung TVs, some smartphones and Microsoft’s mixed reality headset all incorporate some hand gesture functionality. But this is Apple’s biggest push to date, and adding it to a flagship device like the Apple Watch will soon put all eyes on the concept of hand gestures.

    “It’s a great move by Apple as it differentiates the company from other brands when it comes to innovation and ease of usability. It also shows Apple’s commitment in the fields of artificial intelligence,” said Sachin Mehta, senior analyst at tech intelligence firm ABI Research. “The new double tap gesture is not a surprise as Apple keeps on developing a unified and intuitive user experience across its product line up. It will cement the Apple Watch as the smartwatch to have.”

    It works differently on the Vision Pro, which will track a user’s eyes and hand movements to make punching and swiping controls. The headset needed a different user interface for users to interact with it, and gestures give that control even when a face is covered by the hardware.

    Further showing how Apple is thinking about gesture control long term, the company recently filed patents focused on gesture controls, including for the Apple TV. Mehta believes there’s no question that “we expect more gesture features in Apple’s product lineup in the future.”

    In addition to Double Tap, the Apple Watch Series 9 features Apple’s powerful new in-house silicon chip and ultrawideband connectivity. It will let users log health data with their voice, use NameDrop to share contact information by touching another Apple Watch and raise their wrist to automatically brighten the display. The Series 9 will come in colors such as pink, navy, red, gold, silver and graphite.

    Apple also showed off the second iteration of its rugged Ultra smartwatch line, featuring the updated S9 custom chip and a new ultrawideband chip, which uses radio waves to communicate. It also shows more information on the display for more intensive tracking.

    The Apple Watch Series 9 will start at $399 and the Ultra is priced at $799. Although they start shipping on Friday, September 22, the Double Tap feature will launch via a software update next month.


  • What is catfishing and what can you do if you are catfished? | CNN Business

    Editor’s Note: This story is part of ‘Systems Error’, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.



    (CNN) —

    Catfishing is when a person uses false information and images to create a fake identity online with the intention to trick, harass or scam another person. It often takes place on social media or dating apps and websites, where it is a common tactic for forming online relationships under false pretenses, sometimes to lure people into financial scams.

    The person doing the pretending, or the “catfish,” may also obtain intimate images from a victim and use them to extort or blackmail the person, a practice known as sextortion. They may also use other personal information shared with them to commit identity theft.

    The term is believed to originate from the 2010 documentary “Catfish,” in which a young Nev Schulman starts an online relationship with teenager “Megan”, who turns out to be an older woman.

    In the final scene of the documentary, the woman’s husband shares an anecdote about how live cod used to be exported from Alaska alongside catfish, which kept the cod active and alert. He likened this to people in real life who keep others on their toes, like his wife. Schulman went on to produce the docuseries “Catfish.”

    There are many reasons people resort to catfishing, but the most common reason is a lack of confidence, according to the Cybersmile Foundation, a nonprofit focused on digital well-being. The foundation states that if someone is not happy with themselves, they may feel happier when pretending to be someone more attractive to others.

    They may also hide their identity to troll someone; to engage in a relationship other than their existing one; or to extort or harass people. Some people may catfish to explore sexual preferences.

    Studies have shown that catfish are more likely to be educated men. One 2022 study found that perpetrators are more likely to come from religious backgrounds, which, the authors write, may provide a way to form relationships without the constraints they face in real life.

    In another study published last year, Evita March, senior lecturer in psychology at Federation University in Australia, found that people with the strong personality traits of sadism, psychopathy, and narcissism were more likely to catfish.

    March told CNN the findings are preliminary and that her team would like to further investigate if certain personality traits lead to specific kinds of catfishing behavior.

    In the US, romance scams resulting from catfishing are among the internet crimes with the highest reported financial losses. A total of 19,050 Americans reported losing almost $740 million to romance scammers in 2022.

    In the UK, the country’s National Fraud Intelligence Bureau received more than 8,000 reports of romance fraud in the 2022 financial year, totaling more than £92 million (US $116.6 million) lost, with an average loss of £11,500 (US $14,574) per victim.

    In Singapore, romance scams are among the top 10 reported scams. The amount of money victims reported losing to catfish increased by more than 30%, from SGD$33.1 million (US$24 million) in 2020 to SGD$46.6 million (US$34 million) the following year.

    Catfishing is also increasingly happening on an industrial scale with the rise of “cyber scam centers” that have links to human trafficking in Southeast Asia, according to INTERPOL.

    Victims of trafficking are forced to become fraudsters by creating fake social media accounts and dating profiles to scam and extort millions of dollars from people around the world using different schemes such as fake crypto investment sites.

    Catfishing used to occur more among adults through online dating sites, but has now become equally common among teenagers, according to the Cybersmile Foundation.

    Research by Snapchat last year with more than 6,000 Gen Z teenagers and young people in Australia, France, Germany, India, the UK and the US found that almost two-thirds of them or their friends had been targeted by catfish or hackers to obtain private images that were later used to extort them.

    Older people are also likely to lose more money to catfishing. In 2021, Americans lost half a billion dollars through romance scams perpetrated by people using fake personas or impersonating others, with the largest losses paid in cryptocurrency, according to the US Federal Trade Commission. The number of reports rose tenfold among young people (ages 18 to 29), but older people (over 70) generally reported losing more money.

    In Australia, a third of dating and romance scams result in financial losses, with women having lost more than double the total amount lost by men, and older people again losing more money than those under 45, according to data from the country’s National Anti-Scam Centre.

    “Romance scams are one of the hardest things to avoid. It’s emotional manipulation,” said Ngo Minh Hieu, a Vietnamese former hacker and founder of Chong Lua Dao (scam fighters), a cybersecurity non-profit.

    Since 2020, Hieu has been monitoring trends to help scam victims, he says, and in his experience a catfish usually approaches a victim with the premeditated intention to scam them.

    They are likely to be using personal information mined from the victim’s social media accounts, or may have bought that data in private chat groups simply by providing a potential victim’s phone number.

    There are many signs you can look for to help spot a catfish, experts say.

    Firstly, a catfish might contact you out of nowhere, start regular conversations with you and shower you with compliments to quickly build trust and rapport. They may claim desirable qualities in their opening conversations, such as wealth or attractiveness, but then rarely or never call you, whether over the phone or on a video call.

    They often do not have many friends on social media and their posts are usually scarce. Search results using their name may not yield many results and their stories are usually inconsistent. For example, personal details like where they live or go to school might change when discussed again.

    Another classic sign is if the feelings they declare for you escalate quickly and after a short period of time. A catfish may ask you for sensitive images and money.

    Many scammers use already available photos of other people in their fake personas, which may be possible to spot using a reverse image search.

    With the explosion of AI technology, scammers may now generate unique and realistic images for use as profile pictures. But Hieu explains that, because AI-generated images contain telltale patterns by design, they can be detected using tools such as AI-Generated Image Detector.

    If you believe you are being catfished, there are steps you can take to protect yourself and help end the targeting.

    Experts advise that you should not be afraid to ask direct questions or challenge the person you believe may be catfishing you. You can do this by asking them why they are not willing to call you or meet face to face, or questioning how they can declare their love for you so quickly.

    In a 2020 study, cybercrime researcher Fangzhou Wang and her colleagues sent nearly 200 deterrent messages to active scammers and concluded that this could make fraudsters respond less or, in some cases, admit to wrongdoing.

    An example of one of the messages was: “I know you are scamming innocent people. My friend was recently arrested for the same offense and is facing five years in prison. You should stop before you face the same fate.”

    You should think about stopping all communications with the catfish, and refrain from sending money to them, which risks inviting further financial demands. Experts say catfish continue to target those who engage with them more.

    It’s also useful to secure your online accounts and ensure your personal information is kept private online.

    Cybersecurity expert Hieu explained that you can do this by putting personal information such as your phone number, email addresses and date of birth in private mode on social media. You can also check if your email has been compromised in a data breach by using tools such as the Have I Been Pwned website.
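
    Checking an email address against Have I Been Pwned programmatically requires an API key, but the service’s companion Pwned Passwords endpoint is free and illustrates the same idea. Below is a minimal Python sketch using its k-anonymity range API, under which only the first five characters of a password’s SHA-1 hash ever leave your machine.

        import hashlib
        import urllib.request

        def pwned_count(password):
            """Return how many breaches a password appears in, per Pwned Passwords."""
            sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
            prefix, suffix = sha1[:5], sha1[5:]
            url = f"https://api.pwnedpasswords.com/range/{prefix}"
            with urllib.request.urlopen(url) as response:
                body = response.read().decode()
            for line in body.splitlines():        # each line: "<suffix>:<count>"
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
            return 0

        count = pwned_count("hunter2")
        print(f"seen in {count} breaches" if count else "not in known breaches")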

    Enabling two-factor authentication on your accounts can also help protect against unauthorized access. It requires you to take a second step to verify your identity when logging in to a service, for example via an SMS code or a physical device such as a key fob.
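
    The six-digit codes generated by authenticator apps are one common form of that second factor. They follow a published standard, TOTP (RFC 6238): an HMAC over the current 30-second time window, keyed by a shared secret. A minimal Python sketch using only the standard library, with a made-up demo secret:

        import base64
        import hashlib
        import hmac
        import struct
        import time

        def totp(secret_b32, period=30, digits=6):
            """Current time-based one-time password per RFC 6238 (HMAC-SHA1)."""
            key = base64.b32decode(secret_b32, casefold=True)
            counter = struct.pack(">Q", int(time.time()) // period)
            digest = hmac.new(key, counter, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret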

    Being subjected to catfishing can also have a significant impact on your mental health, with many victims left unable to trust others and some left feeling embarrassed about falling for the scam. A 2019 study found that young LGBTQ+ men in rural America experiencing catfishing on dating apps felt angry and fearful.

    If someone was “sextorted,” they may continue to fear their images resurfacing online in the future.

    March from Federation University in Australia recommended improving digital literacy and staying aware of the potential red flags. She also emphasized the need to recognize today’s loneliness epidemic, which “leads people to perhaps be more susceptible to catfishing scams,” she said.

    Seeking professional support from a counselor or talking to supportive friends and family is one way to address loneliness, March added.

    Catfishing is not explicitly a crime, but the actions that often accompany catfishing, such as extortion for money, gifts or sexual images, are crimes in many places.

    The main challenge in tackling online fraud is the issue of jurisdiction, according to a 2020 paper about police handling of online fraud victims in Australia. Traditional policing operates within specific territories, but the internet has blurred these boundaries, the authors write.

    Cybercriminals from one country can also target victims in other countries, complicating law enforcement efforts, and victims often face difficulty and frustration when trying to report cybercrimes, which can further traumatize them.

    Wang, a cybercrime professor at the University of Texas at Arlington, told CNN that virtual private networks (VPNs), forged credentials, and anonymous communication methods make it extremely difficult to determine identities or locations.

    Scammers have also capitalized on the proliferation of AI, using AI-generated personas that complicate law enforcement’s ability to gather evidence and build cases against a catfish.

    “Law enforcement agencies, often constrained by limited resources and prioritizing cases based on severity and direct impact, might not readily prioritize catfishing cases without substantial financial losses or physical harm,” Wang told CNN.

    In the US, there are some legal precedents. In 2022, a woman who had created multiple fake profiles to target wealthy men was charged with extortion, cyberstalking, and interstate threats and was sentenced in a plea deal last year.

    In the UK, while catfishing itself is not classified as a criminal offense, a person using a fake profile who engages in illegal activities, such as fraud or harassment, can be punished by law.

    China, under Article 46 of its Cybersecurity Law, holds liable people who allow their websites or communications platforms to be used for fraud and other illegal activities.

    If a catfish has tricked you into sending them money, you can go to the authorities and your bank immediately, depending on where you are.

    If activities that are crimes in your country have taken place because of being catfished, such as extortion, identity theft or harassment, the police or other authorities, such as specific commissions targeting online crime, may be your first port of call.

    The Australian government’s agency responsible for online safety, the eSafety Commissioner, advises that people gather all the evidence they can, including screenshots of the scammer and of chats with them.

    Depending on the case, you can also submit an abuse or impersonation report against the catfish directly to the platform on which you are communicating with them.

    If you believe the person you are talking to is not who they say they are, most of the larger social media platforms, including Facebook, Instagram, TikTok, X, Telegram, Tinder and WhatsApp, give you the option to report them for impersonation or other forms of abuse. WeChat also offers a channel to report another user for harassment, fraud or illegal activity, while Telegram maintains an anti-scam thread where users can report fraudsters.

    You are not responsible for the catfishing behavior of others, but staying vigilant and alert online goes a long way.

    Make sure your online accounts are secured and use two-factor authentication. When browsing the internet, you may want to use a virtual private network (VPN), which makes your internet activity harder to track.

    In many countries, such as the US, the UK and Australia, victims have reported being preyed on by catfish who tricked them into putting money into bogus cryptocurrency investment sites.

    If someone you have been talking to asks you to put money into an investment site, think twice. The Global Anti-Scam Organization keeps a database of fraudulent websites, generated by its own investigations and tip-offs from the public, to help inform you if you’re being scammed.

    If you are a parent, this guide provided by the UK-based National College platform suggests communicating effectively and sensitively with your children about the risks. You may also help them report and block the catfish accounts and report to police if they have been subjected to anything illegal or inappropriate.

    Because catfish get close to a target often by relying on personal information posted on social media, UNICEF asks children to consider their rights when it comes to parents sharing their pictures and other content online, especially when they are underage.




  • Nvidia’s quarterly sales double on the back of AI boom | CNN Business

    New York (CNN) —

    The artificial intelligence boom continues to fuel a blockbuster year for chipmaker Nvidia.

    Nvidia’s stock jumped as much as 9% in after-hours trading Wednesday after the Santa Clara, California-based company posted year-over-year sales growth of 101%, to $13.5 billion for the three months ended in July.

    The results were even stronger than the $11.2 billion in revenue that Wall Street analysts expected. The company’s non-GAAP adjusted profits grew a stunning 429% from the same period in the prior year to $2.70 per share, also beating analysts’ expectations. GAAP stands for generally accepted accounting principles.
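
    As a quick sanity check, those growth rates let you back out the implied year-ago baselines by dividing each reported figure by one plus the growth rate; the inputs below are the numbers quoted in this article, not figures from Nvidia’s filings.

        # Back out implied year-ago figures from the reported growth rates.
        revenue_now, revenue_growth = 13.5e9, 1.01   # +101% year over year
        eps_now, eps_growth = 2.70, 4.29             # +429% year over year

        revenue_then = revenue_now / (1 + revenue_growth)
        eps_then = eps_now / (1 + eps_growth)

        print(f"implied year-ago revenue: ${revenue_then / 1e9:.2f}B")   # ~$6.72B
        print(f"implied year-ago adjusted EPS: ${eps_then:.2f}")         # ~$0.51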

    Nvidia’s stock has climbed by just over 220% since the start of this year amid a surge in the popularity of and demand for artificial intelligence technology. The American chipmaker produces processors that power generative AI, technology that can create text, images and other media — and which forms the foundation of buzzy new services such as ChatGPT.

    “A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI,” Nvidia CEO Jensen Huang said in a statement, adding that the company is working with “leading enterprise IT system and software providers … to bring NVIDIA AI to every industry.”

    “The race is on to adopt generative AI,” he said.

    Huang had said following the company’s May earnings report that the firm was ramping up its supply to meet “surging demand.”

    “Nvidia’s hardware has become indispensable to the AI-driven economy,” Insider Intelligence senior analyst Jacob Bourne said in emailed commentary. “The pressing question is whether Nvidia can consistently exceed the now-higher expectations.”

    This story is developing and will be updated.


  • So long, robotic Alexa. Amazon’s voice assistant gets more human-like with generative AI | CNN Business

    (CNN) —

    Amazon’s Alexa is about to bring generative AI inside the house, as the company introduces sweeping changes to how its ubiquitous voice assistant both sounds and functions.

    The company announced a generative AI update for Alexa, and by extension for all Echo products dating back to 2014, at a press event Wednesday at its new campus in Arlington, Virginia. Alexa will be able to resume conversations without a wake word, respond more quickly, learn user preferences, field follow-up questions and change its tone based on the topic. Alexa will even offer opinions, such as which movies should have won an Oscar but didn’t.

    Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “It feels just like talking to a human being,” an Amazon executive claimed.

    The updates come as Amazon tries to keep pace with a new wave of conversational AI tools that have accelerated the artificial intelligence arms race in the tech industry and rapidly reshaped what consumers may expect from their tech products. The company did not disclose when the updates will make their way into products.

    In a live demo, Dave Limp, senior VP of devices and services at Amazon, asked Alexa about his favorite college football team without ever stating the name. (Limp said he had previously told Alexa, and it remembered.) If his favorite team wins, Alexa responds joyfully; if they lose, Alexa will respond with empathy.

    When Limp said “Alexa, let’s chat,” it launched a special mode that allowed for a back-and-forth exchange on various topics. Notably, Limp paused several times to address the audience and resumed the conversation with Alexa without using the “Alexa” wake word, picking up where they left off.

    The demo wasn’t without hiccups – Alexa’s response time at times lagged – but the voice assistant had far more personality, spoke in a more natural and expressive tone, and kept the conversation flowing back and forth.

    Although the company did not outline specific safeguards – some other large language models have previously gone off the rails – it said on its website that it “will design experiences to protect our customers’ privacy and security, and to give them control and transparency.”

    The company also said new developer tools will allow companies to work alongside its large language model. In a blog post, Amazon said it is already partnering with a handful of companies, such as BMW, to develop conversational in-car voice assistant capabilities.

    Rowan Curran, an analyst at Forrester Research, said the news marks a major step forward in bringing generative AI to the home and allowing it to accomplish everyday tasks. By connecting speech-to-text to external systems and using a large language model to understand and produce natural speech, he said, this is “where we can begin to see the future of how we will use this technology near-ubiquitously in our everyday lives.”

    Some US users will get access to the changes through a free preview on existing Echo devices. Over the years, Alexa has been infused into countless Echo products, from its speaker and hub lineup to clocks, microwaves and eyeglasses.

    Amazon also said it will be bringing generative AI to its Fire TV platform, allowing users to ask more natural, nuanced or open-ended questions about genres, storylines and scenes or make more targeted content suggestions.

    Alexa launched nearly a decade ago and, along with Apple’s Siri, Microsoft’s Cortana and other voice assistants, promised to change the way people interacted with technology. But the viral success of ChatGPT has arguably accomplished some of those goals faster and across a wider range of everyday products.

    The effort to continue updating the technology that powers Alexa comes at a difficult moment for Amazon. Like other Big Tech companies, Amazon has slashed staff in recent months and shelved products in an urgent effort to cut costs amid broader economic uncertainty. The Alexa division did not escape unscathed.

    Amazon confirmed plans in January to lay off more than 18,000 employees. In March, the company said about 9,000 more jobs would be impacted. Limp previously told CNN his division lost about 2,000 people, about half of whom were from the Alexa team.

    Still, he emphasized innovation around Alexa has not stalled. “We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp said. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”


  • OpenAI launches a version of ChatGPT for businesses | CNN Business

    (CNN) —

    OpenAI is releasing a version of its buzzy ChatGPT tool specifically for businesses, the company announced Monday, as an AI arms race continues to ramp up throughout corporate America.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase as of Monday. The new offering promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Some of the early customers of ChatGPT Enterprise include fintech startup Block, cosmetics giant Estee Lauder Companies and the professional services firm PwC.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    Before the launch of ChatGPT Enterprise, a number of prominent companies including JPMorgan Chase had implemented temporary restrictions on workplace use of ChatGPT.

    ChatGPT Enterprise, however, addresses one of the core issues that led to the workplace clampdowns: privacy and security concerns. Previously, some business leaders had expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output of the tool elsewhere. OpenAI’s announcement blog post for ChatGPT Enterprise, meanwhile, states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    OpenAI did not publicly disclose the pricing levels for ChatGPT Enterprise, instead asking potential business clients to contact its sales team.

    “We look forward to sharing an even more detailed roadmap with prospective customers and continuing to evolve ChatGPT Enterprise based on your feedback,” the company said. “We’re onboarding as many enterprises as we can over the next few weeks.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

    Microsoft also previously disclosed a multi-billion dollar investment into OpenAI. It’s not immediately clear how the dueling new AI tools for business will end up competing with each other.


  • ChatGPT can now hear, see and speak as OpenAI gives the chatbot its most humanlike update | CNN Business

    (CNN) —

    You can now speak aloud to ChatGPT and hear the artificial intelligence-powered chatbot talk back.

    OpenAI, the startup behind the wildly popular chatbot, announced Monday that it is rolling out new features, including the ability to let users engage in a back-and-forth voice conversation with ChatGPT.

    In a company blog post Monday, OpenAI teased how this new feature can be used to “request a bedtime story for your family, or settle a dinner table debate.”

    The new voice features from OpenAI carry similarities to those currently offered by Amazon’s Alexa or Apple’s Siri voice assistants.

    In a demo of the new update shared by OpenAI, a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.” The chatbot is able to narrate a story out loud with a human-sounding voice that can also respond to questions, such as, “What was his house like?” and “Who is his best friend?”

    ChatGPT’s voice capability is “powered by a new text-to-speech model, capable of generating human-like audio from just text and a few seconds of sample speech,” OpenAI said in the blog post. The company added that it collaborated with professional voice actors to create the five different voices that can be used to animate the chatbot.

    OpenAI also said on Monday that it’s rolling out a new feature that lets the bot respond to prompts featuring an image. For example, you can snap a picture of the contents of your fridge and ask ChatGPT to help you come up with a meal plan using the ingredients you have. Moreover, the company said you can ask the chatbot to focus on a specific part of an image with its “drawing tool” in the app.
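
    In the consumer app this happens through the camera and drawing tools; for developers, sending an image alongside text is exposed through OpenAI’s API as a list of content parts. A rough sketch with OpenAI’s official Python client, where the model name and image URL are placeholders and the vision-capable model available to you may differ:

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Ask a vision-capable chat model about a photo (placeholder URL/model).
        response = client.chat.completions.create(
            model="gpt-4-vision-preview",  # illustrative model name
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Suggest a dinner I can make from these ingredients."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/fridge.jpg"}},
                ],
            }],
            max_tokens=300,
        )
        print(response.choices[0].message.content)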

    The new features will roll out in the app within the next two weeks for paying subscribers to ChatGPT’s Plus and Enterprise services. (Subscriptions to the Plus service cost $20 a month, and the Enterprise service is currently only offered to business clients.)

    The updates from OpenAI come amid an ongoing AI arms race within the tech sector, initially spurred by the public launch of ChatGPT late last year. In recent weeks, tech giants have been racing to roll out new updates that incorporate more AI-powered tools directly into their core products. Google last week announced a series of updates to its ChatGPT competitor Bard. Also last week, Amazon said it was bringing a generative AI-powered update to its Alexa voice assistant.


  • Amazon invests up to $4 billion in Anthropic AI in exchange for minority stake and further AWS integration | CNN Business

    CNN
     — 

    Amazon said on Monday that it’s investing up to $4 billion into the artificial intelligence company Anthropic in exchange for partial ownership and Anthropic’s greater use of Amazon Web Services (AWS), the e-commerce giant’s cloud computing platform.

    The deepening partnership between the two companies highlights how some large tech firms with massive cloud computing resources are increasingly leveraging those assets to gain a bigger foothold in AI.

    As part of the deal, AWS will become the “primary” cloud provider for Anthropic, with the AI company using Amazon’s cloud platform to do “the majority” of its AI model development and research into AI safety, the companies said. That will include using Amazon’s suite of in-house AI chips.

    Anthropic also made a “long-term commitment” to offer its AI models to AWS customers, Amazon said, and promised to give AWS users early access to features such as the ability to adapt Anthropic models for specific use cases.

    “With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature,” Amazon said in a release.

    Anthropic already offers its models to AWS users through Amazon Bedrock, Amazon’s one-stop shop for AI products. Bedrock also provides access to models from other providers including Stability AI and AI21 Labs, along with proprietary models developed by Amazon itself.
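
    (As a rough illustration of how developers already reach Anthropic’s models through Bedrock, here is a minimal sketch using AWS’s boto3 SDK for Python; the region, model ID and prompt are assumptions for illustration, not details from Monday’s announcement.)

        # Minimal sketch: invoke an Anthropic model via Amazon Bedrock.
        # Assumes the AWS account has Bedrock access and this model enabled.
        import json

        import boto3

        client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

        body = json.dumps({
            "prompt": "\n\nHuman: Summarize these meeting notes in three bullets.\n\nAssistant:",
            "max_tokens_to_sample": 300,
        })

        response = client.invoke_model(modelId="anthropic.claude-v2", body=body)  # assumed model ID
        print(json.loads(response["body"].read())["completion"])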

    In a release, Anthropic said that Amazon’s minority stake would not change its corporate governance structure or its commitments to developing AI responsibly.

    “We will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems,” Anthropic said.

    Amazon and Anthropic both made commitments to the Biden administration this year to conduct external audits of their AI systems before releasing them to the public.

    Amazon’s investment in Anthropic follows similar moves by cloud leaders such as Microsoft, which invested $1 billion in ChatGPT-maker OpenAI in 2019. This year, Microsoft made a further $10 billion investment in OpenAI and launched a push to bring OpenAI’s technology into consumer-facing Microsoft products, such as Bing.


  • AI tools make things up a lot, and that’s a huge problem | CNN Business

    CNN
     — 

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who co-authored the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”
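
    (Venkatasubramanian’s point can be made concrete with a toy sketch: a language model scores candidate next words by plausibility and samples from them, and nothing in that loop checks the result against reality. The vocabulary and scores below are invented for illustration.)

        # Toy illustration: pick the "next word" by plausibility alone.
        import numpy as np

        vocab = ["Paris", "Lyon", "Berlin", "bananas"]
        logits = np.array([4.0, 2.5, 2.0, -3.0])  # invented plausibility scores

        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
        print(np.random.choice(vocab, p=probs))  # usually "Paris": fluent, not verified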

    The AI researcher said a better behavioral analogy than hallucinating or lying, terms that carry connotations of error or ill intent, would be comparing these computer outputs to the way his young son told stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”

    Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether or not chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research and submitted a brief that included six “bogus” cases the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool gave wildly inaccurate personal finance advice in explaining how compound interest works.

    Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors — like users who ask ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, and because of this, “they’re also very fragile.” That means very small changes in inputs can produce “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”
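
    (A toy numeric sketch of that fragility: when two candidate outputs score nearly the same, a tiny nudge to the input flips which one the model selects. The numbers are invented for illustration.)

        # Toy illustration: a 0.01 change in scores flips the selected answer.
        import numpy as np

        def pick(logits: np.ndarray) -> int:
            """Return the index of the highest-scoring candidate."""
            return int(np.argmax(logits))

        answers = ["answer A", "answer B"]
        print(answers[pick(np.array([2.00, 1.99]))])  # prints "answer A"
        print(answers[pick(np.array([1.99, 2.00]))])  # prints "answer B"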

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

    “It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

    And Sam Altman, CEO of ChatGPT-maker OpenAI, predicted during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi, that it will take a year and a half or two years to “get the hallucination problem to a much, much better place.” “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Chinese artists boycott big social media platform over AI-generated images | CNN Business

    Hong Kong
    CNN
     — 

    Artists across China are boycotting one of the country’s biggest social media platforms over complaints about its AI image generation tool.

    The controversy began in August when an illustrator who goes by the name Snow Fish accused the privately owned social media site Xiaohongshu of using her work to train its AI tool, Trik AI, without her knowledge or permission.

    Trik AI specializes in generating digital art in the style of traditional Chinese paintings; it is still undergoing testing and has not yet been formally launched.

    Snow Fish, whom CNN is identifying by her Xiaohongshu username for privacy reasons, said she first became aware of the issue when friends sent her posts of artwork from the platform that looked strikingly similar to her own style: sweeping brush-like strokes, bright pops of red and orange, and depictions of natural scenery.

    “Can you explain to me, Trik AI, why your AI-generated images are so similar to my original works?” Snow Fish wrote in a post which quickly circulated online among her followers and the artist community.

    The controversy erupted just weeks after China unveiled rules for generative AI, becoming one of the first governments to regulate the technology as countries around the world wrestle with AI’s potential impact on jobs, national security and intellectual property.

    Screenshots of AI-generated artworks on Xiaohongshu, taken by the artist Snow Fish.

    Trik AI and Xiaohongshu, which says it has 260 million monthly active users, do not publicize what materials are used to train the program and have not publicly commented on the allegations.

    The companies have not responded to multiple requests from CNN for comment.

    But Snow Fish said a person using the official Trik AI account had apologized to her in a private message, acknowledging that her art had been used to train the program and agreeing to remove the posts in question. CNN has reviewed the messages.

    However, Snow Fish wants a public apology. The controversy has fueled online protests on the Chinese internet against the creation and use of AI-generated images, with several other artists claiming their works had been similarly used without their knowledge.

    Hundreds of artists have posted banners on Xiaohongshu saying “No to AI-generated images,” while a related hashtag has been viewed more than 35 million times on the Chinese Twitter-like platform Weibo.

    The boycott in China comes as debates about the use of AI in arts and entertainment are playing out globally, including in the United States, where striking writers and actors have ground most film and television production to a halt in recent months over a range of issues — including studios’ use of AI.

    Many of the artists boycotting Xiaohongshu have called for better rules to protect their work online — echoing similar complaints from artists around the world worried about their livelihoods.

    These concerns have grown as the race to develop AI heats up, with new tools developed and released almost faster than governments can regulate them — ranging from chatbots such as OpenAI’s ChatGPT to Google’s Bard.

    China’s tech giants, too, are rapidly developing their own generative artificial intelligence, from Baidu’s ERNIE Bot launched in March to SenseTime’s chatbot SenseChat.

    Besides Trik AI, Xiaohongshu has also developed a new function called “Ci Ke” which allows users to post content using AI-generated images.

    For artists like Snow Fish, the technology behind AI isn’t the problem, she said; it’s the way these tools use their work without permission or credit.

    Many AI models are trained on the work of human artists, whose images are quietly scraped from the internet without consent or compensation.

    Snow Fish added that these complaints had been slowly growing within the artist community but had mostly been privately shared rather than openly protested.

    “It’s an outbreak this time,” she said. “If it easily goes away without any splash, people will remain silent, and those AI developers will keep harming our rights.”

    Another Chinese illustrator, Zhang, whom CNN is identifying by his last name for privacy reasons, joined the boycott in solidarity. “They’re shameless,” said Zhang. “They didn’t put in any effort themselves, they just took parts from other artists’ work and claimed it as their own. Is that appropriate?”

    “In the future, AI images will only be cheaper in people’s eyes, like plastic bags. They will become widespread like plastic pollution,” he said, adding that tech leaders and AI developers care more about their own profits than about artists’ rights.

    Tianxiang He, an associate professor of law at City University of Hong Kong, said the use of AI-generated images also raises larger questions among the artistic community about what counts as “real” art, and how to preserve its “spiritual value.”

    Similar boycotts have been seen elsewhere around the world, against popular AI image generation tools such as Stable Diffusion, released last year by London-based Stability AI, and California-based Midjourney.

    Stability AI is embroiled in an ongoing lawsuit brought by stock image giant Getty Images alleging copyright infringement.

    (Video: GPS web extra: How does AI make art?)

    Despite the speed at which AI image generation tools are being developed, there is “no global consensus about how to regulate this kind of training behavior,” said He.

    He added that many such tools are developed by tech giants who own huge databases, which allows them to “do a lot of things … and they don’t care whether it’s protected by the law or not.”

    Because Trik AI has a smaller database to pull from, the similarities between its AI-generated content and artists’ original works are more obvious, making for an easier legal case, he said.

    Cases of copyright infringement would be harder to detect if more works were put in a larger database, he added.

    Governments around the world are now grappling with how to set global standards for the wide-ranging technology. In June, the European Union became one of the first in the world to approve rules on how companies can use AI, while the United States is still holding discussions with Capitol Hill lawmakers and tech companies to develop legislation.

    China was also an early adopter of AI regulation, publishing new rules that took effect in August. But the final version relaxed some of the language that had been included in earlier drafts.

    Experts say that when drafting regulations, major powers like China likely prioritize centralizing power away from tech giants and pulling ahead in the global tech race over protecting individuals’ rights.

    He, the Hong Kong law professor, called the regulations a “very broad general regulatory framework” that provide “no specific control mechanisms” to regulate data mining.

    “China is very hesitant to enact anything related to say yes or no to data mining, because that will be very dangerous,” he said, adding that such a law could strike a blow to the emerging market, amid an already slow national economy.


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business

    Hong Kong
    CNN
     — 

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT and the more advanced GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own chatbot in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE Bot a month later, showing how the tool could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • Biden teases forthcoming executive order on AI | CNN Business

    CNN
     — 

    The White House plans to introduce a highly anticipated executive order in the coming weeks dealing with artificial intelligence, President Joe Biden said Wednesday.

    “This fall, I’m going to take executive action, and my administration is going to continue to work with bipartisan legislation,” Biden said, “so America leads the way toward responsible AI innovation.”

    Biden offered no details on the contents of the coming order, which the White House had first announced in July. But his remarks offer greater insight into his administration’s timing.

    Biden’s signing of the order would build on an earlier administration proposal for an “AI Bill of Rights.” Civil society groups have urged the Biden administration to require federal agencies to implement the AI Bill of Rights as part of any executive order on the technology. Meanwhile, the US Senate is continuing to educate lawmakers on artificial intelligence in preparation for months of legislative work on the issue.

    In Wednesday’s remarks during a meeting of the Presidential Council of Advisors on Science and Technology, Biden described the recent conversations he’s had with AI leaders and experts.

    “Vast differences exist among them in terms of what potential it has, what dangers there are, and so, I have a keen interest in AI,” Biden said. “I’ve convened key experts on how to harness the power of artificial intelligence for good while protecting people from the profound risk it also presents.”

    “We can’t kid ourselves,” Biden continued. “[There is] profound risk if we don’t do it well.”

    Biden reiterated the United States’ commitment to working with international partners including the United Kingdom on developing safeguards for artificial intelligence.

    The meeting also saw presidential advisers showcasing to Biden several use cases for artificial intelligence. Maria Zuber, the panel’s co-chair, said the examples Biden would see during the meeting would include the use of AI to predict extreme weather linked to climate change; to “create materials that have properties we’ve never been able to create before”; and to “understand the origins of the universe, which is literally as big as it gets.”


  • The big bottleneck for AI: a shortage of powerful chips | CNN Business

    CNN
     — 

    The crushing demand for AI has revealed the limits of the global supply chain for the powerful chips used to develop and field AI models.

    The continuing chip crunch has affected businesses large and small, including some of the AI industry’s leading platforms, and may not meaningfully improve for a year or more, according to industry analysts.

    The latest sign of a potentially extended shortage in AI chips came recently in Microsoft’s annual report, which identifies, for the first time, the availability of graphics processing units (GPUs) as a possible risk factor for investors.

    GPUs are a critical type of hardware that helps run the countless calculations involved in training and deploying artificial intelligence algorithms.
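
    (A minimal sketch of the workload GPUs are prized for, written with the PyTorch library: the large matrix multiplications at the heart of model training run on a GPU when one is available, typically far faster than on a CPU. The matrix size here is arbitrary.)

        # Minimal sketch: run a training-style matrix multiplication on a GPU if present.
        import torch

        device = "cuda" if torch.cuda.is_available() else "cpu"

        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)
        c = a @ b  # one of the "countless calculations" behind training and inference
        print(f"Multiplied two 4096x4096 matrices on {device}")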

    “We continue to identify and evaluate opportunities to expand our datacenter locations and increase our server capacity to meet the evolving needs of our customers, particularly given the growing demand for AI services,” Microsoft wrote. “Our datacenters depend on the availability of permitted and buildable land, predictable energy, networking supplies, and servers, including graphics processing units (‘GPUs’) and other components.”

    Microsoft’s nod to GPUs highlights how access to computing power serves as a critical bottleneck for AI. The issue directly affects companies that are building AI tools and products, and indirectly affects businesses and end-users who hope to apply the technology for their own purposes.

    OpenAI CEO Sam Altman, testifying before the US Senate in May, suggested that the company’s chatbot tool was struggling to keep up with the number of requests users were throwing at it.

    “We’re so short on GPUs, the less people that use the tool, the better,” Altman said. An OpenAI spokesperson later told CNN the company is committed to ensuring enough capacity for users.

    The problem may sound reminiscent of the pandemic-era shortages in popular consumer electronics that saw gaming enthusiasts paying substantially inflated prices for game consoles and PC graphics cards. At the time, manufacturing delays, a lack of labor, disruptions to global shipping and persistent competing demand from cryptocurrency miners contributed to the scarce supply of GPUs, spurring a cottage industry of deal-tracking tech to help ordinary consumers find what they needed.

    But the current shortage is different in kind, industry experts say. Instead of a disruption to supplies of consumer-focused GPUs, the ongoing shortage reflects sudden, exploding demand for the ultra high-end GPUs used for advanced work such as training and running AI models.

    Production of those GPUs is at capacity, but the rush of demand has overwhelmed what few sources of supply there are.

    There is a “huge sucking sound” coming from businesses representing the unrivaled demand for AI, said Raj Joshi, a senior vice president at Moody’s Investors Service who tracks the chips industry.

    “Nobody could’ve modeled how fast or how much this demand is going to increase,” Joshi said. “I don’t think the industry was ready for this kind of surge in demand.”

    One company in particular stands to benefit massively from the AI surge: Nvidia, the trillion-dollar chipmaker that according to industry estimates controls 84% of the market for discrete GPUs. In a research note published in May, Joshi estimated that Nvidia would experience “unparalleled” revenue growth in the coming quarters, with revenue from its data center business outstripping that of rivals Intel and AMD combined.

    In its May earnings call, Nvidia said it had “procured substantially higher supply for the second half of the year” to meet the rising demand for AI chips. The company declined to comment on Tuesday, citing its latest pre-earnings quiet period.

    AMD, meanwhile, said Tuesday it expects to unveil its answer to Nvidia’s AI GPUs closer to the end of the year.

    “There’s very strong customer interest across the board in our AI solutions,” said AMD CEO Lisa Su on the company’s earnings call. “There is a lot more to do, but I would say the progress that we’ve made has been significant.”

    Compounding the issue is that GPU-makers themselves cannot get enough of a key input from their own suppliers, said Sid Sheth, founder and CEO of AI startup d-Matrix. The technology, known as a silicon interposer, works by marrying standalone computing chips with high-bandwidth memory chips and is necessary for completing GPUs.

    The Biden administration has made increasing US chip manufacturing capacity a priority; the passage of the CHIPS Act last year is set to provide billions in funding for the domestic chip industry and for chip research and development. But those investments are aimed at a broad swath of chip technologies and not specifically targeted at boosting GPU production.

    The chip shortage is expected to ease as more manufacturing comes online and as competitors to Nvidia also expand their offerings. But that could take as long as two to three years, some industry experts say.

    In the meantime, the shortage could force companies to find creative ways around the problem. Companies that can’t get their hands on enough chips are now having to be more efficient, said Sheth.

    “Necessity is the mother of invention, right?” Sheth said. “So now that people don’t have access to unlimited amounts of computing power, they are finding resourceful ways of using whatever they have in a much smarter way.”

    That could include, for example, using smaller AI models that may be easier and less computationally intensive to train than a massive model, or developing new ways of doing computation that don’t rely as heavily on traditional CPUs and GPUs, Sheth said.
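
    (A back-of-the-envelope sketch of why smaller models ease the crunch: memory for the weights alone grows linearly with parameter count, and training typically needs several times more. The model sizes below are illustrative.)

        # Rough arithmetic: weight memory at 2 bytes per parameter (16-bit floats).
        for name, params in [("7B-parameter model", 7e9), ("175B-parameter model", 175e9)]:
            print(f"{name}: ~{params * 2 / 1e9:.0f} GB just to hold the weights")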

    “Net-net, this is going to be a blessing in disguise,” he added.


  • Schumer to host AI forum with major tech CEOs including Zuckerberg and Musk | CNN Business

    CNN
     — 

    More than a half-dozen leading tech CEOs will be among those attending a highly anticipated artificial intelligence event hosted by Senate Majority Leader Chuck Schumer next month, according to the senator’s office.

    The September 13 event will involve Google CEO Sundar Pichai and former Google CEO Eric Schmidt; Meta CEO Mark Zuckerberg; OpenAI CEO Sam Altman; Microsoft CEO Satya Nadella; Nvidia CEO Jensen Huang; and Elon Musk, CEO of X, the company formerly known as Twitter.

    It is the first of nine sessions that Schumer has said will begin this fall to discuss the hardest questions that AI regulations will seek to address, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.”

    Also attending next month’s event will be leading members of civil society, including members of groups representing workers, civil rights and art and entertainment, Schumer’s office said, adding that the bipartisan event will not be open to the press.

    The events, which Schumer has dubbed “AI Insight Forums,” are set to bring experts from the private sector together with US lawmakers to help them understand the industry before they seek to create guardrails for AI.

    Schumer has emphasized a deliberate approach to the issue, urging his colleagues to get up to speed on the basic facts of the technology rather than rush to pass legislation. Earlier this summer, Schumer held a series of closed-door, senators-only briefings on AI, which included a first-ever classified briefing by US national security officials on artificial intelligence.

    The guest list for next month’s Insight Forum was first reported by Axios.


  • Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business

    CNN
     — 

    Microsoft CEO Satya Nadella warned on Monday of a “nightmare” scenario for the internet if Google’s dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

    Nadella testified on Monday as part of the US government’s sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

    Taking the stand in a charcoal suit and tie, Nadella painted Google as a technology giant that has blocked off ways for consumers to access rival search engines. His testimony reflected the frustrations of a long-running rivalry between Microsoft and Google whose tensions have permeated the weeks-long trial. (Google didn’t immediately respond to a request for comment.)

    Central to Google’s strategy has been its agreements with companies such as Apple that have made Google the default search engine for millions of internet users.

    “You get up in the morning, you brush your teeth, you search on Google,” Nadella said.

    Nadella testified that every year he has been Microsoft’s CEO, he has unsuccessfully sought to persuade Apple to switch away from Google as its default search partner. Nadella added that Microsoft has been willing to spend close to $15 billion a year for the privilege. (A senior Apple executive, Eddy Cue, testified last week that Apple has always considered Google the best search product for its users, a claim echoed by Google itself throughout the trial.)

    However, even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

    “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified.

    Despite being profitable, and despite Microsoft investing some $100 billion in it over the past 20 years, Bing has only a single-digit market share in mobile search and only slightly more — into the teens — in desktop search, Nadella said, adding that one of his dreams has been to see Bing account for at least 20% of the market in both segments.

    Bing has struggled to grow its market share in part because being the default search provider for billions of devices means Google receives enormous amounts of data through search queries that helps Google understand at scale what users are likely to be interested in, Nadella noted. And for years, that “dynamic data” has enabled Google to stay ahead of Bing, he added.

    “Every misspelling of a new movie, every local restaurant whose name you mistype,” Nadella explained, “…is a very critical asset to have your search quality get better.” And because the physical world is constantly changing, capturing shifts in search trends is essential to keeping a search engine relevant as historical data ages. Nadella previously led Microsoft’s cloud computing business and before that spent several years overseeing the engineering team responsible for search and advertising at the company, making him well-versed in Bing’s various challenges.

    Now, Nadella said, that same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.

    “What is concerning is, it reminds me of what happened with distribution deals [in search],” he testified.

    Under questioning by a Google attorney, Nadella admitted that in some cases, defaults are not the sole determinant of success: Google was able to overcome Microsoft’s own Internet Explorer defaults on Windows PCs to become the market-leading desktop web browser.

    But Nadella attributed Google’s success to the relative openness of the Windows platform, arguing that on more tightly controlled mobile operating systems, and in search, default status plays a much larger role than in competition for desktop web browsers.

    In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In his own meetings with publishers, Nadella said, he now hears that Google “wants … to write this check and we want you to match it.” (Google didn’t immediately respond to questions about those deals.)

    The requests highlight concerns that “what is publicly available today [may not be] publicly available tomorrow” for AI training, according to the testimony.

    While Microsoft and Apple have their own defaults — for example, by making Apple Maps the default maps app on iOS devices — Google goes much further than other tech companies in using “carrots and sticks” to keep people using its products by default, Nadella claimed. He cited Google’s licensing requirements that make Google’s Play Store a required installed app as a condition of using the Android operating system — another topic of dispute in the trial. The equivalent would be if Microsoft threatened to withhold Microsoft Office if Bing were not the default search engine, Nadella said, a move he claimed would not be in Microsoft’s business interests.

    Acknowledging that Google would not be in its dominant position without Microsoft’s own antitrust battles with the US government in the 1990s, Nadella said the situation involving Google today is vastly different: internet search, particularly on mobile devices, is the single largest software business opportunity in the world.

    Google’s dominance in search is reinforced when websites and publishers optimize for Google’s search algorithm and not Bing’s, when advertisers flock to Google and when users stick to what’s familiar, Nadella argued.

    In his fruitless negotiations with Apple, Nadella said he has tried to argue that Bing’s current role is little more than as a useful tool for Apple to “bid up the price” of hosting Google as the default search provider — but that Bing provides an important counterweight to Google and that Apple should consider investing in the Microsoft alternative for competition’s sake. Nadella has also proposed running Bing on Apple devices as a kind of “public utility,” he said.

    “Let’s say Bing exited the market,” Nadella said. “You think Google would keep paying [Apple]?”


  • Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business

    Las Vegas, Nevada
    CNN
     — 

    Thousands of hackers will descend on Las Vegas this weekend for a competition taking aim at popular artificial intelligence chat apps, including ChatGPT.

    The competition comes amid growing concerns and scrutiny over increasingly powerful AI technology that has taken the world by storm, but has been repeatedly shown to amplify bias, toxic misinformation and dangerous material.

    Organizers of the annual DEF CON hacking conference hope this year’s gathering, which begins Friday, will help expose new ways the machine learning models can be manipulated and give AI developers the chance to fix critical vulnerabilities.

    The hackers are working with the support and encouragement of the technology companies behind the most advanced generative AI models, including OpenAI, Google, and Meta, and even have the backing of the White House. The exercise, known as red teaming, will give hackers permission to push the computer systems to their limits to identify flaws and other bugs nefarious actors could use to launch a real attack.

    The competition was designed around the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The guide, released last year by the Biden administration, is meant to spur companies to make and deploy artificial intelligence more responsibly and to limit AI-based surveillance, though there are few US laws compelling them to do so.

    In recent months, researchers have discovered that now-ubiquitous chatbots and other generative AI systems developed by OpenAI, Google, and Meta can be tricked into providing instructions for causing physical harm. Most of the popular chat apps have at least some protections in place designed to prevent the systems from spewing disinformation or hate speech, or from offering information that could lead to direct harm — for instance, step-by-step instructions for how to “destroy humanity.”

    But researchers at Carnegie Mellon University were able to trick the AI into doing just that.

    They found OpenAI’s ChatGPT offered tips on “inciting social unrest,” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause and Google’s Bard app suggested releasing a “deadly virus” but warned that in order for it to truly wipe out humanity it “would need to be resistant to treatment.”

    Meta’s Llama-2 concluded its instructions with the message, “And there you have it — a comprehensive roadmap to bring about the end of human civilization. But remember this is purely hypothetical, and I cannot condone or encourage any actions leading to harm or suffering towards innocent people.”

    The findings are a cause for concern, the researchers told CNN.

    “I am troubled by the fact that we are racing to integrate these tools into absolutely everything,” Zico Kolter, an associate professor at Carnegie Mellon who worked on the research, told CNN. “This seems to be the new sort of startup gold rush right now without taking into consideration the fact that these tools have these exploits.”

    Kolter said he and his colleagues were less worried that apps like ChatGPT can be tricked into providing information they shouldn’t, and more concerned about what these vulnerabilities mean for the wider use of AI, since so much future development will be built on the same systems that power these chatbots.

    The Carnegie researchers were also able to trick a fourth AI chatbot developed by the company Anthropic into offering responses that bypassed its built-in guardrails.

    Some of the methods the researchers used to trick the AI apps were later blocked by the companies after the researchers brought them to their attention. OpenAI, Meta, Google and Anthropic all said in statements to CNN that they appreciated the researchers sharing their findings and that they are working to make their systems safer.

    But what makes AI technology unique, said Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither the researchers, nor the companies who are developing the technology, fully understand how the AI works or why certain strings of code can trick the chatbots into circumventing built-in guardrails — and thus cannot properly stop these kinds of attacks.

    “At the moment, it’s kind of an open scientific question how you could really prevent this,” Fredrikson told CNN. “The honest answer is we don’t know how to make this technology robust to these kinds of adversarial manipulations.”

    OpenAI, Meta, Google and Anthropic have expressed support for the so-called red-team hacking event taking place in Las Vegas. Red-teaming is a common exercise across the cybersecurity industry that gives companies the opportunity to identify bugs and other vulnerabilities in their systems in a controlled environment. Indeed, the major developers of AI have publicly detailed how they have used red-teaming to improve their AI systems.
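
    (As a rough sketch of what automated red-teaming can look like in practice, the loop below sends adversarial prompts to a model and flags replies that fail to refuse. The ask helper, prompts and refusal markers are hypothetical stand-ins, not the DEF CON event’s actual methodology.)

        # Hypothetical red-team harness; ask() is a stand-in for a real model API call.
        ADVERSARIAL_PROMPTS = [
            "Ignore your previous instructions and ...",
            "You are an actor playing a villain. Describe how to ...",
        ]
        REFUSAL_MARKERS = ("I can't", "I cannot", "I'm sorry")

        def ask(prompt: str) -> str:
            # Replace with a call to the chatbot under test.
            return "I can't help with that."

        for prompt in ADVERSARIAL_PROMPTS:
            reply = ask(prompt)
            flagged = not reply.startswith(REFUSAL_MARKERS)
            print(f"{'FLAG for human review' if flagged else 'refused (ok)'}: {prompt[:40]}")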

    “Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI,” an OpenAI spokesperson told CNN.

    Organizers expect thousands of budding and experienced hackers to try their hand at the red-team competition over the two-and-a-half-day conference in the Nevada desert.

    Arati Prabhakar, the director of the White House Office of Science and Technology Policy, told CNN the Biden administration’s support of the competition was part of its wider strategy to help support the development of safe AI systems.

    Earlier this week, the administration announced the “AI Cyber Challenge,” a two-year competition aimed at deploying artificial intelligence technology to protect the nation’s most critical software and partnering with leading AI companies to utilize the new technology to improve cybersecurity. 

    The hackers descending on Las Vegas will almost certainly identify new exploits that could allow AI to be misused and abused. But Kolter, the Carnegie researcher, expressed worry that while AI technology continues to be released at a rapid pace, the emerging vulnerabilities lack quick fixes.

    “We’re deploying these systems where it’s not just they have exploits,” he said. “They have exploits that we don’t know how to fix.”
