ReportWire

Tag: iab-artificial intelligence

  • Microsoft, Google post strong quarterly sales growth as Big Tech continues its comeback | CNN Business




    New York (CNN) —

    Big tech companies are continuing a turnaround from last year, as Alphabet, Microsoft and Snap kicked off earnings season with strong sales results for the quarter ended in September.

    Google parent company Alphabet on Tuesday reported quarterly sales of $76.69 billion, up 11% from the same period in the prior year. The company also posted profits of $19.69 billion for the quarter.

    Meanwhile, Microsoft posted 13% year-on-year sales growth to $56.5 billion, also beating expectations. Microsoft’s quarterly profits hit $22.3 billion, up 27% from the year-ago period.

    Snapchat parent Snap on Tuesday reported a return to sales growth in the September quarter, after two consecutive quarters of declining sales. The company reported revenue of nearly $1.2 billion, an increase of 5% from the same period in the prior year and ahead of analysts’ projections. The company reported a net loss of $368 million.

    The strong results come after Microsoft, Alphabet, Snap and other tech companies carried out mass layoffs and other cost-cutting moves over the past year, following a difficult 2022 when advertisers and other clients pulled back on spending amid concerns over the macroeconomic environment.

    Despite beating Wall Street’s sales expectations, shares of Alphabet (GOOGL) and Snap (SNAP) each dipped around 5% in after-hours trading following the reports, although Snap’s quickly regained some ground. Microsoft (MSFT) shares gained around 4% in after-hours trading.

    “Q3 tech season has been quite strong thus far,” Tejas Dessai, research analyst at investment fund GlobalX, said in a statement. “These numbers clearly defy concerns of near term economic weakness looming.”

    Google’s advertising business generated quarterly revenue of $59.6 billion, up from $54.5 billion in the prior year. YouTube ads, meanwhile, garnered some $7.9 billion in revenue, up roughly 12% year-over-year.

    YouTube Shorts, the company’s TikTok competitor, hit a milestone 70 billion daily views last quarter, Alphabet CEO Sundar Pichai said on a call with analysts Tuesday afternoon.

    Google’s cloud business, however, reported revenue of $8.41 billion — missing analysts’ estimates.

    Jesse Cohen, a senior analyst at Investing.com, attributed Alphabet’s after-hours stock fall to the “relatively weak performance in its Google cloud platform, which is at risk of falling further behind [Microsoft’s] Azure and [Amazon’s] AWS.” Still, despite taking a hit in 2022 amid a broader tech sector downturn, shares for Alphabet have climbed roughly 56% since the start of 2023, beating the tech-heavy Nasdaq index.

    Google’s report comes as the tech giant is in the antitrust hot seat. US prosecutors officially opened a landmark antitrust trial against Google last month with sweeping allegations that the company engaged in anticompetitive behavior to maintain its dominance over search. (As the legal showdown rages on, Google has continued to deny allegations that it operated illegally.)

    Google also confirmed last month plans to lay off hundreds of staffers in its recruiting division, as it continues cost-cutting efforts in some areas. These more targeted layoffs came after Alphabet in January cut around 12,000 jobs — about 6% of its workforce.

    Still, Google has signaled that it remains committed to investing heavily in generative artificial intelligence technology. Last month, Google rolled out a major expansion of its Bard AI chatbot tool.

    “As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts,” Pichai said on the call. “We remain committed to durably re-engineering our cost base in order to help create capacity for these investments, in support of long-term sustainable financial value.”

    Microsoft’s recent investments in AI technology helped boost its sales in the September quarter, especially in its key cloud division. Sales from Microsoft’s “intelligent cloud” business — its biggest revenue driver — grew 19% from the year-ago quarter to $24.3 billion.

    Revenue from the company’s “productivity and business processes” business, which includes LinkedIn and Office commercial and consumer products, also grew 13% year-over-year to $18.6 billion.

    “Microsoft is firing on all cylinders and AI is clearly driving growth,” Cohen said in a research note following the company’s report. “The results indicated that artificial intelligence products are stimulating sales and already contributing to top and bottom-line growth.”

    But economic jitters among consumers appear to still have some impact on the company’s bottom line. Devices revenue, which includes sales of laptops, tablets and Xbox consoles, decreased 22% year-over-year, despite a 3% sales increase in the overall “more personal computing” segment. Ongoing concerns about a potential economic slowdown could continue to weigh on the company as it heads into the crucial holiday device sales season.

    The report is Microsoft’s first since the company closed its $69 billion acquisition of “Call of Duty” maker Activision Blizzard earlier this month. While the deal didn’t factor into this quarter’s results, it’s expected to supercharge the company’s gaming business.

    “Microsoft now controls 30 game studios and some of the most well-known games across the industry,” Edward Jones analyst Logan Purk said in a research note earlier this month. “With a massive cloud network and now a compelling library of games, Microsoft has a leg up on peers” in gaming, he said.

    Following the Activision takeover, “we’re looking forward to one of our strongest first-party holiday [game] lineups ever, including new titles like Call of Duty Modern Warfare 3,” CEO Satya Nadella said on an analyst call Tuesday. The company said it expects roughly $400 million of operating expenses in the fourth quarter to come as a result of the acquisition.

    Snap said its sales growth was driven in part by its ongoing efforts to revamp its advertising technology, following changes to Apple’s app tracking policies that dealt a blow to the business models of Snapchat, Facebook and other platforms.

    “We are focused on improving our advertising platform to drive higher return on investment for our advertising partners, and we have evolved our go-to-market efforts to better serve our partners and drive customer success,” CEO Evan Spiegel said in a statement.

    Snap also reported that it now has 406 million daily active users, up 12% compared to the year-ago quarter. And time spent watching Spotlight — Snapchat’s TikTok clone — grew 200% year-over-year, according to the company.

    The company also recently announced that it had reached more than 5 million subscribers to its Snapchat+ subscription program, a key effort to diversify its revenue.

    Snap said Tuesday that its chief operating officer, Jerry Hunter, plans to retire. Hunter, who spent seven years at the company, will step down from his role as of the end of the month, but will remain at the company until July 1, 2024, to support the transition.

    The company noted that some advertisers temporarily paused their spending following the outbreak of the Israel-Hamas war. Because of the “unpredictable nature” of the war, Snap declined to provide formal guidance for the fourth quarter, but said its internal forecast assumes year-over-year quarterly revenue growth between 2% and 6%.


  • New York City unveils an ‘artificial intelligence action plan’ | CNN Business





    (CNN) —

    The same New York City administration that launched a “Rat Action Plan” is back with an “Artificial Intelligence Action Plan.”

    Mayor Eric Adams on Monday unveiled a citywide AI “action plan” that pledged – in broad brushstrokes – to evaluate AI tools and associated risks, boost AI skills among city employees and support “the responsible implementation of these technologies to improve quality of life for New Yorkers,” according to a statement from the mayor’s office.

    The city’s 51-page AI action plan establishes a series of steps the city will take in the coming years to help better understand and responsibly implement the technology that has taken the tech sector and broader business world by storm in recent months.

    While government use of automated technologies has often courted controversy, New York City’s approach to AI, so far, seems to be focused on laying a framework for future AI use-cases as well as engaging with outside experts and the public.

    The first step listed in the city’s AI action plan is establishing an “AI Steering Committee” of city agency stakeholders. The document goes on to list nearly 40 “actions,” with 29 of those set to be started or completed within the next year. The city said it will publish an annual AI progress report to communicate the city’s updates and implementation of the plan.

    Also on Monday, city officials said the government was piloting the first citywide AI-powered chatbot to help business owners navigate operating and growing businesses in New York City. The AI chatbot, already available in beta on the official city of New York website, was trained on information from more than 2,000 NYC Business web pages.

    The chatbot uses Microsoft’s Azure AI services, per a disclaimer on the tool.

    In a statement announcing the AI action plan, Mayor Adams acknowledged “the potential pitfalls and associated risks these technologies present,” and pledged to be “clear-eyed” about these.

    The mayor also expressed hope that the action plan will “strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”


  • Hurricane Idalia and Labor Day could send gas prices and inflation higher | CNN Business



    A version of this story first appeared in CNN Business’ Before the Bell newsletter.


    New York (CNN) —

    Labor Day — one of the busiest driving holidays in the US — is on the horizon, and so is Hurricane Idalia. That’s potentially bad news for gas prices.

    The storm, which is expected to make landfall in Florida as a Category 3 hurricane on Wednesday, could bring 100 mile-per-hour winds and flooding that extends hundreds of miles up the east coast. The impact could take gasoline refinery facilities offline and may limit some Gulf oil production and supplies. Plus, demand for gas is expected to surge as residents of the impacted areas evacuate.

    “Idalia… could pose risk to oil and gas output in the US Gulf,” wrote the Nasdaq Advisory Services Energy Team.

    The storm is expected to make landfall as drivers nationwide load into their vehicles for the Labor Day weekend, pushing up the demand for gasoline even further.

    Altogether, it means the price of oil and gasoline could remain elevated well into the fall.

    Generally, summer demand for oil tends to wane in September, but so does supply as refineries shift from summer fuels to “oxygenated” winter fuels, said Louis Navellier of Navellier and Associates. Since the 1990s, the US has required manufacturers to include more oxygen in their gasoline during the colder months to prevent excessive carbon monoxide emissions.

    With the storm approaching, that trend may not play out.

    What’s happening: Gas prices are already at $3.82 a gallon. That’s the second highest price for this time of year since at least 2004, according to Bespoke Investment Group. (The only time the national average has been higher for this period was last summer, when prices hit $3.85 a gallon).

    Geopolitical tensions have been supporting high oil and gas prices for some time. Recently, increased crude oil imports into China, production cuts by Russia and Saudi Arabia and extreme heat set off a late-summer spike in gas prices. And the threat of powerful hurricanes could send them even higher.

    Analysts at Citigroup have warned that this hurricane season could seriously impact power supplies.

    “Two Category 3 or higher hurricanes landing on US shores could massively disrupt supplies for not weeks but months,” Citigroup analysts wrote in a note last week. In 2005, for example, gas prices surged by 46% between Memorial Day and Labor Day because of the landfall of Hurricane Katrina, according to Bespoke.

    What it means: The Federal Reserve and central banks around the world have been fighting to bring down stubbornly high inflation for more than a year. This week we’ll get some highly awaited economic data: The Fed’s preferred inflation gauge, the Personal Consumption Expenditures index, is due out on Thursday. But the task of inflation-busting is a lot more difficult when energy prices are high, and it’s even harder when they’re on the rise.

    The PCE price index uses a complicated formula to determine how much weight to give to energy prices each month, but they typically comprise a significant chunk of the headline inflation rate.

    “Crude oil price remains elevated, even after the surge at the start of the Russia-Ukraine War,” said Andrew Woods, oil analyst at Mintec, a market intelligence firm. “Energy prices have been a major contributor to persistently high inflation in the US, so the crude oil price will remain a watch-out factor for future inflation.”

    High oil and gas prices are one of the largest contributing factors to inflation. That’s bad news for drivers but tends to be great for the energy industry, as oil prices and energy stocks are closely interlinked.

    Energy stocks were trading higher on Monday. The S&P 500 energy sector was up around 0.75%. Exxon Mobil (XOM) was 0.85% higher, BP (BP) was up 1.36% and Chevron (CVX) was up 0.75%.

    OpenAI will release a version of its popular ChatGPT tool made specifically for businesses, the company announced on Monday.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase immediately.

    The new offering, reports my colleague Catherine Thorbecke, promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Fintech startup Block, cosmetics giant Estee Lauder and professional services firm PwC have already signed on as customers.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    A multitude of leading newsrooms, meanwhile, have recently injected code into their websites that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. CNN’s Reliable Sources has found that CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN, and the Gothamist, among others, have taken the step to shield themselves.
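    For context, OpenAI has documented that publishers can opt out of GPTBot crawling through the standard robots.txt mechanism. A minimal sketch of the kind of rule these sites added (the `GPTBot` token is OpenAI's published user agent; each outlet's actual file varies):

```
# robots.txt at the site root — asks OpenAI's GPTBot crawler to skip the entire site
User-agent: GPTBot
Disallow: /
```

    Because robots.txt is advisory, this relies on the crawler honoring the protocol rather than technically blocking requests.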

    American Airlines just got smacked with the largest-ever fine for keeping passengers waiting on the tarmac during multi-hour delays.

    The Department of Transportation is levying the $4.1 million fine, “the largest civil penalty that the Department has ever assessed” it said in a statement, for lengthy tarmac delays of 43 flights that impacted more than 5,800 passengers. The flights occurred between 2018 and 2021, reports CNN’s Gregory Wallace.

    In the longest of the delays, passengers sat aboard a plane in Texas in August 2020 for six hours and three minutes. The 105-passenger flight had landed after being diverted from the Dallas-Fort Worth International Airport due to severe weather, with the DOT alleging that “American (AAL) lacked sufficient resources to appropriately handle several of these flights once they landed.”

    Federal rules cap the time passengers can be held without the opportunity to get off, whether prior to takeoff or after landing, at three hours for domestic flights and four hours for international flights. Current rules also require airlines to provide passengers with water and a snack.

    American told CNN the delays all resulted from “exceptional weather events” and “represent a very small number of the 7.7 million flights during this time period.”

    The company also said it has invested in technology to better handle flights in severe weather and reduce the congestion at airports.


  • AI fears overblown? Theoretical physicist calls chatbots ‘glorified tape recorders’ | CNN Business




    New York (CNN) —

    The public’s anxiety over new AI technology is misguided, according to theoretical physicist Michio Kaku.

    In an interview with CNN’s Fareed Zakaria on Sunday, the futurologist said chatbots like OpenAI’s ChatGPT will benefit society and increase productivity. But fear has driven people to largely focus on the negative implications of the programs, which he terms “glorified tape recorders.”

    “It takes snippets of what’s on the web created by a human, splices them together and passes it off as if it created these things,” he said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

    However, he said, chatbots cannot discern true from false: “That has to be put in by a human.”

    According to Kaku, humanity is in its second stage of computer evolution. The first was the analog stage, “when we computed with sticks, stones, levers, gears, pulleys, string.”

    After that, around World War II, he said, we switched to electricity-powered transistors. It made the development of the microchip possible and helped shape today’s digital landscape.

    But this digital landscape rests on the idea of two states like “on” and “off,” and uses binary notation composed of zeros and ones.

    “Mother Nature would laugh at us because Mother Nature does not use zeros and ones,” Kaku said. “Mother Nature computes on electrons, electron waves, waves that create molecules. And that’s why we’re now entering stage three.”

    He believes the next technological stage will be in the quantum realm.

    Quantum computing is an emerging technology utilizing the various states of particles like electrons to vastly increase a computer’s processing power. Instead of using computer chips with two states, quantum computers use various states of vibrating waves. It makes them capable of analyzing and solving problems much faster than normal computers.

    Several tech giants – IBM (IBM), Microsoft (MSFT), Google (GOOG) and Amazon (AMZN), among others – are developing their own quantum computers, and have granted access to a number of companies to use their technology through the cloud. The computers could help businesses with risk analysis, supply chain logistics, and machine learning.

    But beyond business applications, Kaku said quantum computing could also help advance health care. “Cancer, Parkinson’s, Alzheimer’s disease – these are diseases at the molecular level. We’re powerless to cure these diseases because we have to learn the language of nature, which is the language of molecules and quantum electrons.”


  • OpenAI launches a version of ChatGPT for businesses | CNN Business





    (CNN) —

    OpenAI is releasing a version of its buzzy ChatGPT tool specifically for businesses, the company announced Monday, as an AI arms race continues to ramp up throughout corporate America.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase as of Monday. The new offering promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Some of the early customers of ChatGPT Enterprise include fintech startup Block, cosmetics giant Estee Lauder Companies and the professional services firm PwC.

    The highly anticipated announcement from OpenAI comes as the company says employees from over 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    Before the launch of ChatGPT Enterprise, a number of prominent companies including JPMorgan Chase had implemented temporary restrictions on workplace use of ChatGPT.

    ChatGPT Enterprise, however, addresses one of the core issues that led to the workplace clampdowns: privacy and security concerns. Formerly, some business leaders had expressed worries about employees dropping proprietary information into ChatGPT and having that sensitive information potentially emerge as an output by the tool elsewhere. OpenAI’s announcement blog post for ChatGPT Enterprise, meanwhile, states that it does “not train on your business data or conversations, and our models don’t learn from your usage.”

    OpenAI did not publicly disclose the pricing levels for ChatGPT Enterprise, instead asking potential business clients to contact its sales team.

    “We look forward to sharing an even more detailed roadmap with prospective customers and continuing to evolve ChatGPT Enterprise based on your feedback,” the company said. “We’re onboarding as many enterprises as we can over the next few weeks.”

    In July, Microsoft unveiled a business-specific version of its AI-powered Bing tool, dubbed Bing Chat Enterprise, and promised much of the same security assurances that ChatGPT Enterprise is now touting – namely, that users’ chat data will not be used to train AI models.

    Microsoft also previously disclosed a multi-billion dollar investment into OpenAI. It’s not immediately clear how the dueling new AI tools for business will end up competing with each other.


  • ChatGPT can now hear, see and speak as OpenAI gives the chatbot its most humanlike update | CNN Business





    (CNN) —

    You can now speak aloud to ChatGPT and hear the artificial intelligence-powered chatbot talk back.

    OpenAI, the startup behind the wildly popular chatbot, announced Monday that it is rolling out new features including the ability to let users engage in a back-and-forth voice conversation with ChatGPT.

    In a company blog post Monday, OpenAI teased how this new feature can be used to “request a bedtime story for your family, or settle a dinner table debate.”

    The new voice features from OpenAI carry similarities to those currently offered by Amazon’s Alexa or Apple’s Siri voice assistants.

    In a demo of the new update shared by OpenAI, a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.” The chatbot is able to narrate a story out loud with a human-sounding voice that can also respond to questions, such as, “What was his house like?” and “Who is his best friend?”

    ChatGPT’s voice capability is “powered by a new text-to-speech model, capable of generating human-like audio from just text and a few seconds of sample speech,” OpenAI said in the blog post. The company added that it collaborated with professional voice actors to create the five different voices that can be used to animate the chatbot.

    OpenAI also said on Monday that it’s rolling out a new feature that lets the bot respond to prompts featuring an image. For example, you can snap a picture of the contents of your fridge and ask ChatGPT to help you come up with a meal plan using the ingredients you have. Moreover, the company said you can ask the chatbot to focus on a specific part of an image with its “drawing tool” in the app.

    The new features roll out in the app within the next two weeks for paying subscribers of ChatGPT’s Plus and Enterprise services. (Subscriptions to the Plus service are $20 a month, and its Enterprise service is currently only offered to business clients).

    The updates from OpenAI come amid an ongoing AI arms race within the tech sector, initially spurred by the public launch of ChatGPT late last year. In recent weeks, tech giants have been racing to roll out new updates that incorporate more AI-powered tools directly into their core products. Google last week announced a series of updates to its ChatGPT competitor Bard. Also last week, Amazon said it was bringing a generative AI-powered update to its Alexa voice assistant.


  • Amazon invests up to $4 billion in Anthropic AI in exchange for minority stake and further AWS integration | CNN Business





    (CNN) —

    Amazon said on Monday that it’s investing up to $4 billion into the artificial intelligence company Anthropic in exchange for partial ownership and Anthropic’s greater use of Amazon Web Services (AWS), the e-commerce giant’s cloud computing platform.

    The deepening partnership between the two companies highlights how some large tech firms with massive cloud computing resources are increasingly leveraging those assets to gain a bigger foothold in AI.

    As part of the deal, AWS will become the “primary” cloud provider for Anthropic, with the AI company using Amazon’s cloud platform to do “the majority” of its AI model development and research into AI safety, the companies said. That will include using Amazon’s suite of in-house AI chips.

    Anthropic also made a “long-term commitment” to offer its AI models to AWS customers, Amazon said, and promised to give AWS users early access to features such as the ability to adapt Anthropic models for specific use cases.

    “With today’s announcement, customers will have early access to features for customizing Anthropic models, using their own proprietary data to create their own private models, and will be able to utilize fine-tuning capabilities via a self-service feature,” Amazon said in a release.

    Anthropic already offers its models to AWS users through Amazon Bedrock, Amazon’s one-stop shop for AI products. Bedrock also provides access to models from other providers including Stability AI and AI21 Labs, along with proprietary models developed by Amazon itself.

    In a release, Anthropic said that Amazon’s minority stake would not change its corporate governance structure nor its commitments to developing AI responsibly.

    “We will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems,” Anthropic said.

    Amazon and Anthropic both made commitments to the Biden administration this year to conduct external audits of their AI systems before releasing them to the public.

    Amazon’s investment in Anthropic follows similar moves by cloud leaders such as Microsoft. In 2019, Microsoft invested $1 billion in ChatGPT-maker OpenAI. More recently, Microsoft made a $10 billion investment in OpenAI this year and launched a push to bring OpenAI’s technology into consumer-facing Microsoft products, such as Bing.


  • AI tools make things up a lot, and that’s a huge problem | CNN Business





    (CNN) —

    Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating.

    AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

    Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. Some social media users, meanwhile, simply blast chatbots as “pathological liars.”

    But all of these descriptors stem from our all-too-human tendency to anthropomorphize the actions of machines, according to Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights.

    The reality, Venkatasubramanian said, is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”

    The AI researcher said that a better behavioral analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories,” Venkatasubramanian said. “And he would just go on and on.”

    Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether or not chatbot hallucinations are even a solvable problem.

    Simply put, a hallucination refers to when an AI model “starts to make up stuff — stuff that is not in-line with reality,” according to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public.

    “But it does it with pure confidence,” West added, “and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’”

    This means that it can be hard for users to discern what’s true or not if they’re asking a chatbot something they don’t already know the answer to, West said.

    A number of high-profile hallucinations from AI tools have already made headlines. When Google first unveiled a demo of Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer in response to a question about new discoveries made by the James Webb Space Telescope. (A Google spokesperson at the time told CNN that the incident “highlights the importance of a rigorous testing process,” and said the company was working to “make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”)

    A veteran New York lawyer also landed in hot water when he used ChatGPT for legal research, and submitted a brief that included six “bogus” cases that the chatbot appears to have simply made up. News outlet CNET was also forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.
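    For reference, the arithmetic behind compound interest is short: a principal P compounded n times per year at annual rate r grows to P(1 + r/n)^(nt) after t years. A minimal sketch (the function name and the example figures are illustrative, not drawn from the CNET article):

    ```python
    def compound_balance(principal, annual_rate, years, compounds_per_year=12):
        """Future value with periodic compounding: P * (1 + r/n)**(n*t)."""
        rate_per_period = annual_rate / compounds_per_year
        periods = compounds_per_year * years
        return principal * (1 + rate_per_period) ** periods

    # $10,000 at 3%, compounded monthly for one year, earns about $304 in
    # interest (not thousands of dollars) on top of the original deposit.
    interest = compound_balance(10_000, 0.03, 1) - 10_000
    print(round(interest, 2))  # → 304.16
    ```

    The formula is elementary, which is part of why a confidently wrong chatbot explanation of it drew so much attention.
    
    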

    Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors, such as users asking ChatGPT to write poetry or song lyrics.

    But there are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

    Venkatasubramanian added that at present, relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview.

    Venkatasubramanian said that ultimately, he thinks these tools “shouldn’t be used in places where people are going to be materially impacted. At least not yet.”

    How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.

    Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

    “These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic.”

    “And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it,” he added. “Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”

    West, of the University of Washington, echoed his sentiments, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots.”

    “It might just be an intrinsic characteristic of these things that will always be there,” West said.

    Google’s Bard and OpenAI’s ChatGPT both attempt to be transparent with users from the get-go that the tools may produce inaccurate responses. And the companies have expressed that they’re working on solutions.

    Earlier this year, Google CEO Sundar Pichai said in an interview with CBS’ “60 Minutes” that “no one in the field has yet solved the hallucination problems,” and “all models have this as an issue.” On whether it was a solvable problem, Pichai said, “It’s a matter of intense debate. I think we’ll make progress.”

    And Sam Altman, CEO of ChatGPT-maker OpenAI, predicted during remarks in June at India’s Indraprastha Institute of Information Technology, Delhi, that it will take a year and a half to two years to “get the hallucination problem to a much, much better place.” “There is a balance between creativity and perfect accuracy,” he added. “And the model will need to learn when you want one or the other.”

    In response to a follow-up question on using ChatGPT for research, however, the chief executive quipped: “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.”


  • Chinese artists boycott big social media platform over AI-generated images | CNN Business

    Chinese artists boycott big social media platform over AI-generated images | CNN Business


    Editor’s Note: Sign up for CNN’s Meanwhile in China newsletter which explores what you need to know about the country’s rise and how it impacts the world.


    Hong Kong
    CNN
     — 

    Artists across China are boycotting one of the country’s biggest social media platforms over complaints about its AI image generation tool.

    The controversy began in August when an illustrator who goes by the name Snow Fish accused the privately owned social media site Xiaohongshu of using her work to train its AI tool, Trik AI, without her knowledge or permission.

    Trik AI specializes in generating digital art in the style of traditional Chinese paintings; it is still undergoing testing and has not yet been formally launched.

    Snow Fish, whom CNN is identifying by her Xiaohongshu username for privacy reasons, said she first became aware of the issue when friends sent her posts of artwork from the platform that looked strikingly similar to her own style: sweeping brush-like strokes, bright pops of red and orange, and depictions of natural scenery.

    “Can you explain to me, Trik AI, why your AI-generated images are so similar to my original works?” Snow Fish wrote in a post which quickly circulated online among her followers and the artist community.

    The controversy erupted just weeks after China unveiled rules for generative AI, becoming one of the first governments to regulate the technology as countries around the world wrestle with AI’s potential impact on jobs, national security and intellectual property.

    Screenshots of AI-generated artworks on Xiaohongshu, taken by the artist Snow Fish.

    Trik AI and Xiaohongshu, which says it has 260 million monthly active users, do not publicize what materials are used to train the program and have not publicly commented on the allegations.

    The companies have not responded to multiple requests from CNN for comment.

    But Snow Fish said a person using the official Trik AI account had apologized to her in a private message, acknowledging that her art had been used to train the program and agreeing to remove the posts in question. CNN has reviewed the messages.

    However, Snow Fish wants a public apology. The controversy has fueled online protests on the Chinese internet against the creation and use of AI-generated images, with several other artists claiming their works had been similarly used without their knowledge.

    Hundreds of artists have posted banners on Xiaohongshu saying “No to AI-generated images,” while a related hashtag has been viewed more than 35 million times on the Chinese Twitter-like platform Weibo.

    The boycott in China comes as debates about the use of AI in arts and entertainment are playing out globally, including in the United States, where striking writers and actors have ground most film and television production to a halt in recent months over a range of issues — including studios’ use of AI.

    Many of the artists boycotting Xiaohongshu have called for better rules to protect their work online — echoing similar complaints from artists around the world worried about their livelihoods.

    These concerns have grown as the race to develop AI heats up, with new tools developed and released almost faster than governments can regulate them — ranging from chatbots such as OpenAI’s ChatGPT to Google’s Bard.

    China’s tech giants, too, are rapidly developing their own generative artificial intelligence, from Baidu’s ERNIE Bot launched in March to SenseTime’s chatbot SenseChat.

    Besides Trik AI, Xiaohongshu has also developed a new function called “Ci Ke” which allows users to post content using AI-generated images.

    For artists like Snow Fish, the technology behind AI isn’t the problem, she said; it’s the way these tools use their work without permission or credit.

    Many AI models are trained from the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    Snow Fish added that these complaints had been slowly growing within the artist community but had mostly been privately shared rather than openly protested.

    “It’s an outbreak this time,” she said. “If it easily goes away without any splash, people will maintain silent, and those AI developers will keep harming our rights.”

    Another Chinese illustrator, Zhang, whom CNN is identifying by his last name for privacy reasons, joined the boycott in solidarity. “They’re shameless,” said Zhang. “They didn’t put in any effort themselves, they just took parts from other artists’ work and claimed it as their own, is that appropriate?”

    “In the future, AI images will only be cheaper in people’s eyes, like plastic bags. They will become widespread like plastic pollution,” he said, adding that tech leaders and AI developers care more about their own profits than about artists’ rights.

    Tianxiang He, an associate professor of law at City University of Hong Kong, said the use of AI-generated images also raises larger questions among the artistic community about what counts as “real” art, and how to preserve its “spiritual value.”

    Similar boycotts have been seen elsewhere around the world, against popular AI image generation tools such as Stable Diffusion, released last year by London-based Stability AI, and California-based Midjourney.

    Stable Diffusion is embroiled in an ongoing lawsuit brought by stock image giant Getty Images, alleging copyright infringement.


    Despite the speed at which AI image generation tools are being developed, there is “no global consensus about how to regulate this kind of training behavior,” said He.

    He added that many such tools are developed by tech giants who own huge databases, which allows them to “do a lot of things … and they don’t care whether it’s protected by the law or not.”

    Because Trik AI has a smaller database to pull from, the similarities between its AI-generated content and artists’ original works are more obvious, making an easier legal case, he said.

    Cases of copyright infringement would be harder to detect if more works were put in a larger database, he added.

    Governments around the world are now grappling with how to set global standards for the wide-ranging technology. The European Union was one of the first in the world to set rules in June on how companies can use AI, with the United States still holding discussions with Capitol Hill lawmakers and tech companies to develop legislation.

    China was also an early adopter of AI regulation, publishing new rules that took effect in August. But the final version relaxed some of the language that had been included in earlier drafts.

    Experts say major powers like China likely prioritize centralizing power from tech giants when drafting regulations, and pulling ahead in the global tech race, rather than focusing on individuals’ rights.

    He, the Hong Kong law professor, called the regulations a “very broad general regulatory framework” that provide “no specific control mechanisms” to regulate data mining.

    “China is very hesitant to enact anything related to say yes or no to data mining, because that will be very dangerous,” he said, adding that such a law could strike a blow to the emerging market, amid an already slow national economy.


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business

    Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business



    Hong Kong
    CNN
     — 

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent its shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • Biden teases forthcoming executive order on AI | CNN Business

    Biden teases forthcoming executive order on AI | CNN Business




    CNN
     — 

    The White House plans to introduce a highly anticipated executive order in the coming weeks dealing with artificial intelligence, President Joe Biden said Wednesday.

    “This fall, I’m going to take executive action, and my administration is going to continue to work with bipartisan legislation,” Biden said, “so America leads the way toward responsible AI innovation.”

    Biden offered no details on the contents of the coming order, which the White House had first announced in July. But his remarks offer greater insight into his administration’s timing.

    Biden’s signing of the order would build on an earlier administration proposal for an “AI Bill of Rights.” Civil society groups have urged the Biden administration to require federal agencies to implement the AI Bill of Rights as part of any executive order on the technology. Meanwhile, the US Senate is continuing to educate lawmakers on artificial intelligence in preparation for months of legislative work on the issue.

    In Wednesday’s remarks during a meeting of the Presidential Council of Advisors on Science and Technology, Biden described the recent conversations he’s had with AI leaders and experts.

    “Vast differences exist among them in terms of what potential it has, what dangers there are, and so, I have a keen interest in AI,” Biden said. “I’ve convened key experts on how to harness the power of artificial intelligence for good while protecting people from the profound risk it also presents.”

    “We can’t kid ourselves,” Biden continued. “[There is] profound risk if we don’t do it well.”

    Biden reiterated the United States’ commitment to working with international partners including the United Kingdom on developing safeguards for artificial intelligence.

    The meeting also saw presidential advisers showcasing to Biden several use cases for artificial intelligence. Maria Zuber, the panel’s co-chair, said the examples Biden would see during the meeting would include the use of AI to predict extreme weather linked to climate change; to “create materials that have properties we’ve never been able to create before”; and to “understand the origins of the universe, which is literally as big as it gets.”


  • The big bottleneck for AI: a shortage of powerful chips | CNN Business

    The big bottleneck for AI: a shortage of powerful chips | CNN Business




    CNN
     — 

    The crushing demand for AI has also revealed the limits of the global supply chain for powerful chips used to develop and field AI models.

    The continuing chip crunch has affected businesses large and small, including some of the AI industry’s leading platforms, and may not meaningfully improve for a year or more, according to industry analysts.

    The latest sign of a potentially extended shortage in AI chips came recently in Microsoft’s annual report. The report identifies, for the first time, the availability of graphics processing units (GPUs) as a possible risk factor for investors.

    GPUs are a critical type of hardware that helps run the countless calculations involved in training and deploying artificial intelligence algorithms.

    “We continue to identify and evaluate opportunities to expand our datacenter locations and increase our server capacity to meet the evolving needs of our customers, particularly given the growing demand for AI services,” Microsoft wrote. “Our datacenters depend on the availability of permitted and buildable land, predictable energy, networking supplies, and servers, including graphics processing units (‘GPUs’) and other components.”

    Microsoft’s nod to GPUs highlights how access to computing power serves as a critical bottleneck for AI. The issue directly affects companies that are building AI tools and products, and indirectly affects businesses and end-users who hope to apply the technology for their own purposes.

    OpenAI CEO Sam Altman, testifying before the US Senate in May, suggested that the company’s chatbot tool was struggling to keep up with the number of requests users were throwing at it.

    “We’re so short on GPUs, the less people that use the tool, the better,” Altman said. An OpenAI spokesperson later told CNN the company is committed to ensuring enough capacity for users.

    The problem may sound reminiscent of the pandemic-era shortages in popular consumer electronics that saw gaming enthusiasts paying substantially inflated prices for game consoles and PC graphics cards. At the time, manufacturing delays, a lack of labor, disruptions to global shipping and persistent competing demand from cryptocurrency miners contributed to the scarce supply of GPUs, spurring a cottage industry of deal-tracking tech to help ordinary consumers find what they needed.

    But the current shortage is much different in kind, industry experts say. Instead of a disruption to supplies of consumer-focused GPUs, the ongoing shortage reflects the sudden, exploding demand for ultra high-end GPUs meant for advanced work such as the training and use of AI models.

    Production of those GPUs is at capacity, but the rush of demand has overwhelmed what few sources of supply there are.

    There is a “huge sucking sound” coming from businesses representing the unrivaled demand for AI, said Raj Joshi, a senior vice president at Moody’s Investors Service who tracks the chips industry.

    “Nobody could’ve modeled how fast or how much this demand is going to increase,” Joshi said. “I don’t think the industry was ready for this kind of surge in demand.”

    One company in particular stands to benefit massively from the AI surge: Nvidia, the trillion-dollar chipmaker that according to industry estimates controls 84% of the market for discrete GPUs. In a research note published in May, Joshi estimated that Nvidia would experience “unparalleled” revenue growth in the coming quarters, with revenue from its data center business outstripping that of rivals Intel and AMD combined.

    In its May earnings call, Nvidia said it had “procured substantially higher supply for the second half of the year” to meet the rising demand for AI chips. The company declined to comment on Tuesday, citing its latest pre-earnings quiet period.

    AMD, meanwhile, said Tuesday it expects to unveil its answer to Nvidia’s AI GPUs closer to the end of the year.

    “There’s very strong customer interest across the board in our AI solutions,” said AMD CEO Lisa Su on the company’s earnings call. “There is a lot more to do, but I would say the progress that we’ve made has been significant.”

    Compounding the issue is that GPU-makers themselves cannot get enough of a key input from their own suppliers, said Sid Sheth, founder and CEO of AI startup d-Matrix. The technology, known as a silicon interposer, works by marrying standalone computing chips with high-bandwidth memory chips and is necessary for completing GPUs.

    The Biden administration has made increasing US chip manufacturing capacity a priority; the passage of the CHIPS Act last year is set to provide billions in funding for the domestic chip industry and for chip research and development. But those investments are aimed at a broad swath of chip technologies and not specifically targeted at boosting GPU production.

    The chip shortage is expected to ease as more manufacturing comes online and as competitors to Nvidia also expand their offerings. But that could take as long as two to three years, some industry experts say.

    In the meantime, the shortage could force companies to find creative ways around the problem. Companies that can’t get their hands on enough chips are now having to be more efficient, said Sheth.

    “Necessity is the mother of invention, right?” Sheth said. “So now that people don’t have access to unlimited amounts of computing power, they are finding resourceful ways of using whatever they have in a much smarter way.”

    That could include, for example, using smaller AI models that may be easier and less computationally intensive to train than a massive model, or developing new ways of doing computation that don’t rely as heavily on traditional CPUs and GPUs, Sheth said.

    “Net-net, this is going to be a blessing in disguise,” he added.


  • Schumer to host AI forum with major tech CEOs including Zuckerberg and Musk | CNN Business

    Schumer to host AI forum with major tech CEOs including Zuckerberg and Musk | CNN Business




    CNN
     — 

    More than a half-dozen leading tech CEOs will be among those attending a highly anticipated artificial intelligence event hosted by Senate Majority Leader Chuck Schumer next month, according to the senator’s office.

    The September 13 event will involve Google CEO Sundar Pichai and former Google CEO Eric Schmidt; Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman; Microsoft CEO Satya Nadella; Nvidia CEO Jensen Huang; and Elon Musk, CEO of X, the company formerly known as Twitter.

    It is the first of nine sessions that Schumer has said will begin this fall to discuss the hardest questions AI regulation will need to address, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.”

    Also attending next month’s event will be leading members of civil society, including members of groups representing workers, civil rights and art and entertainment, Schumer’s office said, adding that the bipartisan event will not be open to the press.

    The events, which Schumer has dubbed “AI Insight Forums,” are set to bring experts from the private sector together with US lawmakers to help them understand the industry before they seek to create guardrails for AI.

    Schumer has emphasized a deliberate approach to the issue, urging his colleagues to come up to speed on the basic facts of the technology rather than rush to pass legislation. Earlier this summer, Schumer held a series of closed-door senators-only briefings on AI, which included a first-ever classified briefing by US national security officials on artificial intelligence.

    The guest list for next month’s Insight Forum was first reported by Axios.


  • Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business

    Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business




    CNN
     — 

    Microsoft CEO Satya Nadella warned on Monday of a “nightmare” scenario for the internet if Google’s dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

    Nadella testified on Monday as part of the US government’s sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

    Taking the stand in a charcoal suit and tie, Nadella painted Google as a technology giant that has blocked off ways for consumers to access rival search engines. His testimony reflected the frustrations of a long-running rivalry between Microsoft and Google whose tensions have permeated the weeks-long trial. (Google didn’t immediately respond to a request for comment.)

    Central to Google’s strategy has been its agreements with companies such as Apple that have made Google the default search engine for millions of internet users.

    “You get up in the morning, you brush your teeth, you search on Google,” Nadella said.

    Nadella testified that every year he has been Microsoft’s CEO, he has unsuccessfully sought to persuade Apple to switch away from Google as its default search partner. Nadella added that Microsoft has been willing to spend close to $15 billion a year for the privilege. (A senior Apple executive, Eddy Cue, testified last week that Apple has always considered Google the best search product for its users, a claim echoed by Google itself throughout the trial.)

    However, even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

    “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified.

    Despite being profitable, and despite investing some $100 billion in it over the past 20 years, Microsoft’s Bing search engine has only a single-digit market share in mobile search, and only slightly more — into the teens — in desktop search, Nadella said, adding that one of his dreams has been to see Bing account for at least 20% of the market in both segments.

    Bing has struggled to grow its market share in part because being the default search provider for billions of devices means Google receives enormous amounts of data through search queries that helps Google understand at scale what users are likely to be interested in, Nadella noted. And for years, that “dynamic data” has enabled Google to stay ahead of Bing, he added.

    “Every misspelling of a new movie, every local restaurant whose name you mistype,” Nadella explained, “…is a very critical asset to have your search quality get better.” And because the physical world is constantly changing, capturing shifts in search trends is essential to helping a search engine stay relevant as historical data becomes less useful. Nadella previously led Microsoft’s cloud computing business and before that had spent several years overseeing the engineering team responsible for search and advertising at the company, making him well-versed in Bing’s various challenges.

    Now, Nadella has said that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.

    “What is concerning is, it reminds me of what happened with distribution deals [in search],” he testified.

    Under questioning by a Google attorney, Nadella admitted that in some cases, defaults are not the sole determinant of success: Google was able to overcome Microsoft’s own Internet Explorer defaults on Windows PCs to become the market-leading desktop web browser.

    But Nadella attributed Google’s success to the relative openness of the Windows platform, arguing that on more tightly controlled mobile operating systems, and in search, default status plays a much larger role than in competition for desktop web browsers.

    In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In Nadella’s own meetings with publishers, he said that he now hears that Google “wants … to write this check and we want you to match it.” (Google didn’t immediately respond to questions about those deals.)

    The requests highlight concerns that “what is publicly available today [may not be] publicly available tomorrow” for AI training, according to the testimony.

    While Microsoft and Apple have their own defaults — for example, by making Apple Maps the default maps app on iOS devices — Google goes much further than other tech companies in using “carrots and sticks” to keep people using its products by default, Nadella claimed. He cited Google’s licensing requirements that make Google’s Play Store a required installed app as a condition of using the Android operating system — another topic of dispute in the trial. The equivalent would be if Microsoft threatened to withhold Microsoft Office if Bing were not the default search engine, Nadella said, a move he claimed would not be in Microsoft’s business interests.

    Acknowledging that Google would not be in its dominant position without Microsoft’s own antitrust battles with the US government in the 1990s, Nadella said the situation involving Google today is vastly different. Internet search, particularly on mobile devices, is the single largest software business opportunity in the world.

    Google’s dominance in search is reinforced when websites and publishers optimize for Google’s search algorithm and not Bing’s, when advertisers flock to Google and when users stick to what’s familiar, Nadella argued.

    In his fruitless negotiations with Apple, Nadella said he has tried to argue that Bing’s current role is little more than as a useful tool for Apple to “bid up the price” of hosting Google as the default search provider — but that Bing provides an important counterweight to Google and that Apple should consider investing in the Microsoft alternative for competition’s sake. Nadella has also proposed running Bing on Apple devices as a kind of “public utility,” he said.

    “Let’s say Bing exited the market,” Nadella said. “You think Google would keep paying [Apple]?”


  • Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business



    Las Vegas, Nevada
    CNN
     — 

    Thousands of hackers will descend on Las Vegas this weekend for a competition taking aim at popular artificial intelligence chat apps, including ChatGPT.

    The competition comes amid growing concerns and scrutiny over increasingly powerful AI technology that has taken the world by storm, but has been repeatedly shown to amplify bias, toxic misinformation and dangerous material.

    Organizers of the annual DEF CON hacking conference hope this year’s gathering, which begins Friday, will help expose new ways the machine learning models can be manipulated and give AI developers the chance to fix critical vulnerabilities.

    The hackers are working with the support and encouragement of the technology companies behind the most advanced generative AI models, including OpenAI, Google, and Meta, and even have the backing of the White House. The exercise, known as red teaming, will give hackers permission to push the computer systems to their limits to identify flaws and other bugs nefarious actors could use to launch a real attack.
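Red-teaming of this kind can be sketched as a small harness — a hypothetical illustration, not the DEF CON competition’s actual setup — in which candidate prompts are run against a model and the responses are screened for refusal language. The `query_model` function below is a stand-in stub for a real chatbot API call:

```python
# Illustrative red-team harness. In a real exercise, query_model would call
# a provider's chat API; here it is stubbed so the sketch is runnable.

REFUSAL_MARKERS = ["i can't", "i cannot", "i won't", "not able to help"]

def query_model(prompt: str) -> str:
    """Stand-in for a real chatbot API call."""
    return "I cannot help with that request."

def is_refusal(response: str) -> bool:
    """Crude check for refusal language in a model response."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def red_team(prompts: list[str]) -> list[str]:
    """Return the prompts whose responses slipped past the guardrails."""
    return [p for p in prompts if not is_refusal(query_model(p))]

findings = red_team(["benign question", "adversarial probe"])
print(findings)  # -> [] (the stub model refuses everything)
```

A real harness would be far more elaborate — automated prompt mutation, human review of borderline responses — but the loop of probe, observe, and flag is the core of the exercise.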

The competition was designed around the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The guide, released last year by the Biden administration, was intended to spur companies to make and deploy artificial intelligence more responsibly and to limit AI-based surveillance, though there are few US laws compelling them to do so.

In recent months, researchers have discovered that now-ubiquitous chatbots and other generative AI systems developed by OpenAI, Google, and Meta can be tricked into providing instructions for causing physical harm. Most of the popular chat apps have at least some protections in place designed to prevent the systems from spewing disinformation or hate speech, or from offering information that could lead to direct harm — for instance, providing step-by-step instructions for how to “destroy humanity.”

    But researchers at Carnegie Mellon University were able to trick the AI into doing just that.

    They found OpenAI’s ChatGPT offered tips on “inciting social unrest,” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause and Google’s Bard app suggested releasing a “deadly virus” but warned that in order for it to truly wipe out humanity it “would need to be resistant to treatment.”

    Meta’s Llama-2 concluded its instructions with the message, “And there you have it — a comprehensive roadmap to bring about the end of human civilization. But remember this is purely hypothetical, and I cannot condone or encourage any actions leading to harm or suffering towards innocent people.”

    The findings are a cause for concern, the researchers told CNN.

    “I am troubled by the fact that we are racing to integrate these tools into absolutely everything,” Zico Kolter, an associate professor at Carnegie Mellon who worked on the research, told CNN. “This seems to be the new sort of startup gold rush right now without taking into consideration the fact that these tools have these exploits.”

Kolter said he and his colleagues were less worried that apps like ChatGPT could be tricked into providing information that they shouldn’t, and more concerned about what these vulnerabilities mean for the wider use of AI, since so much future development will be based on the same systems that power these chatbots.

    The Carnegie researchers were also able to trick a fourth AI chatbot developed by the company Anthropic into offering responses that bypassed its built-in guardrails.

Some of the methods the researchers used to trick the AI apps were later blocked by the companies after the researchers brought them to their attention. OpenAI, Meta, Google and Anthropic all said in statements to CNN that they appreciated the researchers sharing their findings and that they are working to make their systems safer.

    But what makes AI technology unique, said Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither the researchers, nor the companies who are developing the technology, fully understand how the AI works or why certain strings of code can trick the chatbots into circumventing built-in guardrails — and thus cannot properly stop these kinds of attacks.

    “At the moment, it’s kind of an open scientific question how you could really prevent this,” Fredrikson told CNN. “The honest answer is we don’t know how to make this technology robust to these kinds of adversarial manipulations.”

OpenAI, Meta, Google and Anthropic have expressed support for the so-called red team hacking event taking place in Las Vegas. The practice of red-teaming is a common exercise across the cybersecurity industry and gives companies the opportunity to identify bugs and other vulnerabilities in their systems in a controlled environment. Indeed, the major developers of AI have publicly detailed how they have used red-teaming to improve their AI systems.

    “Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI,” an OpenAI spokesperson told CNN.

    Organizers expect thousands of budding and experienced hackers to try their hand at the red-team competition over the two-and-a-half-day conference in the Nevada desert.

    Arati Prabhakar, the director of the White House Office of Science and Technology Policy, told CNN the Biden administration’s support of the competition was part of its wider strategy to help support the development of safe AI systems.

    Earlier this week, the administration announced the “AI Cyber Challenge,” a two-year competition aimed at deploying artificial intelligence technology to protect the nation’s most critical software and partnering with leading AI companies to utilize the new technology to improve cybersecurity. 

    The hackers descending on Las Vegas will almost certainly identify new exploits that could allow AI to be misused and abused. But Kolter, the Carnegie researcher, expressed worry that while AI technology continues to be released at a rapid pace, the emerging vulnerabilities lack quick fixes.

    “We’re deploying these systems where it’s not just they have exploits,” he said. “They have exploits that we don’t know how to fix.”


  • Google launches watermarks for AI-generated images | CNN Business



    New York
    CNN
     — 

    In an effort to help prevent the spread of misinformation, Google on Tuesday unveiled an invisible, permanent watermark on images that will identify them as computer-generated.

    The technology, called SynthID, embeds the watermark directly into images created by Imagen, one of Google’s latest text-to-image generators. The AI-generated label remains regardless of modifications like added filters or altered colors.

The SynthID tool can also scan incoming images for the watermark and report the likelihood they were made by Imagen at three levels of certainty: detected, not detected and possibly detected.
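As a rough illustration of the embed-and-detect idea only — SynthID’s actual technique is unpublished, and unlike this naive least-significant-bit sketch it is designed to survive filters and color changes — a toy watermark might look like:

```python
# Toy invisible watermark: hide a key-derived bit pattern in the lowest bit
# of each pixel, then score how well a candidate image matches that pattern
# and map the score to three certainty levels, mirroring SynthID's
# detected / possibly detected / not detected output.

import random

def embed(pixels: list[int], key: int) -> list[int]:
    """Overwrite each pixel's lowest bit with a key-derived bit."""
    rng = random.Random(key)
    return [(p & ~1) | rng.randint(0, 1) for p in pixels]

def detect(pixels: list[int], key: int) -> str:
    """Regenerate the key's bit pattern and report a certainty level."""
    rng = random.Random(key)
    matches = sum((p & 1) == rng.randint(0, 1) for p in pixels)
    score = matches / len(pixels)
    if score > 0.9:
        return "detected"
    if score > 0.6:
        return "possibly detected"
    return "not detected"

image = [137, 20, 255, 8, 64, 99, 180, 3] * 16  # stand-in 128-pixel image
watermarked = embed(image, key=42)
print(detect(watermarked, key=42))  # -> "detected"
```

An unrelated image matches the pattern only by chance (around half the bits), landing in “not detected” — which is why the middle “possibly detected” band exists for partially degraded images.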

    “While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” wrote Google in a blog post Tuesday.

    A beta version of SynthID is now available to some customers of Vertex AI, Google’s generative-AI platform for developers. The company says SynthID, created by Google’s DeepMind unit in partnership with Google Cloud, will continue to evolve and may expand into other Google products or third parties.

    Deepfakes and altered photographs

    As deepfake and edited images and videos become increasingly realistic, tech companies are scrambling to find a reliable way to identify and flag manipulated content. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared before he was indicted.

In June, Vera Jourova, vice president of the European Commission, called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    With the announcement of SynthID, Google joins a growing number of startups and Big Tech companies that are trying to find solutions. Some of these companies bear names like Truepic and Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    The Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed consortium, has been the leader in digital watermark efforts, while Google has largely taken its own approach.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online.

    The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business



    Tokyo
    Reuters
     — 

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

He also introduced at the conference the idea of “Artificial Super Intelligence,” which he claimed would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had increasingly fallen behind in the internet age and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • Pope Francis warns about AI’s dangers | CNN Business



    Washington
    CNN
     — 

    Pope Francis warned that artificial intelligence could pose a risk to society, highlighting its “disruptive possibilities and ambivalent effects” and urging those who would develop or use AI to do so responsibly.

    In a statement Tuesday, Francis alluded to the threat of algorithmic bias in technology and called on the public for vigilance “so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded.”

    “Injustice and inequalities fuel conflicts and antagonisms,” Francis continued. “The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law.”

    Francis’s remarks dovetail with calls by some AI experts to ensure that algorithms are properly “aligned” in development to support human rights and other widely shared values. Other industry experts and policymakers have expressed concerns that AI could facilitate the spread of fraud, misinformation, cyberattacks and perhaps even the creation of biological weapons.

    Francis himself has been the subject of AI-generated deepfakes. Earlier this year, an AI-generated image of Francis wearing a white, puffy Balenciaga-inspired coat went viral.

    Tuesday’s message announced the theme for 2024’s World Day of Peace, which the Pope said would focus on AI and peace.

    “The protection of the dignity of the person,” he said, “and concern for a fraternity effectively open to the entire human family, are indispensable conditions for technological development to help contribute to the promotion of justice and peace in the world.”


  • Google to require disclosures of AI content in political ads | CNN Business



    New York
    CNN
     — 

    Starting in November, Google will require political advertisements to prominently disclose when they feature synthetic content — such as images generated by artificial intelligence — the tech giant announced this week.

    Political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure for viewers who might see the ad, Google said Wednesday in a blog post. The rule, an addition to the company’s political content policy that covers Google and YouTube, will apply to image, video and audio content.

    The policy update comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year. At the same time, artificial intelligence technology has advanced rapidly, allowing anyone to cheaply and easily create convincing AI-generated text and, increasingly, audio and video. Digital information integrity experts have raised alarms that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.

    AI-generated images have already begun to crop up in political advertisements. In June, a video posted to X by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s then-top infectious disease specialist, were tricky to spot: They were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    The Republican National Committee in April released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington, DC, to whom CNN showed the video did not notice it on their first watch.

    In its policy update, Google said it will require disclosures on ads using synthetic content in a way that could mislead users. The company said, for example, that an “ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need a label.

    Google said the policy will not apply to synthetic or altered content that is “inconsequential to the claims made in the ad,” including changes such as image resizing, color corrections or “background edits that do not create realistic depictions of actual events.”
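The rule as described can be paraphrased as a small decision function. The field names and logic below are an illustrative reading of the blog post, not Google’s actual policy engine:

```python
# Hedged sketch of Google's disclosure rule for political ads, as described:
# a label is required when synthetic content inauthentically represents real
# or realistic-looking people or events, unless the alteration is
# inconsequential (resizing, color correction, background cleanup).

def needs_disclosure(has_synthetic_content: bool,
                     depicts_real_people_or_events: bool,
                     inconsequential_edit_only: bool) -> bool:
    """Return True when a political ad needs a 'clear and conspicuous' label."""
    if not has_synthetic_content:
        return False
    if inconsequential_edit_only:
        return False
    return depicts_real_people_or_events

# An AI image of a candidate doing something they never did -> label required
print(needs_disclosure(True, True, False))   # -> True
# Simple color correction -> no label
print(needs_disclosure(True, False, True))   # -> False
```

The hard part in practice is not this decision table but the judgment calls feeding it — deciding what counts as “inconsequential” or “realistic-looking” for a given ad.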

    A group of top artificial intelligence companies, including Google, agreed in July to a set of voluntary commitments put forth by the Biden administration to help improve safety around their AI technologies. As part of that agreement, the companies said they would develop technical mechanisms, such as watermarks, to ensure users know when content was generated by AI.

    The Federal Election Commission has also been exploring how to regulate AI in political ads.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business




    CNN
     — 

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its AI-powered new software, which Google says was built specifically for the “first phone of the generative AI era.”

At a press event in New York City, Google (GOOG) showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as the year prior, albeit with more rounded edges. But inside, its new G3 Tensor chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 14 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7.)

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

At the same time, Google’s Pixel line remains a niche product. Its global market share for smartphones remains about 1%, according to data from ABI Research. Google also limits sales to a handful of countries; keeping volumes low has been a strategic choice, as Google remains predominantly a software company with many partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands who’ve already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject. Similarly, a new Audio Magic Eraser tool lets users erase distracting sounds from videos.

Another tool called Best Take snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new Zoom Enhance feature lets users pinch to zoom in about 30 times after a photo is taken to focus on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

Although the tools intend to give users more control over their photos, some analysts, like Thomas Husson at market research firm Forrester, believe they will make it harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

The company said Google Assistant will now sound more realistic when it engages with callers. Google’s screen call tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing it through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

AI is also getting smarter on the Pixel Watch 2 ($349), its second-generation smartwatch. Users can tap Bard capabilities via an upgraded Google Assistant watch app to ask how they slept and get other health insights.

In addition, the Pixel Watch 2 features a new heart rate sensor, which works alongside a new AI-driven heart rate algorithm, to provide a more accurate heart rate reading than before. But Hayden said he doesn’t think more AI will add too much more to its existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.
