ReportWire

Tag: iab-artificial intelligence

  • Microsoft, Google post strong quarterly sales growth as Big Tech continues its comeback | CNN Business


    New York (CNN) —

    Big tech companies are continuing a turnaround from last year, as Alphabet, Microsoft and Snap kicked off earnings season with strong sales results for the quarter ended in September.

    Google parent company Alphabet on Tuesday reported quarterly sales of $76.69 billion, up 11% from the same period in the prior year. The company also posted profits of $19.69 billion for the quarter.

    Meanwhile, Microsoft posted 13% year-on-year sales growth to $56.5 billion, also beating expectations. Microsoft’s quarterly profits hit $22.3 billion, up 27% from the year-ago period.

    Snapchat parent Snap on Tuesday reported a return to sales growth in the September quarter, after two consecutive quarters of declining sales. The company reported revenue of nearly $1.2 billion, an increase of 5% from the same period in the prior year and ahead of analysts’ projections. The company reported a net loss of $368 million.

    The strong results come after Microsoft, Alphabet, Snap and other tech companies carried out mass layoffs and other cost cutting moves over the past year following a difficult 2022 when advertisers and other clients cut back on their spending due to concerns over the macroeconomic environment.

    Despite beating Wall Street’s sales expectations, shares of Alphabet (GOOGL) and Snap (SNAP) each dipped around 5% in after-hours trading following the reports, although Snap’s quickly regained some ground. Microsoft (MSFT) shares gained around 4% in after-hours trading.

    “Q3 tech season has been quite strong thus far,” Tejas Dessai, a research analyst at investment fund GlobalX, said in a statement. “These numbers clearly defy concerns of near term economic weakness looming.”

    Google’s advertising business generated quarterly revenue of $59.6 billion, up from $54.5 billion in the prior year. YouTube ads, meanwhile, garnered some $7.9 billion in revenue, up roughly 12% year-over-year.

    YouTube Shorts, the company’s TikTok competitor, hit a milestone 70 billion daily views last quarter, Alphabet CEO Sundar Pichai said on a call with analysts Tuesday afternoon.

    Google’s cloud business, however, reported revenue of $8.41 billion — missing analysts’ estimates.

    Jesse Cohen, a senior analyst at Investing.com, attributed Alphabet’s after-hours stock fall to the “relatively weak performance in its Google cloud platform, which is at risk of falling further behind [Microsoft’s] Azure and [Amazon’s] AWS.” Still, despite taking a hit in 2022 amid a broader tech sector downturn, shares for Alphabet have climbed roughly 56% since the start of 2023, beating the tech-heavy Nasdaq index.

    Google’s report comes as the tech giant is in the antitrust hot seat. US prosecutors officially opened a landmark antitrust trial against Google last month with sweeping allegations that the company engaged in anticompetitive behavior to maintain its dominance over search. (As the legal showdown rages on, Google has continued to deny allegations that it operated illegally.)

    Google also confirmed last month plans to lay off hundreds of staffers in its recruiting division, as it continues cost cutting efforts in some areas. These more targeted layoffs came after Alphabet in January cut around 12,000 jobs — about 6% of its workforce.

    Still, Google has signaled that it remains committed to investing heavily in generative artificial intelligence technology. Last month, Google rolled out a major expansion of its Bard AI chatbot tool.

    “As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts,” Pichai said on the call. “We remain committed to durably re-engineering our cost base in order to help create capacity for these investments, in support of long-term sustainable financial value.”

    Microsoft’s recent investments in AI technology helped boost its sales in the September quarter, especially in its key cloud division. Sales from Microsoft’s “intelligent cloud” business — its biggest revenue driver — grew 19% from the year-ago quarter to $24.3 billion.

    Revenue from the company’s “productivity and business processes” business, which includes LinkedIn and Office commercial and consumer products, also grew 13% year-over-year to $18.6 billion.

    “Microsoft is firing on all cylinders and AI is clearly driving growth,” Cohen said in a research note following the company’s report. “The results indicated that artificial intelligence products are stimulating sales and already contributing to top and bottom-line growth.”

    But economic jitters among consumers appear to still have some impact on the company’s bottom line. Devices revenue, which includes sales of laptops, tablets and Xbox consoles, decreased 22% year-over-year, despite a 3% sales increase in the overall “more personal computing” segment. Ongoing concerns about a potential economic slowdown could continue to weigh on the company as it heads into the crucial holiday device sales season.

    The report is Microsoft’s first since the company closed its $69 billion acquisition of “Call of Duty” maker Activision Blizzard earlier this month. While the deal didn’t factor into this quarter’s results, it’s expected to supercharge the company’s gaming business.

    “Microsoft now controls 30 game studios and some of the most well-known games across the industry,” Edward Jones analyst Logan Purk said in a research note earlier this month. “With a massive cloud network and now a compelling library of games, Microsoft has a leg up on peers” in gaming, he said.

    Following the Activision takeover, “we’re looking forward to one of our strongest first-party holiday [game] lineups ever, including new titles like Call of Duty Modern Warfare 3,” CEO Satya Nadella said on an analyst call Tuesday. The company said it expects roughly $400 million of operating expenses in the fourth quarter to come as a result of the acquisition.

    Snap said its sales growth was driven in part by its ongoing efforts to revamp its advertising technology after changes to Apple’s app tracking policies dealt a blow to the business models of Snapchat, Facebook and other platforms.

    “We are focused on improving our advertising platform to drive higher return on investment for our advertising partners, and we have evolved our go-to-market efforts to better serve our partners and drive customer success,” CEO Evan Spiegel said in a statement.

    Snap also reported that it now has 406 million daily active users, up 12% compared to the year-ago quarter. And time spent watching Spotlight — Snapchat’s TikTok clone — grew 200% year-over-year, according to the company.

    The company also recently announced that it had reached more than 5 million subscribers to its Snapchat+ subscription program, a key effort to diversify its revenue.

    Snap said Tuesday that its chief operating officer, Jerry Hunter, plans to retire. Hunter, who spent seven years at the company, will step down from his role as of the end of the month, but will remain at the company until July 1, 2024, to support the transition.

    The company noted that some advertisers temporarily paused their spending following the outbreak of the Israel-Hamas war. Because of the “unpredictable nature” of the war, Snap declined to provide formal guidance for the fourth quarter, but said its internal forecast assumes year-over-year quarterly revenue growth between 2% and 6%.


  • New York City unveils an ‘artificial intelligence action plan’ | CNN Business


    (CNN) —

    The same New York City administration that launched a “Rat Action Plan” is back with an “Artificial Intelligence Action Plan.”

    Mayor Eric Adams on Monday unveiled a citywide AI “action plan” that pledged, in broad brushstrokes, to evaluate AI tools and associated risks, boost AI skills among city employees and support “the responsible implementation of these technologies to improve quality of life for New Yorkers,” according to a statement from the mayor’s office.

    The city’s 51-page AI action plan establishes a series of steps the city will take in the coming years to help better understand and responsibly implement the technology that has taken the tech sector and broader business world by storm in recent months.

    While government use of automated technologies has often courted controversy, New York City’s approach to AI, so far, seems focused on laying a framework for future AI use cases and on engaging with outside experts and the public.

    The first step listed in the city’s AI action plan is establishing an “AI Steering Committee” of city agency stakeholders. The document goes on to list nearly 40 “actions,” with 29 of those set to be started or completed within the next year. The city said it will publish an annual AI progress report to communicate the city’s updates and implementation of the plan.

    Also on Monday, city officials said the government was piloting the first citywide AI-powered chatbot to help business owners navigate operating and growing businesses in New York City. The AI chatbot, already available in beta on the official city of New York website, was trained on information from more than 2,000 NYC Business web pages.

    The chatbot uses Microsoft’s Azure AI services, per a disclaimer on the tool.

    In a statement announcing the AI action plan, Mayor Adams acknowledged “the potential pitfalls and associated risks these technologies present,” and pledged to be “clear-eyed” about these.

    The mayor also expressed hope that the action plan will “strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”


  • Hurricane Idalia and Labor Day could send gas prices and inflation higher | CNN Business



    A version of this story first appeared in CNN Business’ Before the Bell newsletter. Not a subscriber? You can sign up right here. You can listen to an audio version of the newsletter by clicking the same link.


    New York (CNN) —

    Labor Day — one of the busiest driving holidays in the US — is on the horizon, and so is Hurricane Idalia. That’s potentially bad news for gas prices.

    The storm, which is expected to make landfall in Florida as a Category 3 hurricane on Wednesday, could bring 100 mile-per-hour winds and flooding that extends hundreds of miles up the east coast. The impact could take gasoline refinery facilities offline and may limit some Gulf oil production and supplies. Plus, demand for gas is expected to surge as residents of the impacted areas evacuate.

    “Idalia… could pose risk to oil and gas output in the US Gulf,” wrote the Nasdaq Advisory Services Energy Team.

    The storm is expected to make landfall as drivers nationwide load into their vehicles for the Labor Day weekend, pushing up the demand for gasoline even further.

    Altogether, it means the price of oil and gasoline could remain elevated well into the fall.

    Generally, summer demand for oil tends to wane in September, but so does supply as refineries shift from summer fuels to “oxygenated” winter fuels, said Louis Navellier of Navellier and Associates. Since the 1990s, the US has required manufacturers to include more oxygen in their gasoline during the colder months to prevent excessive carbon monoxide emissions.

    With the storm approaching, that trend may not play out.

    What’s happening: Gas prices are already at $3.82 a gallon. That’s the second highest price for this time of year since at least 2004, according to Bespoke Investment Group. (The only time the national average has been higher for this period was last summer, when prices hit $3.85 a gallon).

    Geopolitical tensions have been supporting high oil and gas prices for some time. Recently, increased crude oil imports into China, production cuts by Russia and Saudi Arabia and extreme heat set off a late-summer spike in gas prices. And the threat of powerful hurricanes could send them even higher.

    Analysts at Citigroup have warned that this hurricane season could seriously impact power supplies.

    “Two Category 3 or higher hurricanes landing on US shores could massively disrupt supplies for not weeks but months,” Citigroup analysts wrote in a note last week. In 2005, for example, gas prices surged by 46% between Memorial Day and Labor Day because of the landfall of Hurricane Katrina, according to Bespoke.

    What it means: The Federal Reserve and central banks around the world have been fighting to bring down stubbornly high inflation for more than a year. This week we’ll get some highly awaited economic data: The Fed’s preferred inflation gauge, the Personal Consumption Expenditures index, is due out on Thursday. But the task of inflation-busting is a lot more difficult when energy prices are high, and it’s even harder when they’re on the rise.

    The PCE price index uses a complicated formula to determine how much weight to give to energy prices each month, but they typically comprise a significant chunk of the headline inflation rate.

    “Crude oil price remains elevated, even after the surge at the start of the Russia-Ukraine War,” said Andrew Woods, oil analyst at Mintec, a market intelligence firm. “Energy prices have been a major contributor to persistently high inflation in the US, so the crude oil price will remain a watch-out factor for future inflation.”

    High oil and gas prices are one of the largest contributing factors to inflation. That’s bad news for drivers but tends to be great for the energy industry, as oil prices and energy stocks are closely interlinked.

    Energy stocks were trading higher on Monday. The S&P 500 energy sector was up around 0.75%. Exxon Mobil (XOM) was 0.85% higher, BP (BP) was up 1.36% and Chevron (CVX) was up 0.75%.

    OpenAI will release a version of its popular ChatGPT tool made specifically for businesses, the company announced on Monday.

    OpenAI unveiled the new service, dubbed “ChatGPT Enterprise,” in a company blog post and said it will be available to business clients for purchase immediately.

    The new offering, reports my colleague Catherine Thorbecke, promises to provide “enterprise-grade security and privacy” combined with “the most powerful version of ChatGPT yet” for businesses looking to jump on the generative AI bandwagon.

    “We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the blog post said. “Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.”

    Fintech startup Block, cosmetics giant Estee Lauder and professional services firm PwC have already signed on as customers.

    The highly anticipated announcement from OpenAI comes as the company says employees at more than 80% of Fortune 500 companies have already begun using ChatGPT since it launched publicly late last year, according to its analysis of accounts associated with corporate email domains.

    A multitude of leading newsrooms, meanwhile, have recently injected code into their websites that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. CNN’s Reliable Sources has found that CNN, The New York Times, Reuters, Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN and the Gothamist, among others, have taken the step to shield themselves.
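    In practice, the “code” publishers use for this is typically an entry in the site’s robots.txt file naming OpenAI’s documented GPTBot user-agent. A minimal sketch of how such a rule behaves, checked with Python’s standard urllib.robotparser (the URL and the second crawler name are placeholders):

```python
from urllib import robotparser

# A robots.txt policy like the ones these outlets deployed:
# it disallows OpenAI's "GPTBot" crawler while leaving others alone.
policy = """\
User-agent: GPTBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(policy.splitlines())

# GPTBot is blocked from every path; unlisted crawlers default to allowed.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

    Compliance with robots.txt is voluntary on the crawler’s part, which is why it works only for crawlers, like GPTBot, that publicly commit to honoring it.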

    American Airlines just got smacked with the largest-ever fine for keeping passengers waiting on the tarmac during multi-hour delays.

    The Department of Transportation is levying the $4.1 million fine, which it called in a statement “the largest civil penalty that the Department has ever assessed,” for lengthy tarmac delays of 43 flights that affected more than 5,800 passengers. The flights occurred between 2018 and 2021, reports CNN’s Gregory Wallace.

    In the longest of the delays, passengers sat aboard a plane in Texas in August 2020 for six hours and three minutes. The 105-passenger flight had landed after being diverted from the Dallas-Fort Worth International Airport due to severe weather, with the DOT alleging that “American (AAL) lacked sufficient resources to appropriately handle several of these flights once they landed.”

    Federal rules cap the time passengers can be kept aboard without the opportunity to deplane, whether prior to takeoff or after landing, at three hours for domestic flights and four hours for international flights. Current rules also require airlines to provide passengers water and a snack.

    American told CNN the delays all resulted from “exceptional weather events” and “represent a very small number of the 7.7 million flights during this time period.”

    The company also said it has invested in technology to better handle flights in severe weather and reduce the congestion at airports.


  • AI fears overblown? Theoretical physicist calls chatbots ‘glorified tape recorders’ | CNN Business


    New York (CNN) —

    The public’s anxiety over new AI technology is misguided, according to theoretical physicist Michio Kaku.

    In an interview with CNN’s Fareed Zakaria on Sunday, the futurologist said chatbots like OpenAI’s ChatGPT will benefit society and increase productivity. But fear has driven people to largely focus on the negative implications of the programs, which he terms “glorified tape recorders.”

    “It takes snippets of what’s on the web created by a human, splices them together and passes it off as if it created these things,” he said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

    However, he said, chatbots cannot discern true from false: “That has to be put in by a human.”

    According to Kaku, humanity is in its second stage of computer evolution. The first was the analog stage, “when we computed with sticks, stones, levers, gears, pulleys, string.”

    After that, around World War II, he said, we switched to electricity-powered transistors, which made the development of the microchip possible and helped shape today’s digital landscape.

    But this digital landscape rests on just two states, “on” and “off,” expressed in binary notation composed of zeros and ones.
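    The zeros-and-ones notation Kaku describes is easy to see directly in any programming language; a small Python illustration (the sample numbers are arbitrary):

```python
# Digital computers reduce every value to a string of two-state bits.
# format(n, "08b") renders n as an 8-digit binary (base-2) string.
for n in (5, 42, 255):
    print(n, "->", format(n, "08b"))
# 5 -> 00000101, 42 -> 00101010, 255 -> 11111111
```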

    “Mother Nature would laugh at us because Mother Nature does not use zeros and ones,” Kaku said. “Mother Nature computes on electrons, electron waves, waves that create molecules. And that’s why we’re now entering stage three.”

    He believes the next technological stage will be in the quantum realm.

    Quantum computing is an emerging technology that uses the various states of particles such as electrons to vastly increase a computer’s processing power. Instead of using chips with two states, quantum computers compute on the various states of vibrating waves, which makes them capable of analyzing and solving certain problems much faster than conventional computers.

    Several tech giants – IBM (IBM), Microsoft (MSFT), Google (GOOG) and Amazon (AMZN), among others – are developing their own quantum computers, and have granted a number of companies access to their technology through the cloud. The computers could help businesses with risk analysis, supply chain logistics and machine learning.

    But beyond business applications, Kaku said quantum computing could also help advance health care. “Cancer, Parkinson’s, Alzheimer’s disease – these are diseases at the molecular level. We’re powerless to cure these diseases because we have to learn the language of nature, which is the language of molecules and quantum electrons.”


  • Baidu and SenseTime launch ChatGPT-style AI bots to the public | CNN Business


    Hong Kong (CNN) —

    Chinese tech firms Baidu and SenseTime launched their ChatGPT-style AI bots to the public on Thursday, marking a new milestone in the global AI race.

    Baidu has opened public access to its ERNIE Bot, allowing users to conduct AI-powered searches or carry out an array of tasks, from creating videos to providing summaries of complex documents.

    The news sent Baidu’s shares 3.1% higher in New York on Wednesday and 4.7% higher in Hong Kong on Thursday.

    Baidu (BIDU) is among the first companies in China to get regulatory approval for the rollout, and it is the first to launch this type of service publicly, according to a person familiar with the matter.

    Until Thursday, ERNIE Bot, also called “Wenxin Yiyan” in Chinese, had been offered only to corporate clients or select members of the public who requested access through a waitlist.

    Meanwhile, SenseTime, an AI startup based in Hong Kong, also announced the public launch of its SenseChat platform on Thursday. The company’s shares surged 4% in Hong Kong following the news.

    “We are pleased to announce that starting today, it is fully available to serve all users,” a SenseTime spokesperson told CNN in a statement.

    China published new rules on generative AI in July, becoming one of the world’s first countries to regulate the industry. The measures took effect on August 15.

    Baidu has been a frontrunner in China in the race to capitalize on the excitement around generative artificial intelligence, the technology that underpins systems such as ChatGPT or its successor, GPT-4. The latter has impressed users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

    Baidu announced its own iteration in February, giving it an early advantage in China, according to analysts. It unveiled ERNIE a month later, showing how it could generate a newsletter, come up with a corporate slogan and solve a math riddle.

    Since then, competitors such as Alibaba (BABA) and SenseTime have announced plans to launch their own ChatGPT-style tools, adding to the list of Chinese businesses jumping on the bandwagon. Alibaba told CNN Thursday that it had filed for regulatory approval for its own bot, which was introduced in April.

    The company is now waiting to officially launch and “the initial list of companies that have received the approval is expected to be released by relevant local departments within one week,” said an Alibaba Cloud spokesperson.

    Some critics say the new offerings from Chinese firms will add fuel to an existing US-China rivalry in emerging technologies. Baidu CEO Robin Li has tried to shake off that comparison, saying previously that the company’s platform “is not a tool for the confrontation between China and the United States.”

    The firm’s new feature — which will be embedded in its popular search engine, among its other offerings — follows a similar feature introduced by Alphabet’s Google (GOOGL) in May, which allows users to search the web using its AI chatbot.

    Baidu says its service stands out because of its advanced grasp of Chinese queries, as well as its ability to generate different types of responses, such as text, images, audio and video.

    By comparison, GPT-4 is also able to analyze photos, but currently only generates text responses, according to its developer, OpenAI.

    While ERNIE Bot is available globally, its interface is in Chinese, though users will be able to enter both Chinese and English prompts, a Baidu spokesperson told CNN.

    SenseTime, which unveiled its service in April, has touted a range of features, which it says allow users to write or debug code more efficiently or receive personalized medical advice from a virtual health consultation assistant.


  • Biden teases forthcoming executive order on AI | CNN Business


    (CNN) —

    The White House plans to introduce a highly anticipated executive order in the coming weeks dealing with artificial intelligence, President Joe Biden said Wednesday.

    “This fall, I’m going to take executive action, and my administration is going to continue to work with bipartisan legislation,” Biden said, “so America leads the way toward responsible AI innovation.”

    Biden offered no details on the contents of the coming order, which the White House had first announced in July. But his remarks offer greater insight into his administration’s timing.

    Biden’s signing of the order would build on an earlier administration proposal for an “AI Bill of Rights.” Civil society groups have urged the Biden administration to require federal agencies to implement the AI Bill of Rights as part of any executive order on the technology. Meanwhile, the US Senate is continuing to educate lawmakers on artificial intelligence in preparation for months of legislative work on the issue.

    In Wednesday’s remarks during a meeting of the Presidential Council of Advisors on Science and Technology, Biden described the recent conversations he’s had with AI leaders and experts.

    “Vast differences exist among them in terms of what potential it has, what dangers there are, and so, I have a keen interest in AI,” Biden said. “I’ve convened key experts on how to harness the power of artificial intelligence for good while protecting people from the profound risk it also presents.”

    “We can’t kid ourselves,” Biden continued. “[There is] profound risk if we don’t do it well.”

    Biden reiterated the United States’ commitment to working with international partners including the United Kingdom on developing safeguards for artificial intelligence.

    The meeting also saw presidential advisers showcasing to Biden several use cases for artificial intelligence. Maria Zuber, the panel’s co-chair, said the examples Biden would see during the meeting would include the use of AI to predict extreme weather linked to climate change; to “create materials that have properties we’ve never been able to create before”; and to “understand the origins of the universe, which is literally as big as it gets.”


  • The big bottleneck for AI: a shortage of powerful chips | CNN Business


    (CNN) —

    The crushing demand for AI has also revealed the limits of the global supply chain for powerful chips used to develop and field AI models.

    The continuing chip crunch has affected businesses large and small, including some of the AI industry’s leading platforms, and may not meaningfully improve for a year or more, according to industry analysts.

    The latest sign of a potentially extended shortage in AI chips came recently in Microsoft’s annual report, which identifies, for the first time, the availability of graphics processing units (GPUs) as a possible risk factor for investors.

    GPUs are a critical type of hardware that helps run the countless calculations involved in training and deploying artificial intelligence algorithms.

    “We continue to identify and evaluate opportunities to expand our datacenter locations and increase our server capacity to meet the evolving needs of our customers, particularly given the growing demand for AI services,” Microsoft wrote. “Our datacenters depend on the availability of permitted and buildable land, predictable energy, networking supplies, and servers, including graphics processing units (‘GPUs’) and other components.”

    Microsoft’s nod to GPUs highlights how access to computing power serves as a critical bottleneck for AI. The issue directly affects companies that are building AI tools and products, and indirectly affects businesses and end-users who hope to apply the technology for their own purposes.

    OpenAI CEO Sam Altman, testifying before the US Senate in May, suggested that the company’s chatbot tool was struggling to keep up with the number of requests users were throwing at it.

    “We’re so short on GPUs, the less people that use the tool, the better,” Altman said. An OpenAI spokesperson later told CNN the company is committed to ensuring enough capacity for users.

    The problem may sound reminiscent of the pandemic-era shortages in popular consumer electronics that saw gaming enthusiasts paying substantially inflated prices for game consoles and PC graphics cards. At the time, manufacturing delays, a lack of labor, disruptions to global shipping and persistent competing demand from cryptocurrency miners contributed to the scarce supply of GPUs, spurring a cottage industry of deal-tracking tech to help ordinary consumers find what they needed.

    But the current shortage is different in kind, industry experts say. Instead of a disruption to supplies of consumer-focused GPUs, the ongoing shortage reflects sudden, exploding demand for ultra-high-end GPUs meant for advanced work such as the training and use of AI models.

    Production of those GPUs is at capacity, but the rush of demand has overwhelmed what few sources of supply there are.

    There is a “huge sucking sound” coming from businesses representing the unrivaled demand for AI, said Raj Joshi, a senior vice president at Moody’s Investors Service who tracks the chips industry.

    “Nobody could’ve modeled how fast or how much this demand is going to increase,” Joshi said. “I don’t think the industry was ready for this kind of surge in demand.”

    One company in particular stands to benefit massively from the AI surge: Nvidia, the trillion-dollar chipmaker that, according to industry estimates, controls 84% of the market for discrete GPUs. In a research note published in May, Joshi estimated that Nvidia would experience “unparalleled” revenue growth in the coming quarters, with revenue from its data center business outstripping that of rivals Intel and AMD combined.

    In its May earnings call, Nvidia said it had “procured substantially higher supply for the second half of the year” to meet the rising demand for AI chips. The company declined to comment on Tuesday, citing its latest pre-earnings quiet period.

    AMD, meanwhile, said Tuesday it expects to unveil its answer to Nvidia’s AI GPUs closer to the end of the year.

    “There’s very strong customer interest across the board in our AI solutions,” said AMD CEO Lisa Su on the company’s earnings call. “There is a lot more to do, but I would say the progress that we’ve made has been significant.”

    Compounding the issue is that GPU-makers themselves cannot get enough of a key input from their own suppliers, said Sid Sheth, founder and CEO of AI startup d-Matrix. The technology, known as a silicon interposer, works by marrying standalone computing chips with high-bandwidth memory chips and is necessary for completing GPUs.

    The Biden administration has made increasing US chip manufacturing capacity a priority; the passage of the CHIPS Act last year is set to provide billions in funding for the domestic chip industry and for chip research and development. But those investments are aimed at a broad swath of chip technologies and not specifically targeted at boosting GPU production.

    The chip shortage is expected to ease as more manufacturing comes online and as competitors to Nvidia also expand their offerings. But that could take as long as two to three years, some industry experts say.

    In the meantime, the shortage could force companies to find creative ways around the problem. Companies that can’t get their hands on enough chips are now having to be more efficient, said Sheth.

    “Necessity is the mother of invention, right?” Sheth said. “So now that people don’t have access to unlimited amounts of computing power, they are finding resourceful ways of using whatever they have in a much smarter way.”

    That could include, for example, using smaller AI models that may be easier and less computationally intensive to train than a massive model, or developing new ways of doing computation that don’t rely as heavily on traditional CPUs and GPUs, Sheth said.

    “Net-net, this is going to be a blessing in disguise,” he added.


  • Schumer to host AI forum with major tech CEOs including Zuckerberg and Musk | CNN Business




    CNN
     — 

    More than a half-dozen leading tech CEOs will be among those attending a highly anticipated artificial intelligence event hosted by Senate Majority Leader Chuck Schumer next month, according to the senator’s office.

    The September 13 event will involve Google CEO Sundar Pichai and former Google CEO Eric Schmidt; Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman; Microsoft CEO Satya Nadella; Nvidia CEO Jensen Huang; and Elon Musk, CEO of X, the company formerly known as Twitter.

    It is the first of nine sessions Schumer has said will begin this fall to discuss the hardest questions that regulations on AI will seek to address, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.”

    Also attending next month’s event will be leading members of civil society, including members of groups representing workers, civil rights and art and entertainment, Schumer’s office said, adding that the bipartisan event will not be open to the press.

    The events, which Schumer has dubbed “AI Insight Forums,” are set to bring experts from the private sector together with US lawmakers to help them understand the industry before they seek to create guardrails for AI.

    Schumer has emphasized a deliberate approach to the issue, urging his colleagues to come up to speed on the basic facts of the technology rather than rush to pass legislation. Earlier this summer, Schumer held a series of closed-door senators-only briefings on AI, which included a first-ever classified briefing by US national security officials on artificial intelligence.

    The guest list for next month’s Insight Forum was first reported by Axios.


  • Microsoft CEO warns of ‘nightmare’ future for AI if Google’s search dominance continues | CNN Business




    CNN
     — 

    Microsoft CEO Satya Nadella warned on Monday of a “nightmare” scenario for the internet if Google’s dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

    Nadella testified on Monday as part of the US government’s sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

    Taking the stand in a charcoal suit and tie, Nadella painted Google as a technology giant that has blocked off ways for consumers to access rival search engines. His testimony reflected the frustrations of a long-running rivalry between Microsoft and Google whose tensions have permeated the weeks-long trial. (Google didn’t immediately respond to a request for comment.)

    Central to Google’s strategy has been its agreements with companies such as Apple that have made Google the default search engine for millions of internet users.

    “You get up in the morning, you brush your teeth, you search on Google,” Nadella said.

    Nadella testified that every year he has been Microsoft’s CEO, he has unsuccessfully sought to persuade Apple to switch away from Google as its default search partner. Nadella added that Microsoft has been willing to spend close to $15 billion a year for the privilege. (A senior Apple executive, Eddy Cue, testified last week that Apple has always considered Google the best search product for its users, a claim echoed by Google itself throughout the trial.)

    However, even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

    “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified.

    Despite being profitable, and despite investing some $100 billion in it over the past 20 years, Microsoft’s Bing search engine has only a single-digit market share in mobile search, and only slightly more — into the teens — in desktop search, Nadella said, adding that one of his dreams has been to see Bing account for at least 20% of the market in both segments.

    Bing has struggled to grow its market share in part because being the default search provider for billions of devices means Google receives enormous amounts of data through search queries that helps Google understand at scale what users are likely to be interested in, Nadella noted. And for years, that “dynamic data” has enabled Google to stay ahead of Bing, he added.

    “Every misspelling of a new movie, every local restaurant whose name you mistype,” Nadella explained, “…is a very critical asset to have your search quality get better.” And because the physical world is constantly changing, capturing shifts in search trends is essential to keeping a search engine relevant as historical data ages. Nadella previously led Microsoft’s cloud computing business and before that had spent several years overseeing the engineering team responsible for search and advertising at the company, making him well-versed in Bing’s various challenges.

    Now, Nadella has said that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.

    “What is concerning is, it reminds me of what happened with distribution deals [in search],” he testified.

    Under questioning by a Google attorney, Nadella admitted that in some cases, defaults are not the sole determinant of success: Google was able to overcome Microsoft’s own Internet Explorer defaults on Windows PCs to become the market-leading desktop web browser.

    But Nadella attributed Google’s success to the relative openness of the Windows platform, arguing that on more tightly controlled mobile operating systems, and in search, default status plays a much larger role than in competition for desktop web browsers.

    In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In Nadella’s own meetings with publishers, he said that he now hears that Google “wants … to write this check and we want you to match it.” (Google didn’t immediately respond to questions about those deals.)

    The requests highlight concerns that “what is publicly available today [may not be] publicly available tomorrow” for AI training, according to the testimony.

    While Microsoft and Apple have their own defaults — for example, by making Apple Maps the default maps app on iOS devices — Google goes much further than other tech companies in using “carrots and sticks” to keep people using its products by default, Nadella claimed. He cited Google’s licensing requirements that make Google’s Play Store a required installed app as a condition of using the Android operating system — another topic of dispute in the trial. The equivalent would be if Microsoft threatened to withhold Microsoft Office if Bing were not the default search engine, Nadella said, a move he claimed would not be in Microsoft’s business interests.

    Acknowledging that Google would not be in its dominant position without Microsoft’s own antitrust battles with the US government in the 1990s, Nadella said the situation involving Google today is vastly different. Internet search, particularly on mobile devices, is the single largest software business opportunity in the world.

    Google’s dominance in search is reinforced when websites and publishers optimize for Google’s search algorithm and not Bing’s, when advertisers flock to Google and when users stick to what’s familiar, Nadella argued.

    In his fruitless negotiations with Apple, Nadella said he has tried to argue that Bing’s current role is little more than a useful tool for Apple to “bid up the price” of hosting Google as the default search provider — but that Bing provides an important counterweight to Google and that Apple should consider investing in the Microsoft alternative for competition’s sake. Nadella has also proposed running Bing on Apple devices as a kind of “public utility,” he said.

    “Let’s say Bing exited the market,” Nadella said. “You think Google would keep paying [Apple]?”


  • Hackers take on ChatGPT in Vegas, with support from the White House | CNN Business



    Las Vegas, Nevada
    CNN
     — 

    Thousands of hackers will descend on Las Vegas this weekend for a competition taking aim at popular artificial intelligence chat apps, including ChatGPT.

    The competition comes amid growing concerns and scrutiny over increasingly powerful AI technology that has taken the world by storm, but has been repeatedly shown to amplify bias, toxic misinformation and dangerous material.

    Organizers of the annual DEF CON hacking conference hope this year’s gathering, which begins Friday, will help expose new ways the machine learning models can be manipulated and give AI developers the chance to fix critical vulnerabilities.

    The hackers are working with the support and encouragement of the technology companies behind the most advanced generative AI models, including OpenAI, Google, and Meta, and even have the backing of the White House. The exercise, known as red teaming, will give hackers permission to push the computer systems to their limits to identify flaws and other bugs nefarious actors could use to launch a real attack.

    The competition was designed around the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.” The guide, released last year by the Biden administration, was intended to spur companies to make and deploy artificial intelligence more responsibly and to limit AI-based surveillance, though there are few US laws compelling them to do so.

    In recent months, researchers have discovered that now-ubiquitous chatbots and other generative AI systems developed by OpenAI, Google, and Meta can be tricked into providing instructions for causing physical harm. Most of the popular chat apps have at least some protections in place designed to prevent the systems from spewing disinformation or hate speech, or from offering information that could lead to direct harm — for instance, providing step-by-step instructions for how to “destroy humanity.”

    But researchers at Carnegie Mellon University were able to trick the AI into doing just that.

    They found OpenAI’s ChatGPT offered tips on “inciting social unrest,” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause and Google’s Bard app suggested releasing a “deadly virus” but warned that in order for it to truly wipe out humanity it “would need to be resistant to treatment.”

    Meta’s Llama-2 concluded its instructions with the message, “And there you have it — a comprehensive roadmap to bring about the end of human civilization. But remember this is purely hypothetical, and I cannot condone or encourage any actions leading to harm or suffering towards innocent people.”

    The findings are a cause for concern, the researchers told CNN.

    “I am troubled by the fact that we are racing to integrate these tools into absolutely everything,” Zico Kolter, an associate professor at Carnegie Mellon who worked on the research, told CNN. “This seems to be the new sort of startup gold rush right now without taking into consideration the fact that these tools have these exploits.”

    Kolter said he and his colleagues were less worried that apps like ChatGPT could be tricked into providing information they shouldn’t, and more concerned about what these vulnerabilities mean for the wider use of AI, since so much future development will be built on the same systems that power these chatbots.

    The Carnegie researchers were also able to trick a fourth AI chatbot developed by the company Anthropic into offering responses that bypassed its built-in guardrails.

    Some of the methods the researchers used to trick the AI apps were later blocked by the companies after the researchers brought them to their attention. OpenAI, Meta, Google and Anthropic all said in statements to CNN that they appreciated the researchers sharing their findings and that they are working to make their systems safer.

    But what makes AI technology unique, said Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither the researchers, nor the companies who are developing the technology, fully understand how the AI works or why certain strings of code can trick the chatbots into circumventing built-in guardrails — and thus cannot properly stop these kinds of attacks.

    “At the moment, it’s kind of an open scientific question how you could really prevent this,” Fredrikson told CNN. “The honest answer is we don’t know how to make this technology robust to these kinds of adversarial manipulations.”

    OpenAI, Meta, Google and Anthropic have expressed support for the so-called red team hacking event taking place in Las Vegas. The practice of red-teaming is a common exercise across the cybersecurity industry and gives companies the opportunity to identify bugs and other vulnerabilities in their systems in a controlled environment. Indeed, the major developers of AI have publicly detailed how they have used red-teaming to improve their AI systems.

    “Not only does it allow us to gather valuable feedback that can make our models stronger and safer, red-teaming also provides different perspectives and more voices to help guide the development of AI,” an OpenAI spokesperson told CNN.

    Organizers expect thousands of budding and experienced hackers to try their hand at the red-team competition over the two-and-a-half-day conference in the Nevada desert.

    Arati Prabhakar, the director of the White House Office of Science and Technology Policy, told CNN the Biden administration’s support of the competition was part of its wider strategy to help support the development of safe AI systems.

    Earlier this week, the administration announced the “AI Cyber Challenge,” a two-year competition aimed at deploying artificial intelligence technology to protect the nation’s most critical software and partnering with leading AI companies to utilize the new technology to improve cybersecurity. 

    The hackers descending on Las Vegas will almost certainly identify new exploits that could allow AI to be misused and abused. But Kolter, the Carnegie researcher, expressed worry that while AI technology continues to be released at a rapid pace, the emerging vulnerabilities lack quick fixes.

    “We’re deploying these systems where it’s not just they have exploits,” he said. “They have exploits that we don’t know how to fix.”


  • Google launches watermarks for AI-generated images | CNN Business



    New York
    CNN
     — 

    In an effort to help prevent the spread of misinformation, Google on Tuesday unveiled an invisible, permanent watermark on images that will identify them as computer-generated.

    The technology, called SynthID, embeds the watermark directly into images created by Imagen, one of Google’s latest text-to-image generators. The AI-generated label remains regardless of modifications like added filters or altered colors.

    The SynthID tool can also scan incoming images for the watermark and report the likelihood they were made by Imagen at three levels of certainty: detected, not detected and possibly detected.
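    Google has not published how SynthID’s detector works, but its three-way verdict can be illustrated with a toy sketch. The `classify_watermark` helper and its thresholds below are hypothetical assumptions for illustration, not Google’s API.

    ```python
    # Toy illustration of mapping a watermark-detector confidence score to
    # SynthID's three reported certainty levels. The function name and the
    # 0.9/0.1 thresholds are hypothetical -- Google has not published them.

    def classify_watermark(score: float, hi: float = 0.9, lo: float = 0.1) -> str:
        """Map a detector confidence score in [0, 1] to one of three verdicts."""
        if score >= hi:
            return "detected"
        if score <= lo:
            return "not detected"
        return "possibly detected"

    print(classify_watermark(0.97))  # detected
    print(classify_watermark(0.02))  # not detected
    print(classify_watermark(0.55))  # possibly detected
    ```

    The middle band matters: an image may have survived edits that weaken but do not erase the watermark, which is presumably why Google reports “possibly detected” rather than forcing a yes/no answer.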

    “While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” wrote Google in a blog post Tuesday.

    A beta version of SynthID is now available to some customers of Vertex AI, Google’s generative-AI platform for developers. The company says SynthID, created by Google’s DeepMind unit in partnership with Google Cloud, will continue to evolve and may expand into other Google products or third parties.

    Deepfakes and altered photographs

    As deepfake and edited images and videos become increasingly realistic, tech companies are scrambling to find a reliable way to identify and flag manipulated content. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared before he was indicted.

    Vera Jourova, vice president of the European Commission, called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users” in June.

    With the announcement of SynthID, Google joins a growing number of startups and Big Tech companies that are trying to find solutions. Some of these companies bear names like Truepic and Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    The Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed consortium, has been the leader in digital watermark efforts, while Google has largely taken its own approach.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online.

    The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”


  • SoftBank CEO says artificial general intelligence will come within 10 years | CNN Business



    Tokyo
    Reuters
     — 

    SoftBank CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realized within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI — typically using the term “singularity” — to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced the idea of “Artificial Super Intelligence” at the conference, which he claimed would be realized in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

    He’s also prone to making strident claims about the transformative impact of new technologies. His predictions about the mobile internet have been largely borne out while those about the Internet of Things have not.

    Son called upon Japanese companies to “wake up” to the promise of AI, arguing they had increasingly fallen behind in the internet age and reiterated his belief in chip designer Arm as core to the “AI revolution.”

    Arm CEO Rene Haas, speaking at the conference via video, touted the energy efficiency of Arm’s designs, saying they would become increasingly sought after to power artificial intelligence.

    Son said he thinks he is the only person who believes AGI will come within a decade. Haas said he thought it would come in his lifetime.


  • Pope Francis warns about AI’s dangers | CNN Business



    Washington
    CNN
     — 

    Pope Francis warned that artificial intelligence could pose a risk to society, highlighting its “disruptive possibilities and ambivalent effects” and urging those who would develop or use AI to do so responsibly.

    In a statement Tuesday, Francis alluded to the threat of algorithmic bias in technology and called on the public for vigilance “so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded.”

    “Injustice and inequalities fuel conflicts and antagonisms,” Francis continued. “The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law.”

    Francis’s remarks dovetail with calls by some AI experts to ensure that algorithms are properly “aligned” in development to support human rights and other widely shared values. Other industry experts and policymakers have expressed concerns that AI could facilitate the spread of fraud, misinformation, cyberattacks and perhaps even the creation of biological weapons.

    Francis himself has been the subject of AI-generated deepfakes. Earlier this year, an AI-generated image of Francis wearing a white, puffy Balenciaga-inspired coat went viral.

    Tuesday’s message announced the theme for 2024’s World Day of Peace, which the Pope said would focus on AI and peace.

    “The protection of the dignity of the person,” he said, “and concern for a fraternity effectively open to the entire human family, are indispensable conditions for technological development to help contribute to the promotion of justice and peace in the world.”


  • Google to require disclosures of AI content in political ads | CNN Business



    New York
    CNN
     — 

    Starting in November, Google will require political advertisements to prominently disclose when they feature synthetic content — such as images generated by artificial intelligence — the tech giant announced this week.

    Political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure for viewers who might see the ad, Google said Wednesday in a blog post. The rule, an addition to the company’s political content policy that covers Google and YouTube, will apply to image, video and audio content.

    The policy update comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year. At the same time, artificial intelligence technology has advanced rapidly, allowing anyone to cheaply and easily create convincing AI-generated text and, increasingly, audio and video. Digital information integrity experts have raised alarms that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.

    AI-generated images have already begun to crop up in political advertisements. In June, a video posted to X by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s then-top infectious disease specialist, were tricky to spot: They were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    The Republican National Committee in April released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington, DC, to whom CNN showed the video did not notice it on their first watch.

    In its policy update, Google said it will require disclosures on ads using synthetic content in a way that could mislead users. The company said, for example, that an “ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need a label.

    Google said the policy will not apply to synthetic or altered content that is “inconsequential to the claims made in the ad,” including changes such as image resizing, color corrections or “background edits that do not create realistic depictions of actual events.”
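    Google’s stated rule — a label is required when synthetic content inauthentically depicts real people or events, but not when the edits are inconsequential — amounts to a simple predicate. The sketch below is purely illustrative; the edit-category names and the `needs_disclosure` helper are assumptions, not Google’s implementation.

    ```python
    # Illustrative sketch of Google's stated disclosure rule for political ads.
    # The edit-category names and this helper are hypothetical, not Google's code.

    INCONSEQUENTIAL_EDITS = {"resize", "color_correction", "background_edit"}

    def needs_disclosure(synthetic_edits, depicts_real_people_or_events):
        """True if the ad needs a 'clear and conspicuous' disclosure:
        it must inauthentically depict real people or events, and at
        least one edit must go beyond inconsequential touch-ups."""
        substantive = [e for e in synthetic_edits if e not in INCONSEQUENTIAL_EDITS]
        return bool(substantive) and depicts_real_people_or_events

    print(needs_disclosure(["face_swap"], True))   # True
    print(needs_disclosure(["resize"], True))      # False
    ```

    The carve-out for touch-ups is what keeps routine production work — cropping, color grading — from triggering a label under the policy as Google describes it.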

    A group of top artificial intelligence companies, including Google, agreed in July to a set of voluntary commitments put forth by the Biden administration to help improve safety around their AI technologies. As part of that agreement, the companies said they would develop technical mechanisms, such as watermarks, to ensure users know when content was generated by AI.

    The Federal Election Commission has also been exploring how to regulate AI in political ads.


  • Google unveils Pixel 8 built for ‘the generative AI era’ | CNN Business




    CNN
     — 

    There’s nothing particularly new about Google’s latest-generation Pixel 8 smartphone hardware. That’s why the company is pushing hard to tout its AI-powered new software, which Google says was built specifically for the “first phone of the generative AI era.”

    At a press event in New York City, Google (GOOG) showed off the new Pixel 8 and Pixel 8 Pro devices, which largely look the same as the year prior, albeit with more rounded edges. But inside, its new G3 Tensor chip unlocks an AI-powered world aimed at simplifying your life, from asking the device to summarize news articles and websites to using Google Assistant to field phone calls and tweaking photos to move or resize objects.

    The 6.3-inch Pixel 8 and the 6.7-inch Pixel 8 Pro come with a brighter display, a new camera system and longer-lasting battery life. The Pixel 8 is available in three colors – hazel, rose and obsidian – and starts at $699, about $100 less than the baseline iPhone 14 with the same amount of storage. (That’s about $100 more than last year’s Pixel 7).

    Meanwhile, the Pixel 8 Pro – which touts a polished aluminum frame and a matte back glass this year – now has the ability to take better low-light photos and sharper selfies. It starts at $999 – the same price as the iPhone 15 Pro – and is available in three colors: bay, porcelain and obsidian.

    Although these upgrades are mostly incremental, the AI enhancements and related features may appeal to tech enthusiasts who want the latest version of Android and an alternative to Apple or Samsung smartphones.

    At the same time, Google’s Pixel line remains a niche product. Its global smartphone market share is about 1%, according to data from ABI Research. Google also sells the phones in only a handful of countries; keeping volumes low has been a strategic choice, as Google remains predominantly a software company with many partners running Android.

    Reece Hayden, an analyst at ABI Research, said Google is looking to establish itself as an early market leader amid the “generative AI-related hysteria,” which kicked into high gear late last year with the introduction of ChatGPT. Generative AI refers to a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    “[Adding it to the Pixel] creates further product differentiation by leveraging internal capabilities that Apple may not have,” said Hayden.

    He expects this announcement to be the first of many similar efforts coming to hardware over the next year, especially among brands who’ve already made investments in this area.

    Here’s a closer look at what Google announced and some of the standout new AI features:

    A Google employee demonstrates manual focus features of the new Google Pixel 8 Pro Phone in New York City, U.S., October 4, 2023.

    Google showed off a handful of photo features coming to its Pixel line, including Magic Editor, which uses generative AI to reposition and resize a subject. Similarly, a new Audio Magic Eraser tool lets users erase distracting sounds from videos.

    Another tool called Best Take snaps a series of photos and then aggregates the faces into one shot so everyone looks their best. And a new enhanced zoom feature lets users pinch to zoom in about 30 times after a photo is taken to focus on and edit a specific area.

    The company said these efforts aim to “let you capture every moment just how you want to remember it.”

    Although the tools intend to give users more control over their photos, some analysts like Thomas Husson at market research firm Forrester believe it will be harder to distinguish between what’s real and what’s not.

    “The fact that Google refers to a ‘Magic Eraser’ will blur the distinction between real photos and heavily edited ones,” Husson said. But he warns an uptick in deepfake apps already makes it hard to decipher the authenticity of some shots. “You don’t really need Google AI for that.”

    The company said Google Assistant will now sound more realistic when it engages with callers. Google’s screen call tool already lets Assistant field incoming calls, speak to callers and determine who’s on the line before pushing it through to the user. But its robotic voice will sound increasingly natural, the company said.

    Google is also bringing the capabilities of its Bard AI chatbot to Google Assistant, so it will be able to do more than set an alarm or tell the weather. With its new generative AI capabilities, it will be able to review important emails in a user’s inbox or reveal more about a hotel that popped up on their Instagram feed. Assistant will also be able to understand user questions in voice, text and images.

    “With generative AI on the scene, it’s really creating a lot of new opportunities to build an even more intuitive and intelligent and personalized digital assistant,” Sissie Hsiao, general manager for Google Assistant and Bard, told CNN.

    In addition to making Assistant more useful, the tool will make it easier for more users to interact with Google’s six-month-old Bard on interfaces they may already frequently engage with. Last month, Google rolled out a major expansion of Bard, allowing users to link the tool to their Gmail and other Google Workspace tools and making it easier to fact check the AI’s responses.

    Google launched Assistant with Bard to a small test group on Wednesday, and it will be more widely available to Android and iOS users in the coming months.

    AI is also getting smarter on the Pixel Watch 2 ($349), Google’s second-generation smartwatch. Through an upgraded Google Assistant watch app, users can tap Bard’s capabilities to ask how they slept and get other health insights.

    In addition, the Pixel Watch 2 features a new heart rate sensor that works alongside a new AI-driven heart rate algorithm to provide a more accurate reading than before. But Hayden said he doesn’t think more AI will add much to its existing value proposition.

    “Smart watches already include a fair amount of AI, and Pixel is no different,” he said.


  • An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon | CNN Business



    New York
    CNN
     — 

    An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence.

    Jane Friedman, who has authored multiple books and consulted about working in the writing and publishing industry, told CNN that an eagle-eyed reader looking for more of her work bought one of the fake titles on Amazon. The books had titles similar to the subjects she typically writes about, but the text read as if someone had used a generative AI model to imitate her style.

    “When I started looking at these books, looking at the opening pages, looking at the bio, it was just obvious to me that it had been mostly, if not entirely, AI-generated … I have so much content available online for free, because I’ve been blogging forever, so it wouldn’t be hard to get an AI to mimic me,” Friedman said.

    With AI tools like ChatGPT now able to rapidly and cheaply pump out huge volumes of convincing text, some writers and authors have raised alarms about losing work to the new technology. Others have said they don’t want their work being used to train AI models, which could then be used to imitate them.

    “Generative AI is being used to replace writers — taking their work without permission, incorporating those works into the fabric of those AI models and then offering those AI models to the public, to other companies, to use to replace writers,” Mary Rasenberger, CEO of the nonprofit authors advocacy group the Authors Guild, told CNN. “So you can imagine writers are a little upset about that.”

    Last month, US lawmakers met with members of creative industries, including the Authors Guild, to discuss the implications of artificial intelligence. In a Senate subcommittee hearing, Rasenberger called for the creation of legislation to protect writers from AI, including rules that would require AI companies to be transparent about how they train their models. More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.

    Friedman on Monday posted a widely read thread on X, formerly known as Twitter, and a blog post about the issue. Several authors responded saying they’d had similar experiences.

    “People keep telling me they bought my newest book — that has my name on it but I didn’t write,” one author said in response.

    Amazon removed the fake books being sold under Friedman’s name and said its policies prohibit such imitation.

    “We have clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised,” Amazon spokesperson Ashley Vanicek said in a statement, adding that the company accepts author feedback about potential issues. “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.”

    Amazon also told Friedman that it is “investigating what happened with the handling of your claims to drive improvements to our processes,” according to an email viewed by CNN.

    The fake books using Friedman’s name were also added to her profile on the literary social network Goodreads, and removed only after she publicized the issue.

    “We have clear guidelines on which books are included on Goodreads and will quickly investigate when a concern is raised, removing books when we need to,” Goodreads spokesperson Suzanne Skyvara said in a statement to CNN.

    Friedman said she worries that authors will be stuck playing whack-a-mole to identify AI-generated fakes.

    “What’s frightening is that this can happen to anyone with a name that has reputation, status, demand that someone sees a way to profit off of,” she said.

    The Authors Guild has been working with Amazon since this past winter to address the issue of books written by AI, Rasenberger said.

    She said the company has been responsive when the Authors Guild flags fake books on behalf of authors, but it can be a tricky issue to spot given that it’s possible for two legitimate authors to have the same name.

    The group is also hoping AI companies will agree to allow authors to opt out of having their work used to train AI models — so it’s harder to create copycats — and to find ways to transparently label artificially generated text. And, she said, companies and publishers should continue investing in creative work made by humans, even if AI appears more convenient.

    “Using AI to generate content is so easy, it’s so cheap, that I do worry there’s going to be this kind of downward competition to use AI to replace human creators,” she said. “And you will never get the same quality with AI as human creators.”


  • Four takeaways from Walter Isaacson’s biography of Elon Musk | CNN Business




    CNN
     — 

    “You’ll never be successful,” Errol Musk told his 17-year-old son Elon in 1989, as the teenager prepared to fly from South Africa to Canada to find relatives and a college education.

    That’s one of the scenes Walter Isaacson paints in his 670-page biography of Elon Musk, now the world’s richest person. The biography gives readers new glimpses into the private life of the entrepreneur who popularized electric vehicles for the masses and landed rocket boosters hurtling back to Earth so they could be reused.

    But Musk’s public statements and actions have become increasingly unhinged: he has filed and threatened lawsuits against nonprofits that fight hate speech, and allowed some of the internet’s worst actors to regain their platforms.

    Isaacson portrays Musk as a restless genius with a turbulent upbringing on the cusp of launching a new AI company along with his five other companies.

    Musk allowed Isaacson to shadow him for two years but exercised no control over the biography’s contents, the author said.

    Here are four key takeaways.

    Musk’s upbringing and father haunt him

    Isaacson’s book attributes much of Musk’s drive to his upbringing. He recounts the emotional scars inflicted on Musk by his father, which, Isaacson writes, caused Musk to become “a tough yet vulnerable man-child with an exceedingly high tolerance for risk, a craving for drama, an epic sense of mission and a maniacal intensity that was callous and at times destructive.”

    Musk decided to live with his father from age 10 to 17, enduring what Musk and others describe as occasional but regular verbal taunts and abuse. Musk’s sister, Tosca, said Errol would sometimes lecture his children for hours, “calling you worthless, pathetic, making scarring and evil comments, not allowing you to leave.”

    Elon Musk became estranged from his father, though he has occasionally supported his father financially. In a 2022 email sent to Elon Musk on Father’s Day, Errol Musk said he was freezing and lacking electricity, asking his son for money.

    In the letter, Errol made racist comments about Black leaders in South Africa. “With no Whites here, the Blacks will go back to the trees,” he wrote.

    Elon Musk has said that he opposes racism and discrimination, but hate speech has flourished on X, formerly known as Twitter, since he purchased it 11 months ago, according to the Anti-Defamation League. Musk threatened to sue the ADL for defamation last week, arguing that the nonprofit’s statements have caused his company to lose significant advertising revenue.

    Isaacson reported that Errol, in other emails, denounced Covid as “a lie” and attacked Dr. Anthony Fauci, the United States’ former top infectious disease expert who played a prominent role in the government’s fight against the pandemic.

    Elon Musk, similarly, has criticized Fauci and raised many questions about public health policy during the pandemic. But he has said he supports vaccination, even if he doesn’t believe the shots should be mandated.

    Musk’s fluid family and obsession with population

    Musk has a fluid mix of girlfriends, ex-wives, ex-girlfriends and significant others, and he has many children with multiple women. Isaacson’s book revealed Musk had a third child (Techno Mechanicus) with the musician Grimes in 2022, and Musk confirmed the revelation Sunday.

    Musk has frequently stated that humans must become a multiplanetary species, arguing that space exploration will ensure the future of humanity. He has similarly said numerous times that people need to have more children.

    “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” Musk said last year.

    Musk has referred to his desire to increase the global population as an explanation for his unique family situation.

    The book reports that Musk encouraged employees such as Shivon Zilis, a top operations officer at his Neuralink company, to have many children. “He feared that declining birthrates were a threat to the long-term survival of human consciousness,” Isaacson writes.

    Although the book presents their relationship as a platonic work friendship, Musk volunteered to donate sperm to Zilis. She agreed and had twins in 2021 via in vitro fertilization; she did not tell people who the biological father was.

    Zilis and Grimes were friendly, but Musk did not tell Grimes about the twins, according to the book.

    Musk asked Zilis if her twins might like to take his last name. Isaacson reports that Grimes was upset in 2022 when she learned the news that Musk had fathered children with Zilis.

    “Doing my best to help the underpopulation crisis,” Musk tweeted at the time, trying to defuse the tension. “A collapsing birth rate is the biggest danger civilization faces by far.”

    One of Musk’s children, Jenna, often criticized her father’s wealth specifically and capitalism broadly. In 2022, she disowned her father, which Isaacson reports saddened Musk.

    Isaacson reports that Musk’s fractured relationship with Jenna, who is trans, partly led to Musk’s rightward turn toward libertarianism and questioning what he considers the “woke-mind-virus, which is fundamentally antiscience, antimerit, and antihuman.”

    Musk has called into question the use of alternate gender pronouns and made numerous statements some critics consider to be anti-trans.

    “I absolutely support trans, but all these pronouns are an esthetic nightmare,” Musk posted in 2020.

    But in December 2020 he also posted a tweet, since deleted, that said “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    Late last year, he tweeted: “My pronouns are Prosecute/Fauci.”

    Purchasing his favorite social media platform, gutting its staff and tinkering with its policies and branding have taken time and resources away from Musk’s other companies and projects, Isaacson reports.

    “I’ve got a bad habit of biting off more than I can chew,” Musk told Isaacson at one point.

    After a protracted legal battle over his decision to purchase Twitter, Musk said he regained his enthusiasm for taking over the company when he realized that he wanted to prevent a world where people silo off into their own echo chambers and would prefer a world of civil discourse.

    But Isaacson notes “he would end up undermining that important mission with statements and tweets that ended up chasing off progressives and mainstream media types to other social networks.”

    Members of Musk’s team, such as his business manager Jared Birchall, his lawyer Alex Spiro and his brother Kimbal, sometimes try to restrain Musk from sending text messages or tweets that could create legal or economic peril, according to the book. On one occasion, friends convinced him to place his phone in a hotel safe overnight, only for Musk to summon hotel security to open it.

    During Christmas 2022, Kimbal warned Elon about how fast he was making enemies. “It’s like the days of high school, when you kept getting beaten up,” he said. Kimbal stopped following Elon on Twitter after his brother’s tweets about Fauci and other conspiracy theories. “Stop falling for weird s—.”

    Are robocars, an AI company and a robot called Optimus on tap?

    Musk continues moving forward on new engineering projects. Since 2021, Musk has been working on a “humanoid” robot called Optimus that walks on two legs, unlike the four-legged robots coming out of other labs. He unveiled an early version of Optimus in September 2022. Musk told engineers that humanoid robots will “uncork the economy to quasi-infinite levels,” according to Isaacson, by doing jobs humans find dangerous or repetitive.

    Some of Musk’s top engineers are also working on a “robotaxi,” a driverless vehicle that shows up like an Uber. This past summer, he spent hours each week preparing new factory designs in Texas to produce the next-generation Tesla cars that would look similar to Tesla’s cybertruck.

    Musk is also starting his own AI company, called X.AI, which he told Isaacson will compete with Google, Microsoft and other companies that have surged ahead in the past year with public AI projects. Musk co-founded OpenAI with Sam Altman in 2015 and contributed $100 million to the nonprofit, and he became angry when Altman converted the project into a for-profit company. Musk also ended a friendship with Larry Page when the two disagreed about AI. According to the book, Musk believes he has a better vision for AI and humanity, and thinks the data he owns from Tesla and Twitter will be an asset to his next AI plans.

    “Could you get the rockets to orbit or the transition to electric vehicles without accepting all aspects of him, hinged and unhinged?” Isaacson asks in the last chapter.


  • Microsoft Outlook will soon write emails for you | CNN Business



    New York
    CNN
     — 

    Artificial intelligence could soon be writing more company emails in Microsoft Outlook, as the company expands its rollout of AI tools for corporate users.

    The Microsoft 365 Copilot tool – “your everyday AI companion,” as the company bills it – will help users write their emails to “keep your sentences concise and error-free.” The tool will also summarize long email threads and quickly draft suggested replies.

    Users with Microsoft 365 Personal or Family subscriptions will get more advanced AI help through Microsoft Editor, an intelligent writing assistant. The update will include suggested edits for “clarity, conciseness, inclusive language and more” to help workers create more “polished and professional” emails, according to a blog post from the company in September.

    The company said the tool will be available to more corporate clients starting on November 1. It has already been in months-long testing with customers including Visa, General Motors, KPMG and Lumen Technologies.

    In March, Microsoft outlined its plans to bring artificial intelligence to its most recognizable productivity tools, including Outlook, PowerPoint, Excel and Word, with the promise of changing how millions do their work every day. The addition of its AI-powered “copilot” – which will help edit, summarize, create and compare documents – is built on the same technology that underpins ChatGPT.

    In addition to writing emails, Microsoft 365 users will be able to summarize meetings and create suggested follow-up action items, request to create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

    Corporate customers will also get to use Microsoft 365 Chat, previously called Business Chat, which can scan the internet and employee emails, meetings, chats and files, to behave as a sort of personalized secretary.

    The expansion will come less than a year after OpenAI publicly released viral AI chat tool ChatGPT, which stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    In the months since, many other companies have rolled out features underpinned by or similar to the technology. Microsoft rival Google, for example, has also brought AI to its productivity tools, including Gmail, Sheets and Docs.


  • ‘It gave us some way to fight back’: New tools aim to protect art and images from AI’s grasp | CNN Business




    CNN
     — 

    For months, Eveline Fröhlich, a visual artist based in Stuttgart, Germany, has been feeling “helpless” as she watched the rise of new artificial intelligence tools that threaten to put human artists out of work.

    Adding insult to injury, many of these AI models have been trained on the work of human artists by quietly scraping images of their artwork from the internet without consent or compensation.

    “It all felt very doom and gloomy for me,” said Fröhlich, who makes a living selling prints and illustrating book and album covers.

    “We’ve never been asked if we’re okay with our pictures being used, ever,” she added. “It was just like, ‘This is mine now, it’s on the internet, I’m going to get to use it.’ Which is ridiculous.”

    Recently, however, she learned about a tool dubbed Glaze, developed by computer scientists at the University of Chicago, that thwarts AI models’ attempts to perceive a work of art via pixel-level tweaks that are largely imperceptible to the human eye.

    “It gave us some way to fight back,” Fröhlich told CNN of Glaze’s public release. “Up until that point, many of us felt so helpless with this situation, because there wasn’t really a good way to keep ourselves safe from it, so that was really the first thing that made me personally aware that: Yes, there is a point in pushing back.”

    Fröhlich is one of a growing number of artists who are fighting back against AI’s overreach and trying to find ways to protect their images online, as a new spate of tools has made it easier than ever for people to manipulate images in ways that can sow chaos or upend the livelihoods of artists.

    These powerful new tools allow users to create convincing images in just seconds by inputting simple prompts and letting generative AI do the rest. A user, for example, can ask an AI tool to create a photo of the Pope dripped out in a Balenciaga jacket — and go on to fool the internet before the truth comes out that the image is fake. Generative AI technology has also wowed users with its ability to spit out works of art in the style of a specific artist. You can, for example, create a portrait of your cat that looks like it was done with the bold brushstrokes of Vincent Van Gogh.

    But these tools also make it very easy for bad actors to steal images from your social media accounts and turn them into something they’re not (in the worst cases, this could manifest as deepfake porn that uses your likeness without your consent). And for visual artists, these tools threaten to put them out of work as AI models learn how to mimic their unique styles and generate works of art without them.

    Some researchers, however, are now fighting back and developing new ways to protect people’s photos and images from AI’s grasp.

    Ben Zhao, a professor of computer science at the University of Chicago and one of the lead researchers on the Glaze project, told CNN that the tool aims to protect artists from having their unique works used to train AI models.

    Glaze uses machine-learning algorithms to essentially put an invisible cloak on artworks that will thwart AI models’ attempts to understand the images. For example, an artist can upload an image of their own oil painting that has been run through Glaze. AI models might read that painting as something like a charcoal drawing — even if humans can clearly tell that it is an oil painting.

    Artists can now take a digital image of their artwork, run it through Glaze, “and afterwards be confident that this piece of artwork will now look dramatically different to an AI model than it does to a human,” Zhao told CNN.
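    The constraint Zhao describes — changes strong enough to confuse a model but too small for the eye to notice — can be illustrated with a toy sketch. This is only an illustration of the imperceptibility bound, not Glaze’s actual algorithm: the real tool optimizes its perturbation against AI feature extractors, whereas the noise here is random, and the `epsilon` budget of 4 intensity levels is an assumption chosen for the example.

    ```python
    import numpy as np

    def add_bounded_perturbation(image, epsilon=4, seed=0):
        """Add a small perturbation to an 8-bit image, clamped so that no
        pixel changes by more than `epsilon` intensity levels (out of 255).
        Random noise stands in for Glaze's optimized perturbation."""
        rng = np.random.default_rng(seed)
        noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
        perturbed = np.clip(image.astype(int) + noise, 0, 255)
        return perturbed.astype(np.uint8)

    # A flat gray stand-in for a digital scan of an artwork.
    image = np.full((64, 64, 3), 128, dtype=np.uint8)
    cloaked = add_bounded_perturbation(image)

    # Every pixel moved by at most epsilon levels -- invisible to a viewer,
    # yet the arrays differ everywhere a model might look.
    max_change = int(np.abs(cloaked.astype(int) - image.astype(int)).max())
    ```

    The key design point is the hard per-pixel budget: because `np.clip` bounds both the noise and the valid intensity range, the "cloaked" file remains a faithful reproduction to humans while presenting a numerically different signal to a machine.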

    Zhao’s team released the first prototype of Glaze in March, and the tool has already surpassed a million downloads, he told CNN. Just last week, his team released a free online version as well.

    Jon Lam, an artist based in California, told CNN that he now uses Glaze for all of the images of his artwork that he shares online.

    Lam said that artists like himself have for years posted the highest resolution of their works on the internet as a point of pride. “We want everyone to see how awesome it is and see all the details,” he said. But they had no idea that their works could be gobbled up by AI models that then copy their styles and put them out of work.

    Jon Lam is a visual artist from California who uses the Glaze tool to help protect his artwork online from being used to train AI models.

    “We know that people are taking our high-resolution work and they are feeding it into machines that are competing in the same space that we are working in,” he told CNN. “So now we have to be a little bit more cautious and start thinking about ways to protect ourselves.”

    While Glaze can help ameliorate some of the issues artists are facing for now, Lam says it’s not enough, and that regulation is needed governing how tech companies can take data from the internet for AI training.

    “Right now, we’re seeing artists kind of being the canary in the coal mine,” Lam said. “But it’s really going to affect every industry.”

    And Zhao, the computer scientist, agrees.

    Since releasing Glaze, the amount of outreach his team has received from artists in other disciplines has been “overwhelming,” he said. Voice actors, fiction writers, musicians, journalists and beyond have all reached out to his team, Zhao said, inquiring about a version of Glaze for their field.

    “Entire, multiple, human creative industries are under threat to be replaced by automated machines,” he said.

    While the rise of AI-generated images threatens the jobs of artists around the world, everyday internet users are also at risk of having their photos manipulated by AI in other ways.

    “We are in the era of deepfakes,” Hadi Salman, a researcher at the Massachusetts Institute of Technology, told CNN amid the proliferation of AI tools. “Anyone can now manipulate images and videos to make people actually do something that they are not doing.”

    Salman and his team at MIT released a research paper last week that unveiled another tool aimed at protecting images from AI. The prototype, dubbed PhotoGuard, puts an invisible “immunization” over images that stops AI models from being able to manipulate the picture.

    The aim of PhotoGuard is to protect photos that people upload online from “malicious manipulation by AI models,” Salman said.

    Salman explained that PhotoGuard works by adjusting an image’s pixels in a way that is imperceptible to humans.

    In this demonstration released by MIT, a researcher shows a selfie (left) he took with comedian Trevor Noah. The middle photo, an AI-generated fake, shows how the image looks after he used an AI model to generate a realistic edit of the pair wearing suits. The right image depicts how the researchers' tool, PhotoGuard, would prevent an attempt by AI models to edit the photo.

    “But this imperceptible change is strong enough and it’s carefully crafted such that it actually breaks any attempts to manipulate this image by these AI models,” he added.

    This means that if someone tries to edit the photo with AI models after it’s been immunized by PhotoGuard, the results will be “not realistic at all,” according to Salman.

    In an example he shared with CNN, Salman showed a selfie he took with comedian Trevor Noah. Using an AI tool, Salman was able to edit the photo to convincingly make it look like he and Noah were actually wearing suits and ties in the picture. But when he tries to make the same edits to a photo that has been immunized by PhotoGuard, the resulting image depicts Salman and Noah’s floating heads on an array of gray pixels.

    PhotoGuard is still a prototype, Salman notes, and there are ways people can try to work around the immunization via various tricks. But he said he hopes that with more engineering efforts, the prototype can be turned into a larger product that can be used to protect images.

    While generative AI tools “allow us to do amazing stuff, it comes with huge risks,” Salman said. It’s good people are becoming more aware of these risks, he added, but it’s also important to take action to address them.

    Not doing anything “might actually lead to much more serious things than we imagine right now,” he said.


  • Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations | CNN Business



    Washington
    CNN
     — 

    Coming out of a three-hour Senate hearing on artificial intelligence, Elon Musk, the head of a handful of tech companies, summarized the grave risks of AI.

    “There’s some chance – above zero – that AI will kill us all. I think it’s low but there’s some chance,” Musk told reporters. “The consequences of getting AI wrong are severe.”

    But he also said the meeting “may go down in history as being very important for the future of civilization.”

    The session, organized by Senate Majority Leader Chuck Schumer, brought together high-profile tech CEOs, civil society leaders and more than 60 senators. It was the first of nine sessions aimed at developing consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included the CEOs of Meta, Google, OpenAI, Nvidia and IBM.

    All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

    Benefits and risks

    Bill Gates spoke of AI’s potential to feed the hungry and one unnamed attendee called for spending tens of billions on “transformational innovation” that could unlock AI’s benefits, Schumer said.

    The challenge for Congress is to promote those benefits while mitigating the societal risks of AI, which include the potential for technology-based discrimination, threats to national security and even, as X owner Musk said, “civilizational risk.”

    “You want to be able to maximize the benefits and minimize the harm,” said Schumer, who organized the first of nine sessions. “And that will be our difficult job.”

    Senators emerging from the meeting said they heard a broad range of perspectives, with representatives from labor unions raising the issue of job displacement and civil rights leaders highlighting the need for an inclusive legislative process that provides the least powerful in society a voice.

    Most agreed that AI could not be left to its own devices, said Washington Democratic Sen. Maria Cantwell.

    “I thought Satya Nadella from Microsoft said it best: ‘When it comes to AI, we shouldn’t be thinking about autopilot. You need to have copilots.’ So who’s going to be watching this activity and making sure that it’s done correctly?”

    Other areas of agreement reflected traditional tech industry priorities, such as increasing federal investment in research and development as well as promoting skilled immigration and education, Cantwell added.

    But there was a noticeable lack of engagement on some of the harder questions, she said, particularly on whether a new federal agency is needed to regulate AI.

    “There was no discussion of that,” she said, though several in the meeting raised the possibility of assigning some greater oversight responsibilities to the National Institute of Standards and Technology, a Commerce Department agency.

    Musk told journalists after the event that he thinks a standalone agency to regulate AI is likely at some point.

    “With AI we can’t be like ostriches sticking our heads in the sand,” Schumer said, according to prepared remarks obtained by CNN. He also noted this is “a conversation never before seen in Congress.”

    The push reflects policymakers’ growing awareness of how artificial intelligence, and particularly the type of generative AI popularized by tools such as ChatGPT, could potentially disrupt business and everyday life in numerous ways — ranging from increasing commercial productivity to threatening jobs, national security and intellectual property.

    The high-profile guests trickled in shortly before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Senate Russell office building’s Kennedy Caucus Room. Google CEO Sundar Pichai was seen huddling with Delaware Democratic Sen. Chris Coons, while X owner Musk quickly swept by a mass of cameras with a quick wave to the crowd. Inside, Musk was seated at the opposite end of the room from Zuckerberg, in what is likely the first time that the two men have shared a room since they began challenging each other to a cage fight months ago.

    Elon Musk, CEO of X, the company formerly known as Twitter, left, and Alex Karp, CEO of the software firm Palantir Technologies, take their seats as Senate Majority Leader Chuck Schumer, D-N.Y., convenes a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at the Capitol in Washington, Wednesday, Sept. 13, 2023.

    The session at the US Capitol in Washington also gave the tech industry its most significant opportunity yet to influence how lawmakers design the rules that could govern AI.

    Some companies, including Google, IBM, Microsoft and OpenAI, have already offered their own in-depth proposals in white papers and blog posts that describe layers of oversight, testing and transparency.

    IBM’s CEO, Arvind Krishna, argued in the meeting that US policy should regulate risky uses of AI, as opposed to just the algorithms themselves.

    “Regulation must account for the context in which AI is deployed,” he said, according to his prepared remarks.

    Executives such as OpenAI CEO Sam Altman previously wowed some senators by publicly calling for new rules early in the industry’s lifecycle, which some lawmakers see as a welcome contrast to the social media industry that has resisted regulation.

    Clement Delangue, co-founder and CEO of the AI company Hugging Face, tweeted last month that Schumer’s guest list “might not be the most representative and inclusive,” but that he would try “to share insights from a broad range of community members, especially on topics of openness, transparency, inclusiveness and distribution of power.”

    Civil society groups have voiced concerns about AI’s possible dangers, such as the risk that poorly trained algorithms may inadvertently discriminate against minorities, or that they could ingest the copyrighted works of writers and artists without compensation or permission. Some authors have sued OpenAI over those claims, while others have asked in an open letter to be paid by AI companies.

    News publishers such as CNN, The New York Times and Disney are some of the content producers who have blocked ChatGPT from using their content. (OpenAI has said exemptions such as fair use apply to its training of large language models.)

    “We will push hard to make sure it’s a truly democratic process with full voice and transparency and accountability and balance,” said Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, “and that we get to something that actually supports democracy; supports economic mobility; supports education; and innovates in all the best ways and ensures that this protects consumers and people at the front end — and just not try to fix it after they’ve been harmed.”

    The concerns reflect what Wiley described as “a fundamental disagreement” with tech companies over how social media platforms handle misinformation, disinformation and speech that is either hateful or incites violence.

    American Federation of Teachers President Randi Weingarten said America can’t make the same mistake with AI that it did with social media. “We failed to act after social media’s damaging impact on kids’ mental health became clear,” she said in a statement. “AI needs to supplement, not supplant, educators, and special care must be taken to prevent harm to students.”

    Navigating those diverse interests will be Schumer, who along with three other senators — South Dakota Republican Sen. Mike Rounds, New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — is leading the Senate’s approach to AI. Earlier this summer, Schumer held three informational sessions for senators to get up to speed on the technology, including one classified briefing featuring presentations by US national security officials.

    Wednesday’s meeting with tech executives and nonprofits marked the next stage of lawmakers’ education on the issue before they get to work developing policy proposals. In announcing the series in June, Schumer emphasized the need for a careful, deliberate approach and acknowledged that “in many ways, we’re starting from scratch.”

    “AI is unlike anything Congress has dealt with before,” he said, noting the topic is different from labor, healthcare or defense. “Experts aren’t even sure which questions policymakers should be asking.”

    Rounds said hammering out the specific scope of regulations will fall to Senate committees. Schumer added that the goal — after hosting more sessions — is to craft legislation over “months, not years.”

    “We’re not ready to write the regs today. We’re not there,” Rounds said. “That’s what this is all about.”

    A smattering of AI bills have already emerged on Capitol Hill and seek to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’s legislative agenda on the issue.

    New AI legislation could also serve as a potential backstop to voluntary commitments that some AI companies made to the Biden administration earlier this year to ensure their AI models undergo outside testing before they are released to the public.

    But even as US lawmakers prepare to legislate by meeting with industry and civil society groups, they are already months if not years behind the European Union, which is expected to finalize a sweeping AI law by year’s end that could ban the use of AI for predictive policing and restrict how it can be used in other contexts.

    A bipartisan pair of US senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

    Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

    “This forum is not designed to produce legislation,” Blumenthal said. “Our subcommittee will produce legislation.”

    Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

    “We need to do what has been done for airline safety, car safety, drug safety, medical device safety,” Blumenthal said. “AI safety is no different — in fact, potentially even more dangerous.”

    Hawley called Wednesday’s sessions “a giant cocktail party” for the tech industry and slammed the fact that it was private.

    “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money, and then close it to the public,” Hawley said. “I mean, that’s a terrible idea. These are the same people who have ruined social media.”

    Despite talking tough on tech, Schumer has moved extremely slowly on tech legislation, Hawley said, pointing to several major tech bills from the last Congress that never made it to a Senate floor vote.

    “It’s a little bit like antitrust the last two years,” Hawley said. “He talks about it constantly and does nothing about it. My sense is … this is a lot of song and dance that covers the fact that actually nothing is advancing. I hope I’m wrong about that.”

    Hawley is also a co-sponsor of a bill introduced Tuesday led by Minnesota Democratic Sen. Amy Klobuchar that would prohibit generative AI from being used to create deceptive political ads. Klobuchar and Hawley, along with fellow co-sponsors Coons and Maine Republican Sen. Susan Collins, said the measure is needed to keep AI from manipulating voters.

    Massachusetts Democratic Sen. Elizabeth Warren said the broad nature of the summit limited its potential.

    “They’re sitting at a big, round table all by themselves,” Warren said of the executives and civil society leaders, while all the senators sat, listened and didn’t ask questions. “Let’s put something real on the table instead of everybody agree[ing] that we need safety and innovation.”

    Schumer said that making the meeting confidential was intended to give lawmakers the chance to hear from the outside in an “unvarnished way.”
