ReportWire

Tag: google inc

  • EU officials accuse Google of antitrust violations in its ad tech business | CNN Business

    Washington CNN —

    Google’s advertising business should be broken up, European Union officials said Wednesday, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    The formal accusations mark the latest antitrust challenge to Google over its sprawling ad tech business, following a lawsuit by the US Justice Department in January that also called for a breakup of the company.

    The EU Commission has submitted its allegations to Google in writing, officials said, kicking off a legal process that could potentially end in billions of dollars in fines in addition to a possible breakup that could impact part of its core advertising business.

    The commission alleges that since 2014, Google has unfairly boosted its own proprietary ad exchange — the online auction house known as AdX that matches advertisers and publishers — through its simultaneous ownership of some of the most popular ad tools for publishers and advertisers.

    For example, the commission claims, advertisers who used Google’s ad buying tools frequently had their purchases routed to AdX instead of to rival ad exchanges.

    Meanwhile, Google’s publisher-facing tools unfairly gave AdX a leg up over rival ad exchanges, the commission alleged, because Google’s publisher tools gave AdX competitive bidding information that the exchange could use to help advertisers win an auction.

    One proposed solution by the commission would spin off Google’s ad exchange and publisher tools from the ad-buying tools it provides to advertisers.

    “@Google controls both sides of the #adtech market: sell & buy,” tweeted Margrethe Vestager, the commission’s top competition official. “We are concerned that it may have abused its dominance to favour its own #AdX platform. If confirmed, this is illegal.”

    In a statement, Dan Taylor, Google’s vice president of global ads, said the EU’s probe “focuses on a narrow aspect of our advertising business,” that the company opposes the commission’s preliminary conclusions and that Google plans to “respond accordingly.”

    “Our advertising technology tools help websites and apps fund their content, and enable businesses of all sizes to effectively reach new customers. Google remains committed to creating value for our publisher and advertiser partners in this highly competitive sector,” Taylor said.

    A Google spokesperson told CNN Wednesday that the company has only just received the commission’s complaint and that it will take time to review the commission’s claims. Google also added that it will oppose calls for a breakup.

  • Google workers in London stage walkout over job cuts | CNN Business

    Reuters —

    Hundreds of Google employees staged a walkout at the company’s London offices on Tuesday, following a dispute over layoffs.

    In January, Google’s parent company Alphabet announced it was laying off 12,000 employees worldwide, equivalent to 6% of its global workforce.

    The move came amid a wave of job cuts across corporate America, particularly in the tech sector, which has so far seen companies shed more than 290,000 workers since the start of the year, according to tracking site Layoffs.fyi.

    Trade union Unite, which counts hundreds of Google’s UK employees among its members, said the company had ignored concerns put forward by employees.

    “Our members are clear: Google needs to listen to its own advice of not being evil,” said Unite regional officer Matt Whaley.

    “They and Unite will not back down until Google allows workers full union representation, engages properly with the consultation process and treats its staff with the respect and dignity they deserve.”

    A Google employee attending the protest, who asked not to be named for fear of retaliation, told Reuters that talks between employees and management had been “extremely frustrating.”

    “It has been difficult for those involved. We have a redundancy process for a reason, so that employees can make their voice heard,” they said. “But it feels as if our concerns have fallen on deaf ears.”

    Google’s senior management has been engaged in redundancy talks in many parts of Europe, in line with local employment laws.

    Last month, workers at the company’s Zurich office in Switzerland staged a similar walkout, with employee representatives claiming Google had rejected their proposals to reduce job cuts.

    “As we said on January 20, we’ve made the difficult decision to reduce our workforce by approximately 12,000 roles globally. We know this is a very challenging time for our employees,” a Google spokesperson said.

    “In the UK, we have been constructively engaging and listening to our employees through numerous meetings, and are working hard to bring them clarity and share updates as soon as we can in adherence with all UK processes and legal requirements.”

    Google employs more than 5,000 people in the United Kingdom.

  • Google is using AI to change how you shop | CNN Business

    CNN —

    Google wants to make it easier for online shoppers to know how clothing will look on them before making a purchase.

    The company on Wednesday announced a new virtual try-on feature that uses generative AI, the same technology underpinning a new crop of chatbots and image creation tools, to show clothes on a wide selection of body types.

    With the feature, shoppers can see how an item would drape, fold, cling, stretch or form wrinkles and shadows on a diverse set of models in various poses, according to the company.

    Google is also launching a feature that helps users find similar clothing pieces in different colors, patterns or styles, from merchants across the web, using a visual matching algorithm powered by AI.

    These efforts are part of Google’s bigger push to defend its search engine from the threat posed by a wave of new AI-powered tools in the wake of the viral success of ChatGPT. At the Google I/O developer conference last month, the company spent more than 90 minutes teasing a long list of AI announcements, including expanding access to its existing chatbot Bard and bringing new AI capabilities to Google Search.

    Google said it developed the virtual try-on option using many pairs of images of more than 80 models standing forward and sideways, from sizes XS to XL, and with varying skin tones, body shapes and ethnic backgrounds. The AI-powered tool then learned to match the shape of certain shirts in those positions to generate realistic images of the person from all angles.

    The feature will initially work with women’s tops from brands such as Anthropologie, Loft, H&M and Everlane. Google said it will expand to men’s shirts in the future and that the tool will become more precise over time.

    Google isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. And eBay recently rolled out an AI tool to help sellers generate product listing descriptions.

  • Arkansas governor signs sweeping bill imposing a minimum age limit for social media usage | CNN Business

    Washington CNN —

    Arkansas Gov. Sarah Huckabee Sanders has signed a sweeping bill imposing a minimum age limit for social media usage, in the latest example of states taking more aggressive steps intended to protect teens online.

    But even as Sanders signed the bill into law on Wednesday afternoon, the legislation appeared to contain vast loopholes and exemptions benefiting companies that lobbied on the bill and raising questions about how much of the industry it truly covers.

    The legislation, known as the Social Media Safety Act, takes effect in September and is aimed at giving parents more control over their kids’ social media usage, according to lawmakers. It defines social media companies as any online forum that lets users create public profiles and interact with each other through digital content.

    It requires companies that operate those services to verify the ages of all new users and, if the users are under 18 years old, to obtain a parent’s consent before allowing them to create an account. To perform the age checks, the law relies on third-party companies to verify users’ personal information, such as a driver’s license or photo ID.

    “While social media can be a great tool and a wonderful resource, it can have a massive negative impact on our kids,” Sanders said at a press conference before signing the bill.

    Utah finalized a similar law last month, raising concerns among some users and advocacy groups that the legislation could make user data less secure, internet access less private and infringe upon younger users’ basic rights.

    The push by states to legislate on social media comes after years of mounting scrutiny of the industry and claims that it has harmed users’ well-being and mental health, particularly among teens.

    Despite its seemingly universal scope, however, the new law, also known as SB396, includes numerous carveouts for certain types of digital services and, in some cases, individual companies. And although its sponsors have said the law is specifically meant to apply to certain platforms, including TikTok, parts of the legislative language appear to result in the exact opposite effect.

    In the final days of negotiation over the bill, Arkansas lawmakers approved an amendment that created several categorical exemptions from the age verification requirements. Media companies that “exclusively” offer subscription content; social media platforms that permit users to “generate short video clips of dancing, voice overs, or other acts of entertainment”; and companies that “exclusively offer” video gaming-focused social networking features were exempted.

    Another amendment carved out companies that sell cloud storage services, business cybersecurity services or educational technology and that simultaneously derive less than 25% of their total revenue from running a social media platform.

    Sen. Tyler Dees, a lead co-sponsor of the legislation, explained in remarks on the Arkansas Senate floor on April 6 that the exemptions and tweaks to the bill, some of which he said were made in consultation with Apple, Meta and Google, were intended to shield non-social media services from the bill’s age requirements and to focus attention on new accounts created by children, not existing adult accounts.

    “There’s other services that Google offers … like cloud storage, et cetera,” Dees said. “So that’s really the intent of carving out — like LinkedIn, that is a social – I’m sorry, that is a business networking site, and so that’s the intent of those bills.”

    Microsoft-owned LinkedIn is apparently exempt from SB396 under a provision that carves out companies that provide “career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.”

    Other lawmakers have questioned whether the legislation — which has now become law — exempts a giant of the social media industry: YouTube, whose auto-play features and algorithmic recommendation engine have been accused of promoting extremism and radicalizing viewers.

    The confusion over YouTube appears to stem from the carveout for businesses that offer cloud storage and that make less than 25% of their revenue from social media.

    What is unclear is whether YouTube is subject to SB396 because it is a distinct company within Google whose revenue comes almost entirely from operating a social media platform, or whether it is not covered because YouTube is a part of Google and Google is exempt because it derives only a small share of its revenues from YouTube.

    In response to questions by CNN, Dees said SB396 targets platforms including Facebook, Instagram and TikTok, but omitted any mention of Google and declined to answer whether YouTube specifically would be covered by the law.

    “The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”

    In remarks at Wednesday’s bill signing, Sanders told reporters that Google and Amazon are exempted from the law, implying that YouTube will not be subject to the age verification requirements imposed on other major social media sites.

    Meanwhile, Dees’ statement appeared to contradict the language in SB396 that purports to exempt any company that “allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative” — content that can be commonly found on TikTok, Snapchat and the other social media platforms Dees named.

    According to a Meta spokesperson, “We want teens to be safe online. We’ve developed more than 30 tools to support teens and families, including tools that let parents and teens work together to limit the amount of time teens spend on Instagram, and age-verification technology that helps teens have age-appropriate experiences.”

    Meta “automatically set teens’ accounts to private when they join Instagram, we’ve further restricted the options advertisers have to reach teens, as well as the information we use to show ads to teens… and we don’t allow content that promotes suicide, self-harm or eating disorders,” according to the spokesperson, who added: “We’ll continue to work closely with experts, policymakers and parents on these important issues.”

    Spokespeople for Snapchat, TikTok and YouTube didn’t immediately respond to a request for comment.

  • The largest newspaper publisher in the US sues Google, alleging online ad monopoly | CNN Business

    CNN —

    Gannett, the largest newspaper publisher in the United States, is suing Google, alleging the tech giant holds a monopoly over the digital ad market.

    The publisher of USA Today and more than 200 local publications filed the lawsuit in a New York federal court on Tuesday, and is seeking unspecified damages. Gannett argues in court documents that Google and its parent company, Alphabet, control how publishers buy and sell ads online.

    “The result is dramatically less revenue for publishers and Google’s ad-tech rivals, while Google enjoys exorbitant monopoly profits,” the lawsuit states.

    Google controls about a quarter of the US digital advertising market, with Meta, Amazon and TikTok combining for another third, according to eMarketer. News publishers and other websites combine for the other roughly 40%. Big Tech’s share of the market is beginning to erode slightly, but Google remains by far the largest individual player.

    That means publishers often rely at least in part on Google’s advertising technology to support their operations: Gannett says Google controls 90% of the ad market for publishers.

    Michael Reed, Gannett’s chairman and CEO, said in a statement Tuesday that Google’s dominance in the online advertising industry has come “at the expense of publishers, readers and everyone else.”

    “Digital advertising is the lifeblood of the online economy,” Reed added. “Without free and fair competition for digital ad space, publishers cannot invest in their newsrooms.”

    Dan Taylor, Google’s vice president of global ads, told CNN that the claims in the suit “are simply wrong.”

    “Publishers have many options to choose from when it comes to using advertising technology to monetize – in fact, Gannett uses dozens of competing ad services, including Google Ad Manager,” Taylor said in a statement Tuesday. “And when publishers choose to use Google tools, they keep the vast majority of revenue.”

    He continued: “We’ll show the court how our advertising products benefit publishers and help them fund their content online.”

    The legal action from Gannett comes as Google faces a growing number of antitrust complaints in the United States and the European Union over its advertising business, which remains its central moneymaker.

    EU officials said last week that Google’s advertising business should be broken up, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    Earlier this year, the Justice Department and eight states sued Google, accusing the company of harming competition with its dominance in the online advertising market and similarly calling for it to be broken up.

  • Google-parent stock drops on fears it could lose search market share to AI-powered rivals | CNN Business

    CNN —

    Shares of Google-parent Alphabet fell more than 3% in early trading Monday after a report sparked concerns that its core search engine could lose market share to AI-powered rivals, including Microsoft’s Bing.

    Last month, Google employees learned that Samsung was weighing making Bing the default search engine on its devices instead of Google’s search engine, prompting a “panic” inside the company, according to a report from the New York Times, citing internal messages and documents. (CNN has not reviewed the material.)

    In an effort to address the heightened competition, Google is said to be developing a new AI-powered search engine called Project “Magi,” according to the Times. The company, which reportedly has about 160 people working on the project, aims to change the way results appear in Google Search and will include an AI chat tool available to answer questions. The project is expected to be unveiled to the public next month, according to the report.

    In a statement sent to CNN, Google spokesperson Lara Levin said the company has been using AI for years to “improve the quality of our results” and “offer entirely new ways to search,” including with a feature rolled out last year that lets users search by combining images and words.

    “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information,” Levin said. “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

    Samsung did not immediately respond to a request for comment.

    Google’s search engine has dominated the market for two decades. But the viral success of ChatGPT, which can generate compelling written responses to user prompts, appeared to put Google on defense for the first time in years.

    In March, Google began opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT and promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    At an event in February, a Google executive also said the company will bring “the magic of generative AI” directly into its core search product and use artificial intelligence to pave the way for the “next frontier of our information products.”

    Microsoft, meanwhile, has invested in and partnered with OpenAI, the company behind ChatGPT, to deploy similar technology in Bing and other productivity tools. Other tech companies, including Meta, Baidu and IBM, as well as a slew of startups, are racing to develop and deploy AI-powered tools.

    But tech companies face risks in embracing this technology, which is known to make mistakes and “hallucinate” responses. That’s particularly true when it comes to search engines, a product that many use to find accurate and reliable information.

    Google was called out after a demo of Bard provided an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Microsoft’s Bing AI demo was also called out for several errors, including an apparent failure to differentiate between types of vacuums; the tool even made up information about certain products.

    In an interview with 60 Minutes that aired on Sunday, Google and Alphabet CEO Sundar Pichai stressed the need for companies to “be responsible in each step along the way” as they build and release AI tools.

    For Google, he said, that means allowing time for “user feedback” and making sure the company “can develop more robust safety layers before we build, before we deploy more capable models.”

    He also expressed his belief that these AI tools will ultimately have broad impacts on businesses, professions and society.

    “This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology,” he said. “And so, we are just in early days.”

  • Google earned $10 million by allowing misleading anti-abortion ads from ‘fake clinics,’ report says | CNN Business

    New York CNN —

    Google has earned more than $10 million over the past two years by allowing misleading advertisements for “fake” abortion clinics that aim to stop women from having the procedure, according to an estimate from a report released Thursday from the non-profit Center for Countering Digital Hate.

    The estimated amount is microscopic compared to the more than $200 billion Google generates from ad sales annually. But the report’s data hints at the broad reach pro-life groups can have by placing these advertisements in Google results for common phrases searched for by abortion seekers.

    Using Semrush, an analytics tool, researchers at the CCDH identified “188 fake clinic websites” that placed ads on Google between March 2021 and February of this year. CCDH estimates that ads for fake clinics were clicked on by users 13 million times during this period.

    Some users searching for “abortion clinics near me” on Google instead found results directing them toward so-called “crisis pregnancy centers” that may try to talk abortion-seekers out of treatment and offer medically unproven abortion pill reversal techniques, according to the report.

    Other Google searches populated by crisis clinic ads included “abortion pill,” “abortion clinic” and “planned parenthood,” the report said, with clinics in states where abortion is legal spending two times as much as those in states with bans.

    In the wake of the Supreme Court overturning Roe v Wade, Google faced calls from Congressional Democrats to do more to prevent searches for abortion clinics from returning results for misleading ads – as well as calls from Republican lawmakers to do the opposite. The dueling pressure from lawmakers highlighted how central Google can be for women searching for information on the procedure.

    In a statement Thursday, Google said its approach to abortion ads follows local laws and that any advertiser targeting certain keywords or phrases related to abortions must complete a certification to confirm if it does or does not provide abortion services.

    “We require any organization that wants to advertise to people seeking information about abortion services to be certified and clearly disclose whether they do or do not offer abortions,” a Google spokesperson told CNN. “We do not allow ads promoting abortion reversal treatments and we also prohibit advertisers from misleading people about the services they offer.”

    “We remove or block ads that violate these policies,” the company added.

    Google said it does not allow abortion pill reversal advertisements because the treatment isn’t approved by the FDA. In response to Thursday’s CCDH report, the company told CNN it took “enforcement action” on content violating this policy.

    Google has continued to face scrutiny in recent months for the steps it takes to protect abortion seekers’ location data.

    Nearly a dozen Senate Democrats wrote to Google in May with questions about how it deletes users’ location history when they have visited sensitive locations such as abortion clinics. The letter came after tests performed by The Washington Post and other privacy advocates appeared to show that Google was not quickly or consistently deleting users’ recorded visits to fertility centers or Planned Parenthood clinics.

    Google previously declined to comment on the lawmakers’ letter. Instead, it referred CNN to a company blog post that includes abortion clinics on a list of sensitive locations, but did not explain what it means when it claims the data will be deleted “soon after” a visit.

  • YouTube rolls out new policies for eating disorder content | CNN Business

    New York CNN —

    YouTube on Tuesday announced a series of changes to how it deals with content related to eating disorders.

    The platform has long removed content that glorifies or promotes eating disorders, and YouTube’s Community Guidelines will now also prohibit content that features behaviors such as purging after eating or extreme calorie counting that at-risk users could be inspired to imitate. For videos that feature such “imitable behaviors” in the context of recovery, YouTube will allow the content to remain on the site but restrict it to users who are logged into the site and are over the age of 18.

    The policy changes, developed in consultation with the National Eating Disorder Association and other nonprofit organizations, aim to ensure “that YouTube creates space for community recovery and resources, while continuing to protect our viewers,” YouTube’s Global Head of Healthcare Garth Graham told CNN in an interview.

    “We’re thinking about how to thread the needle in terms of essential conversations and information that people might have,” Graham said, “allowing people to hear stories about recovery and allowing people to hear educational information but also realizing that the display of that information … can serve as a trigger as well.”

    The changes come as social media platforms have faced increased scrutiny for their effects on the mental health of users, especially young people. In 2021, lawmakers called out Instagram and YouTube for promoting accounts featuring content depicting extreme weight loss and dieting to young users. And TikTok has faced criticism from an online safety group that claimed the app served eating disorder-related content to teens (although the platform pushed back against the research). They also follow several updates by YouTube in recent years to how it handles misinformation about medical issues such as abortion and vaccines.

    In addition to removing or age-restricting some videos, YouTube plans to add panels pointing viewers to crisis resources under eating disorder-related content in nine countries, with plans to expand to more areas. And when a creator’s video is removed for violating its eating disorder policy, Graham said, YouTube will send them resources about how to create content that’s less likely to harm other viewers.

    As with many social media policies, however, the challenge often isn’t introducing it but enforcing it, a challenge YouTube could face in discerning which videos are, for example, pro-recovery. YouTube said it will be rolling out enforcement of the policy globally in the coming weeks, and plans to use both human and automated moderation to review videos and their context.

    “These are complicated, societal public health [issues],” Graham said, “I want never to profess perfection, but to understand that we have to be proactive, we have to be thoughtful … it’s taken a while to get here because we wanted to articulate a process that had different layers and understood the challenges.”

  • YouTube removed video of Robert F. Kennedy, Jr. for violating vaccine misinformation policy | CNN Business

    New York CNN —

    YouTube said on Monday that it had removed a video of presidential hopeful Robert F. Kennedy, Jr. being interviewed by podcast host Jordan Peterson for violating its policy prohibiting vaccine misinformation.

    A YouTube spokesperson told CNN that the platform removed the video from Peterson’s channel because it does not allow “content that alleges that vaccines cause chronic side effects, outside of rare side effects that are recognized by health authorities.”

    The platform’s latest move comes as Kennedy, an environmental lawyer and anti-vaccine activist, has gained more mainstream attention with his views and recently had his account reinstated on Instagram as a result of his long-shot presidential campaign.

    YouTube began cracking down broadly on vaccine misinformation in 2021, following an earlier policy preventing false or misleading claims about Covid-19. At the time, YouTube said it would remove the channels of “several well-known vaccine misinformation spreaders,” including one belonging to the Children’s Health Defense, a group affiliated with Kennedy. (The YouTube channel for Kennedy’s presidential campaign remains active.)

    Under its policy, YouTube removes false claims about currently administered vaccines that the World Health Organization and local authorities have approved and confirmed to be safe.

    Although YouTube removed the video, it remains available on Twitter, showing the fractured approach to vaccine misinformation across the internet as his campaign gets underway.

    In a tweet on Sunday, Kennedy noted YouTube’s removal of the video, saying, “What do you think … Should social media platforms censor presidential candidates?”

    Kennedy also gained attention for his anti-vaccine views on a different podcast this week.

    On Monday, prominent vaccine scientist Peter Hotez said he was accosted outside of his home after a Twitter exchange with podcaster Joe Rogan, who challenged Hotez to debate Kennedy over the weekend.

    Hotez had tweeted in support of a Vice article criticizing Spotify’s handling of vaccine misinformation in an interview with Kennedy on Rogan’s show. After Twitter owner Elon Musk and hedge fund manager Bill Ackman weighed in, Hotez said he was “stalked in front of my home by a couple of antivaxxers.”

    Kennedy suggested to Hotez that they have a “respectful, congenial, informative debate.” Hotez said he would go on Rogan’s podcast but would not debate Kennedy.


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business



    New York
    CNN
     — 

    Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.


  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business




    CNN
     — 

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.
