ReportWire

Tag: alphabet inc

  • Apple and Google are teaming up on a plan to make Bluetooth trackers like AirTags safer | CNN Business





CNN —

    Apple and Google are working together on a new industry-wide effort to help limit the risk of Bluetooth devices like AirTags being used for unwanted tracking after a number of reports about these products enabling stalking.

    The companies announced a joint proposal on Tuesday for a new technical specification for manufacturers to build into future products. It would allow location-tracking devices to implement “unauthorized tracking detection and alerts” and work on both iOS and Android platforms.

    The goal, according to the proposal, is to enable “unwanted tracking detection” on these devices that “can both detect and alert individuals that a location tracker separated from the owner’s device is traveling with them.” It would also “provide means to find and disable the tracker.”

    In a press release, Google and Apple said manufacturers including Samsung, Tile, Chipolo, eufy Security, and Pebblebee have expressed support for the draft specification.

    “This new industry specification builds upon the AirTag protections, and through collaboration with Google results in a critical step forward to help combat unwanted tracking across iOS and Android,” said Ron Huang, Apple’s vice president of sensing and connectivity.

    The companies added that they have incorporated feedback and insight from device manufacturers, as well as safety and advocacy groups, into the development of the specification. The proposal has been submitted for review to the Internet Engineering Task Force (IETF), a standards development organization, the companies said.

    In 2021, Apple launched the AirTag, a $29 Tile-like Bluetooth locator that attaches to and helps users find items such as keys, wallets, laptops or even a car by giving nearly anything a digital footprint, enabling it to be found on a map. But soon after its launch, some experts warned that the devices could be used to track individuals without their consent.

    Late last year, Apple was sued by two women who allege their previous romantic partners used the company’s AirTag devices to track their whereabouts, potentially putting their safety at risk. Separately, in June 2022, a woman from Indiana allegedly used one to track and ultimately murder her boyfriend over an alleged affair, according to reports. AirTags have also allegedly been used to steal cars.

    Over time, Apple has worked with safety groups and law enforcement agencies to identify more ways to update its AirTag safety warnings, including alerting people sooner if the small Bluetooth tracker is suspected to be tracking someone.

    Location trackers aren’t new. The issue of unwanted tracking also “existed long before AirTags came on the market,” Erica Olsen, director of the Safety Net Project at the National Network to End Domestic Violence, told CNN last year.


  • Google workers in London stage walkout over job cuts | CNN Business





Reuters —

    Hundreds of Google employees staged a walkout at the company’s London offices on Tuesday, following a dispute over layoffs.

    In January, Google’s parent company Alphabet announced it was laying off 12,000 employees worldwide, equivalent to 6% of its global workforce.

    The move came amid a wave of job cuts across corporate America, particularly in the tech sector, which has so far seen companies shed more than 290,000 workers since the start of the year, according to tracking site Layoffs.fyi.

    Trade union Unite, which counts hundreds of Google’s UK employees among its members, said the company had ignored concerns put forward by employees.

    “Our members are clear: Google needs to listen to its own advice of not being evil,” said Unite regional officer Matt Whaley.

    “They and Unite will not back down until Google allows workers full union representation, engages properly with the consultation process and treats its staff with the respect and dignity they deserve.”

    A Google employee attending the protest, who asked not to be named for fear of retaliation, told Reuters that talks between employees and management had been “extremely frustrating.”

    “It has been difficult for those involved. We have a redundancy process for a reason, so that employees can make their voice heard,” they said. “But it feels as if our concerns have fallen on deaf ears.”

    Google’s senior management has been engaged in redundancy talks in many parts of Europe, in line with local employment laws.

    Last month, workers at the company’s Zurich office in Switzerland staged a similar walkout, with employee representatives claiming Google had rejected their proposals to reduce job cuts.

    “As we said on January 20, we’ve made the difficult decision to reduce our workforce by approximately 12,000 roles globally. We know this is a very challenging time for our employees,” a Google spokesperson said.

    “In the UK, we have been constructively engaging and listening to our employees through numerous meetings, and are working hard to bring them clarity and share updates as soon as we can in adherence with all UK processes and legal requirements.”

    Google employs more than 5,000 people in the United Kingdom.


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business




New York CNN —

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify if the image is organic, or if it has been manipulated or generated by AI.

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

“The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of training data.

    Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 per 1,000 images, as well as offering “annual contract deals” at a discount. Reality Defender said its pricing may vary based on several factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when it is first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
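The core idea behind content credentials can be sketched with a minimal, hypothetical example. This is not the real C2PA manifest format; it only illustrates the underlying principle of a tamper-evident provenance log, where each entry's hash covers the previous entry, so altering any earlier step invalidates everything after it.

```python
# Minimal sketch of the content-credentials idea: a tamper-evident log of
# how an image was made and edited. NOT the real C2PA format (which uses
# signed binary manifests); this only shows hash-chained provenance.
import hashlib
import json

def add_entry(chain: list[dict], action: str, tool: str) -> list[dict]:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "tool": tool, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return chain + [{**body, "hash": digest}]

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("action", "tool", "prev")}
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = add_entry([], "captured", "camera")
chain = add_entry(chain, "generated_fill", "ai_image_model")
assert verify(chain)
chain[0]["tool"] = "ai_image_model"   # tamper with the recorded history...
assert not verify(chain)              # ...and verification fails
```

The real standard additionally signs each claim with the tool vendor's cryptographic key, so an end user can check not just that the history is intact but who attested to each step.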

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • This is Google’s new folding phone | CNN Business





CNN —

    Just a few days ahead of its product launch, Google unveiled an early look at its first foldable smartphone.

    In a video posted to Twitter and YouTube, the company teased a Pixel phone with a vertical hinge that can be opened to reveal a tablet-like display.

    The company will host its annual developer conference at its Mountain View, California, headquarters next week, where it’s rumored to also introduce a Pixel 7a budget phone, its latest Android operating system and advancements to its AI-powered Bard chatbot.

    Although the company didn’t reveal specs for the Pixel Fold, it’s become increasingly common for companies to show off products leading up to their own events in an effort to drum up excitement and set expectations at a time when it’s difficult to surprise onlookers with something unexpected.

Despite great interest in foldable phones — and a resurgence of 90s-style flip phones among celebrities and TikTok influencers — the foldable market is relatively small, with Samsung dominating the category, followed by others including Motorola/Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to reach just shy of 2% in 2022.

    High price points have limited consumer adoption, too. The Pixel Fold is rumored to start at $1,700.

It’s not surprising that Google is dipping its toes into the world of foldables, but it’s possible the company waited to launch its own version until the technology became more advanced. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen, and most apps were not well optimized for the design.

    “Google has been working on bringing better user experiences to foldable devices from a software perspective, so when coupled with improvements on the hardware side the market conditions are at a state now where it makes sense for a Pixel Fold,” said Michael Inouye, an analyst at ABI Research.


  • Google is building an AI tool for journalists | CNN Business





CNN —

    Google is developing an artificial intelligence tool for news publishers that can generate article text and headlines, the company said, highlighting how the technology may soon transform the journalism industry.

    The tech giant said in a statement that it is looking to partner with news outlets on the AI tool’s use in newsrooms.

    “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” a Google spokesperson said, “just like we’re making assistive tools available for people in Gmail and in Google Docs.”

    The effort was first reported by The New York Times, which said the project is referred to internally as “Genesis” and has been pitched to The Times, The Washington Post and News Corp, which owns The Wall Street Journal.

    Google’s statement did not name those media companies but said the company is particularly focusing on “smaller publishers.” It added that the project is not aimed at replacing journalists nor their “essential role … in reporting, creating, and fact-checking their articles.”

    The new tool comes as tech companies, including Google, race to develop and deploy a new crop of generative AI features into applications used in the workplace, with the promise of streamlining tasks and making employees more productive.

    But these tools, which are trained on information online, have also raised concerns because of their potential to get facts wrong or “hallucinate” responses.

    News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on “Star Wars” published by Gizmodo earlier this month similarly required a correction. But both outlets have said they will still move forward with using the technology.


  • Google rolls out an alternative to the password | CNN Business




New York CNN —

    The days of having to think up new passwords that aren’t “password123” may be coming to an end – at least on your Google accounts.

    Google on Wednesday began rolling out support for passkeys, an alternative sign-in method for apps and websites that the company says is meant to serve as an “easier to use and more secure” alternative to the password.

    With passkeys, Google said users can access their various accounts the same way they might unlock their phone: with a fingerprint, face scan or screen lock PIN.

    The FIDO Alliance, a security consortium that counts many tech firms as members, previously developed standards for passkeys. Microsoft, Apple and Google have since been working to make passkeys a reality.

    Apple rolled out its passkey option with the release of iOS 16, allowing people to use the technology across apps, including Apple Wallet. Passkey support was rolled out on Chrome and Android devices in October 2022, but now the option is available across Google accounts, from Gmail to Drive.

    People are notoriously bad at picking passwords. But even adding a special character or alphanumeric combination can only add so much protection from bad actors. Passkeys, by comparison, are widely seen as more secure than other options, with Google calling them “resistant to online attacks like phishing.”
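The phishing resistance Google describes comes from the fact that a passkey credential is bound to the website's origin, and the device signs a fresh challenge on every login. A rough sketch of that property follows; note that real passkeys (FIDO2/WebAuthn) use per-site public-key credentials, and the HMAC key here is only a stand-in for the device-held secret:

```python
# Toy sketch of why passkeys resist phishing. Real passkeys use per-site
# public-key pairs under the FIDO2/WebAuthn standard; an HMAC key stands
# in here for the device-held secret. The point: the signature covers the
# origin, so a lookalike domain cannot produce a valid login assertion.
import hashlib
import hmac
import secrets

device_key = secrets.token_bytes(32)          # never leaves the device
registered_origin = "https://accounts.google.com"

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    # The authenticator mixes the origin it sees into what it signs.
    return hmac.new(device_key, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, assertion: bytes) -> bool:
    expected = hmac.new(device_key, registered_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = secrets.token_bytes(16)
assert server_verify(challenge, sign_assertion("https://accounts.google.com", challenge))
# A phishing page on a lookalike origin fails verification:
assert not server_verify(challenge, sign_assertion("https://accounts.goog1e.com", challenge))
```

Because there is no shared password to type, there is nothing for a fake login page to capture and replay, which is what makes this model stronger than even a long, random password.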

    Google will continue to support passwords and two-factor authentication as other account access options.


  • EU officials accuse Google of antitrust violations in its ad tech business | CNN Business




Washington CNN —

    Google’s advertising business should be broken up, European Union officials said Wednesday, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    The formal accusations mark the latest antitrust challenge to Google over its sprawling ad tech business, following a lawsuit by the US Justice Department in January that also called for a breakup of the company.

    The EU Commission has submitted its allegations to Google in writing, officials said, kicking off a legal process that could potentially end in billions of dollars in fines in addition to a possible breakup that could impact part of its core advertising business.

    The commission alleges that since 2014, Google has unfairly boosted its own proprietary ad exchange — the online auction house known as AdX that matches advertisers and publishers — through its simultaneous ownership of some of the most popular ad tools for publishers and advertisers.

    For example, the commission claims, advertisers who used Google’s ad buying tools frequently had their purchases routed to AdX instead of to rival ad exchanges.

    Meanwhile, Google’s publisher-facing tools unfairly gave AdX a leg up over rival ad exchanges, the commission alleged, because Google’s publisher tools gave AdX competitive bidding information that the exchange could use to help advertisers win an auction.
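The kind of advantage the commission describes can be illustrated with a toy auction. The numbers, function, and 80% bid shading are hypothetical; the sketch only shows why seeing rivals' bids before submitting your own lets you win every profitable auction at the minimum price:

```python
# Toy sketch of the informational advantage alleged by the commission:
# an exchange that sees rival bids first can win any profitable auction
# by bidding just above the best rival. All numbers are illustrative.

def run_auction(rival_bids: list[float], sees_rival_bids: bool,
                own_value: float) -> str:
    best_rival = max(rival_bids)
    if sees_rival_bids:
        # Bid the minimum needed to win, as long as it's still profitable.
        own_bid = best_rival + 0.01 if own_value > best_rival else 0.0
    else:
        own_bid = own_value * 0.8   # ordinary shaded bid, no rival info
    return "favored_exchange" if own_bid > best_rival else "rival"

rivals = [1.50, 2.10, 1.95]
print(run_auction(rivals, sees_rival_bids=True,  own_value=2.15))  # favored_exchange
print(run_auction(rivals, sees_rival_bids=False, own_value=2.15))  # rival
```

With the same valuation, the bidder without rival information shades its bid and loses, while the informed side wins by a penny, which is the asymmetry at the heart of the complaint.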

    One proposed solution by the commission would spin off Google’s ad exchange and publisher tools from the ad-buying tools it provides to advertisers.

    “@Google controls both sides of the #adtech market: sell & buy,” tweeted Margrethe Vestager, the commission’s top competition official. “We are concerned that it may have abused its dominance to favour its own #AdX platform. If confirmed, this is illegal.”

    In a statement, Dan Taylor, Google’s vice president of global ads, said the EU’s probe “focuses on a narrow aspect of our advertising business,” that the company opposes the commission’s preliminary conclusions and that Google plans to “respond accordingly.”

    “Our advertising technology tools help websites and apps fund their content, and enable businesses of all sizes to effectively reach new customers. Google remains committed to creating value for our publisher and advertiser partners in this highly competitive sector,” Taylor said.

    A Google spokesperson told CNN Wednesday that the company has only just received the commission’s complaint and that it will take time to review the commission’s claims. Google also added that it will oppose calls for a breakup.


  • Google is using AI to change how you shop | CNN Business





CNN —

    Google wants to make it easier for online shoppers to know how clothing will look on them before making a purchase.

    The company on Wednesday announced a new virtual try-on feature that uses generative AI, the same technology underpinning a new crop of chatbots and image creation tools, to show clothes on a wide selection of body types.

    With the feature, shoppers can see how an item would drape, fold, cling, stretch or form wrinkles and shadows on a diverse set of models in various poses, according to the company.

    Google is also launching a feature that helps users find similar clothing pieces in different colors, patterns or styles, from merchants across the web, using a visual matching algorithm powered by AI.

    These efforts are part of Google’s bigger push to defend its search engine from the threat posed by a wave of new AI-powered tools in the wake of the viral success of ChatGPT. At the Google I/O developer conference last month, the company spent more than 90 minutes teasing a long list of AI announcements, including expanding access to its existing chatbot Bard and bringing new AI capabilities to Google Search.

    Google said it developed the virtual try-on option using many pairs of images of more than 80 models standing forward and sideways, from sizes XS to XL, and with varying skin tones, body shapes and ethnic backgrounds. The AI-powered tool then learned to match the shape of certain shirts in those positions to generate realistic images of the person from all angles.

The feature will initially work with women’s tops from brands such as Anthropologie, Loft, H&M and Everlane. Google said it will expand to men’s shirts in the future. Google also said the tool will get more precise over time.

    Google isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. And eBay recently rolled out an AI tool to help sellers generate product listing descriptions.


  • Arkansas governor signs sweeping bill imposing a minimum age limit for social media usage | CNN Business




Washington CNN —

    Arkansas Gov. Sarah Huckabee Sanders has signed a sweeping bill imposing a minimum age limit for social media usage, in the latest example of states taking more aggressive steps intended to protect teens online.

But even as Sanders signed the bill into law on Wednesday afternoon, the legislation appeared to contain vast loopholes and exemptions benefiting companies that lobbied on the bill, raising questions about how much of the industry it truly covers.

The legislation, known as the Social Media Safety Act, takes effect in September and is aimed at giving parents more control over their kids’ social media usage, according to lawmakers. It defines social media companies as any online forum that lets users create public profiles and interact with each other through digital content.

    It requires companies that operate those services to verify the ages of all new users and, if the users are under 18 years old, to obtain a parent’s consent before allowing them to create an account. To perform the age checks, the law relies on third-party companies to verify users’ personal information, such as a driver’s license or photo ID.

    “While social media can be a great tool and a wonderful resource, it can have a massive negative impact on our kids,” Sanders said at a press conference before signing the bill.

    Utah finalized a similar law last month, raising concerns among some users and advocacy groups that the legislation could make user data less secure, internet access less private and infringe upon younger users’ basic rights.

    The push by states to legislate on social media comes after years of mounting scrutiny of the industry and claims that it has harmed users’ well-being and mental health, particularly among teens.

    Despite its seemingly universal scope, however, the new law, also known as SB396, includes numerous carveouts for certain types of digital services and, in some cases, individual companies. And although its sponsors have said the law is specifically meant to apply to certain platforms, including TikTok, parts of the legislative language appear to result in the exact opposite effect.

    In the final days of negotiation over the bill, Arkansas lawmakers approved an amendment that created several categorical exemptions from the age verification requirements. Media companies that “exclusively” offer subscription content; social media platforms that permit users to “generate short video clips of dancing, voice overs, or other acts of entertainment”; and companies that “exclusively offer” video gaming-focused social networking features were exempted.

    Another amendment carved out companies that sell cloud storage services, business cybersecurity services or educational technology and that simultaneously derive less than 25% of their total revenue from running a social media platform.

    Sen. Tyler Dees, a lead co-sponsor of the legislation, explained in remarks on the Arkansas senate floor on April 6 that the exemptions and tweaks to the bill, some of which he said were made in consultation with Apple, Meta and Google, were intended to shield non-social media services from the bill’s age requirements and to focus attention on new accounts created by children, not existing adult accounts.

    “There’s other services that Google offers … like cloud storage, et cetera,” Dees said. “So that’s really the intent of carving out — like LinkedIn, that is a social – I’m sorry, that is a business networking site, and so that’s the intent of those bills.”

    Microsoft-owned LinkedIn is apparently exempt from SB396 under a provision that carves out companies that provide “career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.”

    Other lawmakers have questioned whether the legislation — which has now become law — exempts a giant of the social media industry: YouTube, whose auto-play features and algorithmic recommendation engine have been accused of promoting extremism and radicalizing viewers.

    The confusion over YouTube appears to stem from the carveout for businesses that offer cloud storage and that make less than 25% of their revenue from social media.

    What is unclear is whether YouTube is subject to SB396 because it is a distinct company within Google whose revenue comes almost entirely from operating a social media platform, or whether it is not covered because YouTube is a part of Google and Google is exempt because it derives only a small share of its revenues from YouTube.

    In response to questions by CNN, Dees said SB396 targets platforms including Facebook, Instagram and TikTok, but omitted any mention of Google and declined to answer whether YouTube specifically would be covered by the law.

    “The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”

    In remarks at Wednesday’s bill signing, Sanders told reporters that Google and Amazon are exempted from the law, implying that YouTube will not be subject to the age verification requirements imposed on other major social media sites.

Meanwhile, Dees’ statement appeared to contradict the language in SB396 that purports to exempt any company that “allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative” — content that can be commonly found on TikTok, Snapchat and the other social media platforms Dees named.

    According to a Meta spokesperson, “We want teens to be safe online. We’ve developed more than 30 tools to support teens and families, including tools that let parents and teens work together to limit the amount of time teens spend on Instagram, and age-verification technology that helps teens have age-appropriate experiences.”

    Meta “automatically set teens’ accounts to private when they join Instagram, we’ve further restricted the options advertisers have to reach teens, as well as the information we use to show ads to teens… and we don’t allow content that promotes suicide, self-harm or eating disorders,” according to the spokesperson, who added: “We’ll continue to work closely with experts, policymakers and parents on these important issues.”

    Spokespeople for Snapchat, TikTok and YouTube didn’t immediately respond to a request for comment.


  • Senate Democrats write to Google over concerns about abortion-seekers’ location data | CNN Business



    Washington CNN —

    Nearly a dozen Senate Democrats wrote to Google this week with questions about how it deletes users’ location history when they have visited sensitive locations such as abortion clinics, expressing concerns that the company may not have been consistently deleting the data as promised.

    The letter dated Monday and led by Sens. Amy Klobuchar, Elizabeth Warren and Mazie Hirono seeks answers from Google about the types of locations Google considers to be sensitive and how long it takes for the company to automatically delete visit history.

    The letter comes after tests performed by The Washington Post and other privacy advocates appeared to show that Google was not quickly or consistently deleting users’ recorded visits to fertility centers or Planned Parenthood clinics.

    “This data is extremely personal and includes information about reproductive health care,” the senators wrote. “We are also concerned that it can be used to target advertisements for services that may be unnecessary or potentially harmful physically, psychologically, or emotionally.”

    Concerns about the security of location data have spiked in Washington since the Supreme Court overturned Roe v. Wade last year, opening the door to state laws restricting or penalizing abortion-seekers. Under those laws, privacy advocates have said, states could potentially compel tech companies to hand over location data that might reveal whether a person has illegally sought an abortion.

    “Claiming and publicly announcing that Google will delete sensitive location data, without consistently doing so, could be considered a deceptive practice,” the senators added, implying that Google’s conduct could be grounds for an investigation by the Federal Trade Commission, which is authorized to police unfair and deceptive business practices.

    Google declined to comment Wednesday on the lawmakers’ letter, instead referring CNN to a blog post that answers some but not all of the senators’ questions.

    Google defines sensitive locations as “including counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others,” according to an update to the blog post dated May 12. “If you visit a general purpose medical facility (like a hospital), the visit may persist.”

    The blog post does not, however, address the senators’ request for Google to explain what it means when it claims the data will be deleted “soon after” a visit.


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business




    CNN —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

    The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday by Clarkson Law Firm. The firm filed a similar suit against ChatGPT-maker OpenAI last month; OpenAI did not respond to a request for comment on that suit.

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.


  • A foldable phone, new tablet and lots of AI: What Google unveiled at its big developer event | CNN Business




    CNN —

    Google on Wednesday unveiled its latest lineup of hardware products, including its first foldable phone and a new tablet, as well as plans to roll out new AI features to its search engine and productivity tools.

    The updates, announced at its annual Google I/O developer conference, come as the company is simultaneously trying to push beyond its core advertising business with new devices while also racing to defend its search engine from the threat posed by a wave of new AI-powered tools.

    In a sign of where Google’s focus currently lies, the company spent more than 90 minutes teasing a long list of new AI features before mentioning hardware updates.

    Here’s what Google announced at the event.

    Google became the latest tech company to unveil a foldable smartphone. Like other foldables, the $1,799 Pixel Fold features a vertical hinge that can be opened to reveal a tablet-like display. But Google calls the Fold the thinnest foldable on the market.

    “It took some clever engineering work redesigning components like our speakers, our battery and haptics,” said George Hwang, a product manager at Google, on a call ahead of the announcement. The company packed a Pixel phone into a less than 6 mm body – about two thirds of the thickness of its other Pixel phones.

    The Pixel Fold is very much a phone first: when unfolded, it opens into a 7.6-inch screen and pivots on Google’s custom-built 180-degree hinge. That hinge mechanism is moved entirely out from under the display to improve dust resistance and decrease the device’s overall thickness, according to the company.

    The Pixel Fold includes features you’d find on a Pixel, such as long exposure, unblur and magic eraser, which lets users remove unwanted or distracting objects. It also has Fold-specific tools such as dual-screen live translate, which lets a user communicate in another language with the help of fast audio and text translations on the outer screen.

    Google said it optimized its top apps to take advantage of the larger screen but “there’s still work to be done” because “optimizing for a new foldable form factor takes time,” Hwang said. “It’s a process that we’re committed to and it requires steep investment with our developer partners across Android,” Hwang added.

    Google is far from the first to embrace foldables, but it’s possible it waited to launch its own version until the technology became more advanced. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen and most apps were not well optimized for the design.

    But even now, the future for foldables remains uncertain. Most apps are still not optimized for foldable devices; prices remain very high; and Google’s chief rival, Apple, has yet to embrace the option.

    Despite great consumer interest in foldable phones — and a resurgence in 90s-style flip phones among celebrities and TikTok influencers — the foldable market is relatively small, with Samsung dominating the category, followed by others including Motorola, Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to come in just shy of 2% in 2022.

    The Pixel Fold will be available in the US, UK, Germany and Japan. The company said the device will start shipping next month.

    A look at Google's Pixel 7a lineup

    On the surface, the 7a looks similar to the Pixel 7 and 7 Pro, with the same camera bar along the back. It comes with the typical advancements you’d expect from any smartphone upgrade – a better display, advanced camera and longer-lasting battery. But the 7a now boasts a Tensor G2 processor and a Titan M2 security chip, which bring advanced processing and new artificial intelligence features. It also offers wireless charging for the first time on an A-series model.

    The Pixel lineup has long been known for its cameras, and the 7a is no exception. It’s packed with upgrades, including a 64-megapixel main camera – the largest sensor on a Pixel A series to date, which will help with improved image quality, low light performance and other features. It also offers a new 13-megapixel ultra-wide camera for capturing even wider shots and a new 13-megapixel front camera. For the first time, each camera enables 4K video.

    The 7a also supports many significant Pixel features, including unblur, magic eraser and an improved Night Sight that’s two times faster and sharper than its predecessor. It also allows users to capture long exposure and enhanced zoom.

    The Pixel 7a comes in several colors, including charcoal, snow, sea and coral, and starts at $499 via the Google Store on May 10.

    The Pixel A series has long been aimed at cost-conscious buyers who want good features at a reasonable price, but its reach is limited. Google sells between eight and 10 million Pixel devices each year, according to ABI Research.

    “Generally, the smartphones were really meant for Google to showcase how software, and now AI capabilities, could be effectively optimized on hardware and improve the Android user experience,” said David McQueen, an analyst at ABI Research. “Google has purposely kept volume sales limited as it also has to be mindful of its relationship with other smartphone manufacturers that use the Android OS.”

    The Google Pixel tablet

    While phones were a key focus at the event, Google also refreshed other parts of its hardware lineup.

    Google introduced the Pixel Tablet, which is intended for use around the house – from turning off the lights to setting the thermostat without getting off the couch.

    The tablet, which has rounded edges and corners, comes in three colors: porcelain, hazel and rose, and starts at $499. It will be available on June 20.

    Under the hood, the 11-inch tablet is powered by Google’s Tensor G2 chip, which brings long-lasting battery life and AI features to the device. It also offers a front-facing camera, an 8-megapixel rear camera, and a charging dock.

    Google is also moving forward with plans to bring AI chat features to its core search engine amid a renewed arms race over the technology in Silicon Valley.

    The company said it is introducing the next evolution of Google Search, which will use an AI-powered chatbot to answer questions “you never thought Search could answer” and to help get users the information they want quicker than ever.

    With the update, the look and feel of Google Search results will be noticeably different. When users type a query into the main search bar, they will automatically see an AI-generated response in addition to traditional results.

    Users can now sign up for the new Google Search, which will first launch in the United States, via the Google app or Chrome’s desktop browser. A limited number of users will have access to it in the weeks ahead, according to the company, before it scales upward.

    Google is expanding access to its existing chatbot Bard, which operates outside the search engine and can help users do tasks such as outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    The tool, which was previously available to early users via a waitlist only in the US, will soon be available for all users in 120 countries and 40 languages.

    Google is also launching extensions for Bard from its own services, such as Gmail, Sheets and Docs, allowing users to ask questions and collaborate with the chatbot within the apps they’re using.

    Google also announced PaLM 2, its latest large language model to rival ChatGPT-creator OpenAI’s GPT-4.

    The model marks a big step forward for the technology that powers the company’s AI products, and promises to be better at logic, common-sense reasoning and mathematics. It can also generate specialized code in different programming languages.


  • The largest newspaper publisher in the US sues Google, alleging online ad monopoly | CNN Business




    CNN —

    Gannett, the largest newspaper publisher in the United States, is suing Google, alleging the tech giant holds a monopoly over the digital ad market.

    The publisher of USA Today and more than 200 local publications filed the lawsuit in a New York federal court on Tuesday and is seeking unspecified damages. Gannett argues in court documents that Google and its parent company, Alphabet, control how publishers buy and sell ads online.

    “The result is dramatically less revenue for publishers and Google’s ad-tech rivals, while Google enjoys exorbitant monopoly profits,” the lawsuit states.

    Google controls about a quarter of the US digital advertising market, with Meta, Amazon and TikTok combining for another third, according to eMarketer. News publishers and other websites combine for the other roughly 40%. Big Tech’s share of the market is beginning to erode slightly, but Google remains by far the largest individual player.

    That means publishers often rely at least in part on Google’s advertising technology to support their operations: Gannett says Google controls 90% of the ad market for publishers.

    Michael Reed, Gannett’s chairman and CEO, said in a statement Tuesday that Google’s dominance in the online advertising industry has come “at the expense of publishers, readers and everyone else.”

    “Digital advertising is the lifeblood of the online economy,” Reed added. “Without free and fair competition for digital ad space, publishers cannot invest in their newsrooms.”

    Dan Taylor, Google’s vice president of global ads, told CNN that the claims in the suit “are simply wrong.”

    “Publishers have many options to choose from when it comes to using advertising technology to monetize – in fact, Gannett uses dozens of competing ad services, including Google Ad Manager,” Taylor said in a statement Tuesday. “And when publishers choose to use Google tools, they keep the vast majority of revenue.”

    He continued: “We’ll show the court how our advertising products benefit publishers and help them fund their content online.”

    The legal action from Gannett comes as Google faces a growing number of antitrust complaints in the United States and the European Union over its advertising business, which remains its central moneymaker.

    EU officials said last week that Google’s advertising business should be broken up, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    Earlier this year, the Justice Department and eight states sued Google, accusing the company of harming competition with its dominance in the online advertising market and similarly calling for it to be broken up.


  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business




    CNN —

    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.


  • Google earned $10 million by allowing misleading anti-abortion ads from ‘fake clinics,’ report says | CNN Business



    New York CNN —

    Google has earned more than $10 million over the past two years by allowing misleading advertisements for “fake” abortion clinics that aim to stop women from having the procedure, according to an estimate in a report released Thursday by the nonprofit Center for Countering Digital Hate.

    The estimated amount is microscopic compared to the more than $200 billion Google generates from ad sales annually. But the report’s data hints at the broad reach pro-life groups can have by placing these advertisements in Google results for common phrases searched for by abortion seekers.

    Using Semrush, an analytics tool, researchers at the CCDH identified “188 fake clinic websites” that placed ads on Google between March 2021 and February of this year. The CCDH estimates that ads for fake clinics were clicked on by users 13 million times during this period.

    Some users searching for “abortion clinics near me” on Google instead found results directing them toward so-called “crisis pregnancy centers” that may try to talk abortion-seekers out of treatment and offer medically unproven abortion pill reversal techniques, according to the report.

    Other Google searches populated by crisis clinic ads included “abortion pill,” “abortion clinic” and “planned parenthood,” the report said, with clinics in states where abortion is legal spending two times as much as those in states with bans.

    In the wake of the Supreme Court overturning Roe v. Wade, Google faced calls from Congressional Democrats to do more to prevent searches for abortion clinics from returning results for misleading ads – as well as calls from Republican lawmakers to do the opposite. The dueling pressure from lawmakers highlighted how central Google can be for women searching for information on the procedure.

    In a statement Thursday, Google said its approach to abortion ads follows local laws and that any advertiser targeting certain keywords or phrases related to abortion must complete a certification confirming whether or not it provides abortion services.

    “We require any organization that wants to advertise to people seeking information about abortion services to be certified and clearly disclose whether they do or do not offer abortions,” a Google spokesperson told CNN. “We do not allow ads promoting abortion reversal treatments and we also prohibit advertisers from misleading people about the services they offer.”

    “We remove or block ads that violate these policies,” the company added.

    Google said it does not allow abortion pill reversal advertisements because the treatment isn’t approved by the FDA. In response to Thursday’s CCDH report, the company told CNN it took “enforcement action” on content violating this policy.

    Google has continued to face scrutiny in recent months for the steps it takes to protect abortion seekers’ location data.

    Nearly a dozen Senate Democrats wrote to Google in May with questions about how it deletes users’ location history when they have visited sensitive locations such as abortion clinics. The letter came after tests performed by The Washington Post and other privacy advocates appeared to show that Google was not quickly or consistently deleting users’ recorded visits to fertility centers or Planned Parenthood clinics.

    Google previously declined to comment on the lawmakers’ letter. Instead, it referred CNN to a company blog post that includes abortion clinics on a list of sensitive locations, but did not explain what it means when it claims the data will be deleted “soon after” a visit.
