ReportWire

Tag: google inc

  • EU officials warn Google and YouTube about Hamas-Israel disinformation and graphic content | CNN Business

    Washington (CNN) — 

    The European Commission sent a warning letter Friday to Google and its subsidiary YouTube over disinformation and graphic content linked to the Hamas-Israel conflict, in the European Union’s latest effort to scrutinize Big Tech’s handling of the war.

    The letter from European Commissioner Thierry Breton, addressed to Google CEO Sundar Pichai and also sent to YouTube CEO Neal Mohan, reminded the company about its content moderation obligations under the EU’s Digital Services Act (DSA). Breton shared the letter on X.

    Breton highlighted legal requirements for Google to keep graphic content such as hostage videos away from underage users; to act swiftly when authorities flag content that violates European laws; and to mitigate disinformation.

    “This brings me to a second area of pressing concern: tackling disinformation in the context of elections, a priority which we personally discussed when we met in Brussels in May,” Breton wrote, referencing upcoming elections in a number of EU countries.

    The letter also warned of possible penalties if a future investigation were to find Google (GOOGL) is not complying with the DSA.

    Breton’s warning comes after similar letters he sent this week to X, the platform formerly known as Twitter, as well as Meta and TikTok.

    Unlike some of those previous letters, however, Breton’s letter to Google does not directly suggest the company has spread misleading or illegal content. And where Breton had asked some of Google’s counterparts to respond to his letter within 24 hours, Friday’s letter to Google merely requests a report “in a prompt, accurate and complete manner.”

    In response, YouTube spokeswoman Ivy Choi said the company has been actively working to take offensive videos down.

    “Following the devastating attacks on civilians in Israel and the escalating conflict in Israel and Gaza, our teams have removed thousands of harmful videos, and our systems continue to connect people with high-quality news and information,” Choi said. “Our teams are working around the clock to monitor for harmful footage and remain vigilant to take action quickly across YouTube, including videos, Shorts and livestreams.”

    YouTube previously told CNN its teams have removed thousands of videos since Hamas’ attacks on Israel began, and that it continues to monitor for hate speech, extremism, graphic imagery and other content that violates its policies.

    According to CNN’s own review of the platform, YouTube is also surfacing videos almost entirely from mainstream news organizations in searches related to the war.


  • YouTube to prohibit false claims about cancer treatments under its medical misinformation policy | CNN Business

    New York (CNN) — 

    YouTube announced Tuesday that it will start removing false claims about cancer treatments as part of an ongoing effort to build out its medical misinformation policy.

    Under the updated policy, YouTube will prohibit “content that promotes cancer treatments proven to be harmful or ineffective, or content that discourages viewers from seeking professional medical treatment,” Dr. Garth Graham, head of YouTube Health, said in a blog post Tuesday.

    “This includes content that promotes unproven treatments in place of approved care or as a guaranteed cure, and treatments that have been specifically deemed harmful by health authorities,” he said, such as the misleading claim that patients should “take vitamin C instead of radiation therapy.”

    The update is just one of several steps YouTube has taken in recent years to build out its medical misinformation policy, which also prohibits false claims about vaccines and abortions, as well as content that promotes or glorifies eating disorders.

    As part of the announcement, YouTube is rolling out a broader updated medical misinformation policy framework that will consider content in three categories: prevention, treatment and denial.

    “To determine if a condition, treatment or substance is in scope of our medical misinformation policies, we’ll evaluate whether it’s associated with a high public health risk, publicly available guidance from health authorities around the world, and whether it’s generally prone to misinformation,” Graham said. He added that YouTube will take action on content that falls into that framework and “contradicts local health authorities or the World Health Organization.”

    Graham said the policy is designed to preserve “the important balance of removing egregiously harmful content while ensuring space for debate and discussion.”

    Cancer treatment fits YouTube’s updated medical misinformation framework because the disease poses a high public health risk and is a topic prone to frequent misinformation, and because there is “stable consensus about safe cancer treatments from local and global health authorities,” Graham said.

    As with many social media policies, however, the challenge often isn’t introducing it but enforcing it. YouTube says its restrictions on cancer treatment misinformation will go into effect on Tuesday and enforcement will ramp up in the coming weeks. The company has previously said it uses both human and automated moderation to review videos and their context.

    YouTube also plans to promote cancer-related content from the Mayo Clinic and other authoritative sources.


  • Google Maps and Waze temporarily disable live traffic data in Israel | CNN Business

    New York (CNN) — 

    Google is temporarily disabling live traffic conditions on its mapping service apps, Google Maps and Waze, in Israel, the tech company confirmed Monday, as the country prepares for a potential ground invasion into Gaza.

    “As we have done previously in conflict situations and in response to the evolving situation in the region, we have temporarily disabled the ability to see live traffic conditions and busyness information out of consideration for the safety of local communities,” a Google Maps spokesperson said.

    A Google spokesperson said the company consulted several sources that included regional and local authorities to make the assessment.

    However, Google did not say whether the tools would be disabled in Israel, Gaza or both, or whether the action was taken at the request of the Israel Defense Forces. CNN has reached out to the IDF for comment.

    The website Geektime first reported the news.

    Google made a similar move last year after Russia invaded Ukraine, Reuters reported. In Ukraine, Google temporarily disabled real-time vehicle data.

    Google Maps added that “anyone navigating to a specific place will still get routes and ETAs that take current traffic conditions into account.”

    Google acquired Israeli mapping service Waze in 2013 and merged both product teams in 2022.


  • Willi Ninja, ‘Godfather of Voguing,’ celebrated in Google Doodle | CNN

    (CNN) — 

    Google is honoring the late dancer, choreographer and LGBTQ+ icon Willi Ninja with a Google Doodle.

    Ninja, who was featured in the documentary “Paris is Burning,” rose to fame in the 1980s and 1990s and created “The Iconic House of Ninja,” a social community and dance troupe that lives on today.

    A star in the Harlem ballroom scene credited as the “Godfather of Voguing,” Ninja was born William Roscoe Leake in 1961 and grew up in Flushing, Queens, where his mom took him to ballet performances at the Apollo Theater in New York.

    He would go on to invent his own style of dancing.

    “Paris is Burning,” a film about LGBTQ+ culture in America in the 1980s, premiered on June 9, 1990. The film featured Ninja prominently and is recognized as “culturally, historically, or aesthetically significant” by the Library of Congress. It was selected for preservation in the United States National Film Registry in 2016.

    Ninja’s dancing inspired and influenced artists like Madonna. He danced in two music videos for Janet Jackson and was a runway model for designer Jean-Paul Gaultier.

    An advocate for HIV/AIDS education and prevention, Ninja died in 2006 at age 45.

    His Google Doodle celebrates his dancing style with a 48-second animated clip.


  • Why foldable phones are so incredibly expensive | CNN Business

    (CNN) — 

    Chris Pantons is what you’d call a Google Pixel super fan. The Knoxville, Tennessee native loves the software, the camera, the virtual assistant, all of it. He even credits the phone’s car crash detection tool with saving his life a few years ago when he was in an accident.

    “I’ve owned practically every Pixel device,” said Pantons, 33, who has posted hundreds of YouTube videos about Pixel phones and other tech products. “I’ve influenced so much of my family to switch to Pixel – my brother and sister-in-law, mom and wife … and I had a coworker switch, too.”

    But this is the first year he won’t be upgrading to Pixel’s latest offering: the Pixel Fold, a foldable smartphone that starts at $1,799. “I’d love to own it,” he told CNN. “I don’t have the finances to do so. … [That] price for a first generation device is astronomical.”

    Earlier this month, Google became the latest tech company to unveil a foldable smartphone, with the promise of giving customers all the features they’ve come to expect in a phone, paired with a tablet-sized display. But Pantons wasn’t the only one who felt sticker shock.

    “My first car was $1800,” one user wrote on Twitter. “Google [lost] their minds.” Another user said they’ve been saving up, knowing the price for a Pixel foldable phone would inevitably be high once announced.

    “The fact you can buy a new Pixel, Pixel tablet and a Pixel Watch for less than the Fold and have various devices for use cases is a better value,” said Pantons.

    The pricing problem isn’t unique to Google. When Samsung launched the Galaxy Z Fold in 2020, it cost $1,999. It has come down in price somewhat, but the latest version of the Z Fold still starts at $1,799 – the same as the Pixel Fold. Even foldable models from budget brands retail for well over $1,000 in markets abroad.

    By comparison, the flagship iPhone starts at $799, less than half the price of the Pixel Fold. And classic ’90s-style prepaid flip phones, which are suddenly trendy again, can cost as little as $20.

    The higher price point is one of the factors limiting the size of the foldable market. Samsung currently dominates the category, followed by others including Motorola, Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to reach just shy of 2% in 2022.

    Lowering the price could help boost traction, but manufacturers may struggle to do that anytime soon.

    The flexible screen found on foldable phones is one of the biggest reasons why they cost so much.

    Flexible displays require more engineering and are more expensive to manufacture than traditional displays. And the Google Pixel Fold has two: a 5.8-inch cover display and a 7.6-inch inner display.

    Other components unique to foldables also drive up the cost. The Pixel Fold, for example, moves on a custom-built 180-degree hinge. The mechanism is moved entirely out from under the display to improve dust resistance and decrease the device’s overall thickness, according to the company. This also requires complex engineering and costly manufacturing.

    “Expense is mainly to do with the high costs of components, notably the folding displays and hinge technology, which in many cases is a proprietary hinge design,” said David McQueen, research director at ABI Research. “So until volume grows enough that vendors can get scale, prices won’t be falling any time soon.”

    Foldable smartphones are still in their infancy. As a result, much of the research and development, and the costs associated with it, still lie ahead for manufacturers as they fine-tune their products.

    “Companies often try to recoup their investment with a high price tag,” said Nabila Popal, research director at market research firm IDC.

    Foldable phones also remain a niche product for now, and manufacturers are targeting the price for the people willing to buy them early to help offset costs.

    The future for foldables remains uncertain. Most apps are still not optimized for foldable devices; Google’s chief rival, Apple, has yet to embrace the option; and splurging for a first-generation device with a lot of unknowns is a risky bet for anyone.

    Foldable phones are also notoriously fragile. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen. Repairs for foldable smartphones can be costly too.

    But Google’s decision to embrace the option may help persuade more consumers to take a chance.

    Sean Milfort, a PhD student at Northcentral University, said he pre-ordered the Pixel Fold because he always wanted a foldable smartphone and didn’t want to leave the Pixel brand.

    “I’m a big fan of the Pixel line and have loved the idea of a foldable,” he said. “The fact that it is coming from Google – because they make Android – gives me hope that they will be really investing in that larger form factor device with Android.”

    But holdouts like Pantons may wait on the chance it could come down in price.

    “If a trade-in deal later on becomes available or it goes on sale then maybe then [I’ll buy one],” he said.


  • When you’re talking to a chatbot, who’s listening? | CNN Business

    New York (CNN) — 

    As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers.

    Some companies, including JPMorgan Chase (JPM), have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software.

    It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history.

    The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

    And just last week, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.

    “The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”

    With ChatGPT, which launched to the public in late November, users can generate essays, stories and song lyrics simply by typing up prompts.

    Google and Microsoft have since rolled out AI tools as well, which work the same way and are powered by large language models that are trained on vast troves of online data.

    When users input information into these tools, McCreary said, “You don’t know how it’s then going to be used.” That raises particularly high concerns for companies. As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AIs is just going to increase.”

    Steve Mills, the chief AI ethics officer at Boston Consulting Group, similarly told CNN that the biggest privacy concern that most companies have around these tools is the “inadvertent disclosure of sensitive information.”

    “You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

    If the data people input is being used to further train these AI tools, as many of the companies behind the tools have stated, then you have “lost control of that data, and somebody else has it,” Mills added.

    OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects all kinds of personal information from the people that use its services. It says it may use this information to improve or analyze its services, to conduct research, to communicate with users, and to develop new programs and services, among other things.

    The privacy policy states it may provide personal information to third parties without further notice to the user, unless required by law. If the more than 2,000-word privacy policy seems a little opaque, that’s likely because this has pretty much become the industry norm in the internet age. OpenAI also has a separate Terms of Use document, which puts most of the onus on the user to take appropriate measures when engaging with its tools.

    OpenAI also published a new blog post Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people,” the blog post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

    Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”

    “These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”

    Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account.” Bard users can also review their prompts or delete their Bard conversations. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.

    “We’re still sort of learning exactly how all this works,” Mills told CNN. “You just don’t fully know how information you put in, if it is used to retrain these models, how it manifests as outputs at some point, or if it does.”

    Mills added that sometimes users and developers don’t even realize the privacy risks that lurk with new technologies until it’s too late. An example he cited was early autocomplete features, some of which ended up having some unintended consequences like completing a social security number that a user began typing in — often to the alarm and surprise of the user.

    Ultimately, Mills said, “My view of it right now, is you should not put anything into these tools you don’t want to assume is going to be shared with others.”


  • Amazon, Microsoft could face UK antitrust probe over cloud services | CNN Business

    London (CNN) — 

    Britain’s media and communications regulator Ofcom says it has “significant concerns” that Amazon and Microsoft could be harming competition in the market for cloud services.

    In a statement Wednesday, Ofcom said it was “proposing to refer” the cloud services market to the Competition and Markets Authority, the UK antitrust regulator, for further investigation.

    Ofcom’s own probe, which it launched in October, had so far uncovered some “concerning practices, including by some of the biggest tech firms in the world,” said Fergal Farragher, the Ofcom director leading the investigation.

    “High barriers to switching are already harming competition in what is a fast-growing market. We think more in-depth scrutiny is needed, to make sure it’s working well for people and businesses who rely on these services,” Farragher added.

    The Competition and Markets Authority said it received Ofcom’s provisional findings Wednesday and was reviewing them. “We stand ready to carry out a market investigation into this area, should Ofcom determine it is required,” a spokesperson said.

    The Ofcom announcement comes days after Google Cloud accused Microsoft (MSFT) of anti-competitive cloud computing practices. In an interview with Reuters, Google Cloud Vice President Amit Zavery said the company had raised the issue with antitrust agencies and urged EU antitrust regulators to take a closer look.

    Cloud services are delivered to businesses and consumers over the internet and include applications such as Gmail and Dropbox.

    Europe’s Digital Markets Act, which will apply from May, aims to enhance competition in online services. Britain’s own Digital Markets, Competition and Consumer Bill is expected to come before lawmakers this year.

    According to Ofcom, Amazon (AMZN) Web Services and Microsoft’s Azure have a combined UK market share of 60%-70% in cloud services. Google (GOOGL) is their closest competitor with 5%-10%.

    Ofcom said the three companies charged high “egress fees” for transferring data out of a cloud, which discourages customers from switching providers or using multiple providers to best serve their needs.

    It also flagged technical restrictions imposed by the leading providers that prevent some of the services of one provider working effectively with cloud services from other firms, and said that fee discounts were structured to incentivize customers to use a single provider for all or most of their cloud needs.

    There were indications that these market features were already causing harm, “with evidence of cloud customers facing significant price increases when they come to renew their contracts,” Ofcom said.

    A Microsoft spokesperson said the company would continue to engage with Ofcom on its investigation. “We remain committed to ensuring the UK cloud industry stays highly competitive,” the spokesperson added. CNN has also contacted Amazon and Google.

    Ofcom has invited feedback on its interim findings and will publish a final decision by October 5 on whether to refer the cloud services market to the Competition and Markets Authority.

    “Making a market investigation reference would be a significant step for Ofcom to take. Our proposal reflects the importance of cloud computing to UK consumers and businesses,” it said.


  • GM plans to phase out Apple CarPlay in EVs, with Google’s help | CNN Business

    General Motors plans to phase out the widely used Apple (AAPL) CarPlay and Android Auto technologies that allow drivers to bypass a vehicle’s infotainment system, shifting instead to built-in infotainment systems developed with Google (GOOG) for future electric vehicles.

    Apple CarPlay and Android Auto systems allow users to mirror their smartphone screens in a vehicle’s dashboard display.

    GM’s decision to stop offering those systems in future electric vehicles, starting with the 2024 Chevrolet Blazer, could help the automaker capture more data on how consumers drive and charge EVs.

    GM is designing the on-board navigation and infotainment systems for future EVs in partnership with Alphabet’s Google.

    The decision to phase out CarPlay smartphone projection technology is a setback for Apple in the competition with Google to capture more real estate on vehicle dashboards in North America. GM’s Chevrolet brand in the past boasted of offering more models with CarPlay or Android Auto than any other brand.

    GM has been working with Google since 2019 to develop the software foundations for infotainment systems that will be more tightly integrated with other vehicle systems such as GM’s Super Cruise driver assistant. The automaker is accelerating a strategy for its EVs to be platforms for digital subscription services.

    By 2035, GM’s goal is to phase out production of new combustion light-duty vehicles.

    GM would benefit from focusing engineers and investment on one approach to more tightly connecting in-vehicle infotainment and navigation with features such as assisted driving, Edward Kummer, GM chief digital officer, and Mike Hichme, executive director of digital cockpit experience, said in an interview.

    “We have a lot of new driver assistance features coming that are more tightly coupled with navigation,” Hichme told Reuters. “We don’t want to design these features in a way that are dependent on a person having a cellphone.”

    Buyers of GM EVs with the new systems will get access to Google Maps and Google Assistant, a voice command system, at no extra cost for eight years, GM said. GM said the future infotainment systems will offer applications such as Spotify’s music service, Audible and other services that many drivers now access via smartphones.

    “We do believe there are subscription revenue opportunities for us,” Kummer said. GM Chief Executive Mary Barra is aiming for $20 billion to $25 billion in annual revenue from subscriptions by 2030.

    GM plans to continue offering Apple CarPlay and Android Auto mirroring systems in its combustion models. Owners of vehicles equipped with the mirroring technologies will still be able to use the systems, GM said.

    Drivers also will still be able to listen to music or make phone calls on iPhones or Android smartphones using Bluetooth wireless connectivity, GM said.


  • Google begins rolling out its ChatGPT rival | CNN Business

    (CNN) — 

    Google is opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT.

    Starting Tuesday, users can join a waitlist to gain access to Bard, which promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    A company representative told CNN it will be a separate, complementary experience to Google Search, and users can also visit Search to check its responses or sources. Google said in a blog post it plans to “thoughtfully” add large language models to search “in a deeper way” at a later time.

    Google said it will start rolling out the tool in the United States and United Kingdom, and plans to expand it to more countries and languages in the future.

    The news comes as Google, Microsoft, Facebook and other tech companies race to develop and deploy AI-powered tools in the wake of the recent, viral success of ChatGPT. Last week, Google announced it is also bringing AI to its productivity tools, including Gmail, Sheets and Docs. Shortly after, Microsoft announced a similar AI upgrade to its productivity tools.

    Google unveiled Bard last month in a demo that was later called out for providing an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts. The immense attention on ChatGPT reportedly prompted Google’s management to declare a “code red” situation for its search business.

    But Bard’s blunder highlighted the challenge Google and other companies face with integrating the technology into their core products. Large language models can present a handful of issues, such as perpetuating biases, being factually incorrect and responding in an aggressive manner.

    Google acknowledged in the blog post Tuesday that AI tools are “not without their faults.” The company said it continues to use human feedback to improve its systems and add new “guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”

    Last week, OpenAI released GPT-4, the next-generation version of the technology that powers ChatGPT and Microsoft’s new Bing browser, with similar safeguards. In the first day after it was unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.


  • Google suspends Chinese shopping app Pinduoduo over malware | CNN Business

    Hong Kong (CNN) — 

    Google has suspended Pinduoduo, a popular Chinese budget shopping app, from its Play Store after finding malware in versions of the app.

    In a Tuesday statement, Google said versions of the app that are not in the Play Store have been found to contain malware.

    “We have suspended the Play version of the app for security concerns while we continue our investigation,” a Google spokesperson said.

    Google has also enforced Google Play Protect, which scans apps installed on Android phones for harmful behavior, against the allegedly malicious apps, according to the statement.

    “Google Play Protect enforcement has been set to block installation attempts of these identified malicious apps. Users that have malicious versions of the app downloaded to their devices are warned and prompted to uninstall the app,” the spokesperson said.

    In a statement to CNN, Pinduoduo said it was informed by Google Play on Tuesday morning that its app had been “temporarily suspended” because the current version is “not compliant with Google’s Policy.” It said Google Play did not share more details.

    “We are communicating with Google for more information. We have been told that there are several other apps that have been suspended as well,” a Pinduoduo spokesperson said.

    In a later statement Pinduoduo said it strongly rejects “the speculation and accusation that Pinduoduo app is malicious just from a generic and non-conclusive response from Google.”

    It reiterated that “there are several apps that have been suspended from Google Play at the same time.”

    CNN has asked Google for information on whether other apps have also been suspended.

    Malware, short for malicious software, refers to any software developed to steal data or damage computer systems and mobile devices. When hidden in apps, it can be used to gain unauthorized access to information on a user’s phone.

    Pinduoduo is one of China’s most popular e-commerce platforms, with approximately 900 million users. It made its name with a group buying business model, allowing people to save money by enlisting friends to buy the same item in bulk.

    Riding on the domestic success of Pinduoduo, its US-listed parent company PDD last year launched Temu, an online shopping platform in the United States.

    Temu, which runs an online superstore for virtually everything — from home goods to apparel to electronics — has quickly become the most downloaded app in the US for both iOS and Android.

    Since its rollout in September, the app had been downloaded 24 million times as of last month, racking up more than 11 million monthly active users, according to Sensor Tower.

    Google did not mention Temu in its statement. The app is still available to download on the Play Store.


  • YouTube restores Donald Trump’s channel | CNN Business



    New York (CNN) —

    YouTube on Friday said it would restore former President Donald Trump’s channel, more than two years after suspending it following the January 6 attack on the US Capitol.

    The move follows similar actions by Twitter and Facebook-parent Meta in recent months, although Trump has yet to resume posting on those platforms. It also comes after Trump announced last fall that he would run for president again in 2024.

    “We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election,” YouTube said in a tweet Friday.

    A representative for Trump did not immediately respond to a request for comment from CNN. The channel restoration was first reported by Axios.

    YouTube initially suspended Trump’s channel after the Capitol riot, saying a video on the channel had violated its policy against inciting violence. Since then, Trump’s account had been blocked from uploading new videos or livestreams.

    YouTube had also disabled comments underneath videos on Trump’s channel, which appear to have been restored on Friday. Immediately after his account was restored, a number of users began posting “welcome back” comments under old videos.

    While YouTube was never Trump’s top social platform, the reactivation of his channel will restore his access to the massive video streaming platform, where his account has more than 2.6 million subscribers.

    As more platforms restore Trump’s account, some are also stressing he continues to face restrictions on what he can post, with the potential to be suspended again.

    YouTube said in its statement that Trump’s “channel will continue to be subject to our policies, just like any other channel on YouTube.” YouTube operates a strike policy under which users can receive escalating suspensions based on the number and severity of their violations.

    Meanwhile, Meta said last month that it had implemented new guardrails on Trump’s account that could result in it being suspended again if he breaks the company’s rules.

    For now, the former president has continued posting only on his own platform, Truth Social, which launched after he was suspended from more mainstream options. Trump on Friday morning posted a series of six videos on Truth Social, including multiple that repeated false claims that the 2020 presidential election was stolen.


  • Google Glass is being discontinued, again | CNN Business



    New York (CNN) —

    Google will no longer sell the latest Enterprise Edition of Google Glass, the company announced this week, effectively killing off an innovative but failed wearable product line from another era that many consumers may have assumed was long gone.

    First unveiled in 2013, Google Glass was initially marketed for a general audience, with the promise of giving people access to a computer on their face rather than having to pull out a phone. But the smartglasses were discontinued in 2015 after beta versions failed to gain traction due to their high price tag, clunky design and concerns about privacy.

    Google then shifted its focus from consumers to enterprise. The first Enterprise Edition of Glass, announced in 2017, was pitched for use in industries such as manufacturing and logistics. The Enterprise Edition 2, released in 2019, was Google’s last attempt at saving the product line. But the $999 device failed to catch on.

    “Thank you for over a decade of innovation and partnership,” Google wrote on its FAQ page announcing the decision. The company will continue to support the phased-out Enterprise Edition until September.

    Google did not respond to CNN’s request for comment.

    Google’s decision to discontinue the product comes amid cost cuts across the company. Like many of its peers, Google has recently announced plans to lay off thousands in response to recession fears and shifting pandemic demand for digital products.

    Still, the dream of Google Glass lives on. Snapchat’s parent company sells Spectacles, another set of smartglasses that has struggled over the years to gain traction. Apple is reportedly working on augmented reality glasses. And even after the setback of Glass, Google said last year it was continuing to test other AR glasses.

    “Augmented reality (AR) is opening up new ways to interact with the world around us,” the company said in a blog post last summer. “It can help us quickly and easily access the information we need — like understanding another language or knowing how best to get from point A to point B.”

    A decade after Google launched Glass with a similarly ambitious objective, the future is still coming into focus.


  • A Japanese YouTube star became a lawmaker last year. Now he’s been fired for never coming to work | CNN




    (CNN) —

    A YouTube star who became a Japanese lawmaker has been stripped of his role after he failed to show up for a single day of work in parliament.

    At a plenary session on Wednesday, Japan’s parliament expelled Yoshikazu Higashitani for his continued absence, the first time it has taken such a step in more than seven decades.

    Higashitani is better known by his online alias GaaSyy, under which he ran a YouTube channel talking about celebrity gossip.

    He was elected to the Upper House of Japan’s parliament in July 2022.

    But he failed to respond to a ‘letter of invitation’ from the Speaker of the Upper House, was absent from the March 8 plenary session and had not attended a single parliamentary session since his election.

    Japanese parliamentary law stipulates that members of parliament must come to the House on the day it is convened.

    At the time of the March 8 plenary session, GaaSyy was out of the country, according to local media.

    In a video, GaaSyy said he was going to Turkey to help in areas affected by the recent devastating earthquake, CNN affiliate TV Asahi reported on March 6.

    GaaSyy was previously asked to apologize for his absence but did not.

    On Tuesday, the disciplinary committee of the Upper House unanimously decided to expel him as a member of parliament, the most severe possible punishment.

    His expulsion is the first in 72 years and only the third ever under the current Japanese constitution.

    GaaSyy joins a member of Japan’s upper chamber who was expelled in 1950 and a member of the lower house who was expelled in 1951.

    Japanese media has reported that GaaSyy refused to attend parliament because he feared being arrested if he returned to his country.

    He’s being sued for defamation by several celebrities over the content of some of his YouTube videos.

    The Tokyo Metropolitan Police have also asked GaaSyy to attend a voluntary interview over at least one celebrity exposé video in particular.


  • ‘It’s all a lie’: Russians are trapped in Putin’s parallel universe. But some want out | CNN




    (CNN) —

    One year ago, when Russia launched its all-out invasion of Ukraine and began Europe’s biggest land war since 1945, it waged another battle at home – intensifying its information blockade in an effort to control the hearts and minds of its own citizens.

    Draconian new censorship laws targeted any media still operating outside the controls of the Kremlin and most independent journalists left the country. A digital Iron Curtain was reinforced, shutting Russians off from Western news and social media sites.

    And as authorities rounded up thousands in a crackdown on anti-war protests, a culture of fear descended on Russian cities and towns that prevents many people from sharing their true thoughts on the war in public.

    One year on, that grip on information remains tight – and support for the conflict seemingly high – but cracks have started to show.

    Some Russians are tuning out the relentless jingoism on Kremlin-backed airwaves. Tech-savvy internet users skirt state restrictions to access dispatches and pictures from the frontlines. And, as Russia turns to mobilization to boost its stuttering campaign, it is struggling to contain the personal impact that one year of war is having on its citizens.

    “In the beginning I was supporting it,” Natalya, a 53-year-old Moscow resident, told CNN of what the Kremlin and most Russians euphemistically call a “special military operation.” “But now I am completely against it.”

    “What made me change my opinion?” she contemplated aloud. “First, my son is of mobilization age, and I fear for him. And secondly, I have very many friends there, in Ukraine, and I talk to them. That is why I am against it.”

    CNN is not using the full names of individuals who were critical of the Kremlin. Public criticism of the war in Ukraine or statements that discredit Russia’s military can potentially mean a fine or a prison sentence.

    For Natalya and many of her compatriots, the endless, personal grind of war casts Russian propaganda in a different light. And for those hoping to push the tide of public opinion against Putin, that creates an opening.

    “I do not trust our TV,” she said. “I cannot be certain they are not telling the truth, I just don’t know.

    “But I have my doubts,” she added. “I think, probably, they’re not.”

    Natalya is not the only Russian to turn against the conflict, but she appears to be in the minority.

    Gauging public opinion is notoriously difficult in a country where independent pollsters are targeted by the government, and many of the 146 million citizens are reluctant to publicly condemn President Vladimir Putin. But according to the Levada Center, a non-governmental polling organization, support for the war among Russians dipped by only six percentage points from March to November last year, to 74%.

    In many respects, that is unsurprising. There is little room for dissenting voices on Russian airwaves; the propaganda beamed from state-controlled TV stations since the onset of war has at times attracted derision around the world, so overblown are their more fanatical presenters and pundits.

    In the days leading up to Friday’s one-year anniversary of war – according to BBC Monitoring’s Francis Scarr, who analyzes Russian media daily – a Russian MP told audiences on state-owned TV channel Russia-1 that “if Kyiv needs to lie in ruins for our flag to fly above it, then so be it!”; radio presenter Sergey Mardan proclaimed: “There’s only one peace formula for Ukraine: the liquidation of Ukraine as a state.”

    And, in a farfetched statement that encapsulates the alternate reality in which state TV channels exist, another pro-Russian former lawmaker claimed of Moscow’s war progress: “Everything is going to plan and everything is under control.”

    Russian state TV presents a picture that is worlds away from the realities of the battleground. But it has won over some Russians who once held concerns about the war.

    Such programming typically appeals to a select group of older, more conservative Russians who pine for the days of the Soviet Union – though its reach spans generations, and it has claimed some converts.

    “My opinion on Ukraine has changed,” said Ekaterina, 37, who turns to popular Russian news program “60 Minutes” after getting home from work. “At first my feelings were: what is the point of this war? Why did they take the decision to start it? It makes the lives of the people here in Russia much worse!”

    The conflict has taken a personal toll on her. “My life has deteriorated a lot in this year. Thankfully, no one close to me has been mobilized. But I lost my job. And I see radical changes around me everywhere,” she said.

    And yet, Ekaterina’s initial opposition to the invasion has disappeared. “I arrived at the understanding that this special military operation was inevitable,” she said. “It would have come to this no matter what. And had we not acted first, war would have been unleashed against us,” she added, mirroring the false claims of victimhood at the hands of the West that state media relentlessly communicate.

    Ekaterina, 37 (top) and Daniil, 20, follow news on the war from Russian state TV. But they have reached different conclusions on how closely to trust the output.

    Reversals like hers will be welcomed in the Kremlin as vindication of its notorious and draconian grip on media reporting.

    “I trust the news there completely. Yes, they all belong to the state, (but) why should I not trust them?” Yuliya, a 40-year-old HR director at a marketing firm, told CNN. “I think (the war) is succeeding. Perhaps it is taking longer than one could wish for. But I think it is successful,” said Yuliya, who said her main source of news is the state-owned Channel One.

    Around two-thirds of Russians rely primarily on television for their news, according to the Levada Center, a higher proportion than in most Western countries.

    But the sentiment of Yuliya and Ekaterina is far from universal. Even among those who generally support the war, Kremlin-controlled TV remains far removed from the reality many Russians live in.

    “Everything I hear on state channels I split in half. I don’t trust anyone (entirely),” 55-year-old accountant Tatyana said. “One needs to analyze everything … because certain things they are omitting, (or) not saying,” said Leonid, a 58-year-old engineer.

    Several people whom CNN spoke with in Moscow this month relayed similar feelings, stressing that they engaged with state-controlled TV but treated it with skepticism. And many reach different views on Ukraine.

    “I think you can trust them all only to an extent. The state channels sometimes reflect the truth, but on other occasions they say things just to calm people down,” 20-year-old Daniil said.

    Vocal minorities on each side of the conflict exist in Russia, and some have cut off friendships or left the country as a result. But sociologists tracking Russian opinion say most people in the country fall between those two extremes.

    “Quite often we are only talking about these high numbers of support (for the war),” Denis Volkov, the director of the Moscow-based Levada Center, said. “But it’s not that all these people are happy about it. They support their side, (but) would rather have it finished and fighting stopped.”

    This group of people tends to pay less attention to the war, according to Natalia Savelyeva, a Future Russia Fellow at the Center for European Policy Analysis (CEPA) who has interviewed hundreds of Russians since the invasion to trace the levels of public support for the conflict. “We call them ‘doubters,’” she said.

    “A lot of doubters don’t go very deep into the news … many of them don’t believe that Russian soldiers kill Ukrainians – they repeat this narrative they see on TV,” she said.

    The center ground also includes many Russians who have developed concerns about the war. But if the Kremlin cannot expect all-out support across its populace, sociologists say it can at least rely on apathy.

    Putin addresses a rally in Red Square marking the illegal Russian annexation of four regions of Ukraine (Luhansk, Donetsk, Kherson and Zaporizhzhia) in September.

    “I try to avoid watching news on the special military operation because I start feeling bad about what’s going on,” Natalya added. “So I don’t watch.”

    She is far from alone. “The major attitude is not to watch (the news) closely, not to discuss it with colleagues or friends. Because what can you do about it?” said Volkov. “Whatever you say, whatever you want, the government will do what they want.”

    That feeling of futility means anti-war protests in Russia are rare and noteworthy, a social contract that suits the Kremlin. “People don’t want to go and protest; first, because it might be dangerous, and second, because they see it as a futile enterprise,” Volkov said.

    “What are we supposed to do? Our opinion means diddly squat,” a woman told CNN in Moscow in January, anonymously discussing the conflict.

    The bulk of the population typically disengages instead. “In general, those people try to distance themselves from what’s going on,” Savelyeva added. “They try to live their lives as though nothing is happening.”

    And a culture of silence, reinforced by heavy-handed authorities, keeps many from sharing skepticism about the conflict. A married couple in the southwestern Russian city of Krasnodar were reportedly arrested in January for professing anti-war sentiments during a private conversation in a restaurant, according to the independent Russian monitoring group OVD-Info.

    “I do have an opinion about the special military operation … it remains the same to this day,” Anna told CNN in Moscow. “I can’t tell you which side I support. I am for truth and justice. Let’s leave it like that,” she said.

    The partial mobilization of Russians has brought the war home for many citizens, leading to cracks in Putin's information Iron Curtain.

    Keeping the war at arm’s length has, however, become more difficult over the course of the past year. Putin’s chaotic partial mobilization order and Russia’s increasing economic isolation have brought the conflict to the homes of Russians, and communications with friends and relatives in Ukraine often paint a different picture of the war than the one reported by state media.

    “I have felt anxious ever since this began. It’s affecting (the) availability of products and prices,” a woman who asked to remain anonymous told CNN last month. “There is a lack of public information. People should be explained things. Everyone is listening to Soloviev,” she said, referring to prominent propagandist Vladimir Soloviev.

    “It would be good if the experts started expressing their real opinions instead of obeying orders, from the government and Putin,” the woman said.

    A film student, who said she hadn’t heard from a friend for two months following his mobilization, added: “I don’t know what’s happened to him. It would be nice if he just responded and said ‘OK, I’m alive.’”

    “I just wish this special military operation never started in the first place – this war – and that human life was really valued,” she said.

    For those working to break through the Kremlin’s information blockade, Russia’s quiet majority is a key target.

    Most Russians see on state media a “perverted picture of Russia battling the possible invasion of their own territory – they don’t see their compatriots dying,” said Kiryl Sukhotski, who oversees Russian-language content at Radio Free Europe/Radio Liberty, the US Congress-funded media outlet that broadcasts in countries where information is controlled by state authorities.

    “That’s where we come in,” Sukhotski said.

    The outlet is one of the most influential platforms bringing uncensored scenes from the Ukrainian frontlines into Russian-speaking homes, primarily through digital platforms still allowed by the Kremlin including YouTube, Telegram and WhatsApp.

    And interest has surged throughout the war, the network says. “We saw traffic spikes after the mobilization, and after the Ukrainian counter-offensives, because people started to understand what (the war) means for their own communities and they couldn’t get it from local media.”

    Current Time, its 24/7 TV and digital network for Russians, saw a two-and-a-half-fold increase in Facebook views, and more than a three-fold rise in YouTube views, in the 10 months following the invasion, RFE/RL told CNN. Last year, QR codes directing smartphone users to the outlet’s website started popping up in Russian cities; RFE/RL believed they had been stuck on lampposts and street signs by anti-war citizens.

    But independent outlets face a challenge reaching beyond internet natives, who tend to be younger and living in cities, and penetrating the media diet of older, poorer and rural Russians, who are typically more conservative and supportive of the war.

    “We need to get to the wider audience in Russia,” Sukhotski said. “We see a lot of people indoctrinated by Russian state propaganda … it will be an uphill battle but this is where we shape our strategy.”

    Reaching Russians at all has not been easy. Most of RFE/RL’s Russia-based staff made a frantic exit from the country after the invasion, following the Kremlin’s crackdown on independent outlets last year, relocating to the network’s headquarters in Prague.

    The same fate befell outlets like BBC Russia and Latvia-based Meduza, which were also targeted by the state.

    A new law made it a crime to disseminate “fake” information about the invasion of Ukraine – a definition decided at the whim of the Kremlin – with a penalty of up to 15 years in prison for anyone convicted. This month, a Russian court sentenced journalist Maria Ponomarenko to six years in prison for a Telegram post that the court said spread supposedly “false information” about a Russian airstrike on a theater in Mariupol, Ukraine, that killed hundreds, state news agency TASS reported.

    “All our staff understand they can’t go back to Russia,” Sukhotski told CNN. “They still have families there. They still have ailing parents there. We have people who were not able to go to their parents’ funerals in the past year.”

    His staff are “still coming to terms with that,” Sukhotski admitted. “They are Russian patriots and they wish Russia well … they see how they can help.”

    Outlets like RFE/RL have openings across the digital landscape, in spite of Russia’s move to ban Twitter, Facebook and other Western platforms last year.

    About a quarter of Russians use VPN services to access blocked sites, according to a Levada Center poll carried out two months after Russia’s invasion.

    Searches for such services on Google spiked to record levels in Russia following the invasion, and have remained at their highest rates in over a decade ever since, the search engine’s tracking data shows.

    YouTube meanwhile remains one of the few major global sites still accessible, thanks to its huge popularity in Russia and its value in spreading Kremlin propaganda videos.

    “YouTube became the television substitute for Russia … the Kremlin fear that if they don’t have YouTube, they won’t be able to control the flow of information to (younger people),” Sukhotski said.

    A billboard displays the face of Specialist Nodar Khydoyan, who is participating in Russia's military action in Ukraine, in central Moscow on February 15, 2023.

    And that allows censored organizations a way in. “I watch YouTube. I watch everything there – I mean everything,” one Moscow resident who passionately opposes the war told CNN, speaking on the condition of anonymity. “These federal channels I never watch,” she said. “I don’t trust a word they say. They lie all the time! You’ve just got to switch on your logic, compare some information and you will see that it’s all a lie.”

    Telegram, meanwhile, has spiked in popularity since the war began, becoming a public square for military bloggers to analyze each day’s events on the battlefield.

    At first, that analysis tended to mirror the Kremlin’s line. But “starting around September, when Ukraine launched their successful counter-offensives, everything started falling apart,” said Olga Lautman, a US-based Senior Fellow at CEPA who studies the Kremlin’s internal affairs and propaganda tactics. “I’ve never seen anything like it,” she said.

    Scores of hawkish bloggers, some of whom boast hundreds of thousands of followers, have strayed angrily from the Kremlin’s line in recent months, lambasting its military tactics and publicly losing faith in the armed forces’ high command.

    This month, a debacle in Vuhledar that saw Russian tanks veer wildly into minefields became the latest episode to expose those fissures. Igor Girkin, the former defense minister of the Moscow-backed Donetsk People’s Republic, sometimes known by his nom de guerre Igor Strelkov and now a strident critic of the campaign, said Russian troops “were shot like turkeys at a shooting range.” In another post, he called Russian forces “morons.” Several Russian commentators called for the dismissal of Lieutenant General Rustam Muradov, the commander of the Eastern Grouping of Forces.

    “This public fighting is spilling over,” Lautman told CNN. “Russia has lost control of the narrative … it has normally relied on having a smooth propaganda machine and that no longer exists.”

    One year into an invasion that most Russians initially thought would last days, creaks in the Kremlin’s control of information are showing.

    The impact of those fractures remains unclear. For now, Putin can rely on a citizenry that is generally either supportive of the conflict or too fatigued to proclaim its opposition.

    But some onlookers believe the pendulum of public opinion is slowly swinging away from the Kremlin.

    “One family doesn’t know of another family who hasn’t suffered a loss in Ukraine,” Lautman said. “Russians do support the conflict because they do have an imperialistic ambition. But now it is knocking on their door, and you’re starting to see a shift.”


  • DOJ seeks court sanctions against Google over ‘intentional destruction’ of chat logs | CNN Business



    Washington (CNN) —

    Google should face court sanctions over “intentional and repeated destruction” of company chat logs that the US government expected to use in its antitrust case targeting Google’s search business, the Justice Department said Thursday.

    Despite Google’s promises to preserve internal communications relevant to the suit, for years the company maintained a policy of deleting certain employee chats automatically after 24 hours, DOJ said in a filing in District of Columbia federal court.

    The practice has harmed the US government’s case against the tech giant, DOJ alleged.

    “Google’s daily destruction of written records prejudiced the United States by depriving it of a rich source of candid discussions between Google’s executives, including likely trial witnesses,” the filing said.

    “We strongly refute the DOJ’s claims,” Google (GOOGL) said in a statement. “Our teams have conscientiously worked for years to respond to inquiries and litigation. In fact, we have produced over 4 million documents in this case alone, and millions more to regulators around the world.”

    The federal government’s call for sanctions adds to the pressure Google faces as it battles antitrust suits on multiple fronts, and highlights a rare move by prosecutors.

    Through a setting in its chat software, Google employees can save chat history for up to 18 months — but only if the setting is manually enabled, the US government said in its filing, adding that Google routinely trained and encouraged employees to discuss sensitive topics over chat messages they knew would be auto-deleted the next day.

    The filing cites several attached exhibits in which Google employees, sensing that a conversation was about to stray into sensitive territory, suggested that the discussion continue on the chat platform, with history turned off.

    The government’s filing follows a similar sanctions motion against Google by Epic Games, maker of the hit video game “Fortnite,” in a separate antitrust case related to Google’s app store. The two sides faced off in an evidentiary hearing last month; on Feb. 15, the judge in the case ordered Google to produce more chat messages.

    Thursday’s DOJ filing also cites the Epic evidentiary hearing, saying that it proved Google destroyed records of at least nine individuals who were each considered potential trial witnesses, and that the federal judge overseeing that case agreed the chats could have contained relevant evidence but that Google “did not systematically preserve those chats.”

    “Google admitted that — for litigations spanning the past five years — it has never preserved all chats for relevant individuals by turning chat history on,” the DOJ filing said.

    It was not until earlier this month that Google agreed to preserve the chats, the filing alleged, after failing to disclose to prosecutors its practice of deleting history-off chats after 24 hours.

    It is not the first time DOJ has tussled with Google over evidence. Last year, in the same case, the agency asked the court to sanction Google for a program known as “Communicate with Care,” in which the company allegedly trained employees to copy lawyers on emails as a way to claim attorney-client privilege on communications that were business sensitive but did not seek legal advice and did not merit confidentiality.

    While Judge Amit Mehta declined to issue sanctions at the time, he ordered that all of the emails in question be re-reviewed.


  • What is the future of the internet? Don’t ask the Supreme Court | CNN Politics




    (CNN) —

    Nine justices set out Tuesday to determine what the future of the internet would look like if the Supreme Court were to narrow the scope of a law that some believe created the age of modern social media.

    After nearly three hours of arguments, it was clear that the justices had no earthly idea.

    That hesitancy, coupled with the fact that the justices were wading into this territory for the first time, suggests the court, in the case at hand, is not likely to issue a sweeping decision with unknown ramifications in one of the most closely watched disputes of the term.

    Tech companies big and small have been following the case, fearful that the justices could reshape how the sites recommend and moderate content going forward and render websites vulnerable to dozens of lawsuits, threatening their very existence.

    The case before the justices was initially brought by the family of Nohemi Gonzalez, a US student who was killed in a Paris bistro in 2015 after ISIS terrorists opened fire. Now, her family seeks to hold YouTube, a subsidiary of Google, liable for her death because of the site’s alleged promotion – through algorithms – of terrorist videos.

    The family sued under a federal law called the Antiterrorism Act of 1990, which authorizes such lawsuits for injuries “by reason of an act of international terrorism.”

    Lower courts dismissed the challenge, citing Section 230 of the Communications Decency Act of 1996, the law that has been used for years to provide immunity for websites from what one justice on Tuesday called a “world of lawsuits” stemming from third-party content. The Gonzalez family argues that Section 230 does not protect Google from liability when it comes to targeted recommendations.

    Oral arguments drifted into a maze of issues, raising concerns about trending algorithms, thumbnail pop-ups, artificial intelligence, emojis, endorsements and even Yelp restaurant reviews. But at the end of the day, the justices seemed deeply frustrated with the scope of the arguments before them and unclear of the road ahead.


    A lawyer representing the plaintiffs challenging the law repeatedly failed, for instance, to offer substantial limiting principles for an argument that could trigger a deluge of lawsuits against powerful sites such as Google or Twitter, or threaten the very survival of smaller sites. And some justices pushed back on the “sky is falling” attitude put forward by an advocate for Google.

    On several occasions, the justices said they were confused by the arguments before them – a sign that they may find a way to dodge weighing in on the merits or send the case back to the lower courts for more deliberations. At the very least they seemed spooked enough to tread carefully.

    “I’m afraid I’m completely confused by whatever argument you’re making at the present time,” Justice Samuel Alito said early on. “So I guess I’m thoroughly confused,” Justice Ketanji Brown Jackson said at another point. “I’m still confused,” Justice Clarence Thomas said halfway through arguments.

    Justice Elena Kagan even suggested that Congress step in. “I mean, we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” she said to laughter.

    In court, Eric Schnapper, a lawyer for the family, repeatedly pushed broad arguments that could affect other areas of third-party content.

    Yet even Thomas, who has expressed reservations about the scope of Section 230 before, seemed skeptical. He sought clarification from Schnapper on how one might distinguish between algorithms that “present cooking videos to people who are interested in cooking and ISIS videos to people interested in ISIS.”

    Alito asked whether Google might have been simply organizing information, instead of recommending any kind of content.

    “I don’t know where you’re drawing the line,” Alito said.

    Chief Justice John Roberts tried to make an analogy with a bookseller. He suggested that Google recommending certain information is no different than a bookseller sending a reader to a table of books with related content.

    At one point Kagan suggested that Schnapper was trying to gut the entire statute: “Does your position send us down the road such that 230 can’t mean anything at all?” she asked.

    When Lisa Blatt, a lawyer for Google, stood up she warned the justices that Section 230 “created today’s internet” because “Congress made that choice to stop lawsuits from stifling the internet in its infancy.”

    “Exposing websites to liability for implicitly recommending third-party content defies the text [of 230] and threatens today’s internet,” she added.

    In the end, Schnapper seemed to speak for the court when he said that “it’s hard to do this in the abstract.”


  • Takeaways from the Supreme Court’s hearing in blockbuster internet speech case | CNN Business





    CNN
     — 

    Supreme Court justices appeared broadly concerned Tuesday about the potential unintended consequences of allowing websites to be sued for their automatic recommendations of user content, highlighting the challenges facing attorneys who want to hold Google accountable for suggesting YouTube videos created by terrorist groups.

    For nearly three hours on Tuesday, the nine justices peppered attorneys representing Google, the US government and the family of Nohemi Gonzalez, an American student killed in a 2015 ISIS attack, with questions about how the court could design a ruling that exposes harmful content recommendations to liability while still protecting innocuous ones.

    How – or if – the court draws that line could have significant implications for the way websites choose to rank, display and promote content to their users as they seek to avoid a litigation minefield.

    The attorney for the Gonzalez family argued that narrowing Section 230 of the Communications Decency Act – the federal law protecting websites’ right to moderate their platforms as they see fit – would not lead to sweeping consequences for the internet. But both the Court’s liberals and conservatives worried about the impact of such a decision on everything from “pilaf [recipes] from Uzbekistan” to individual users of YouTube, Twitter and other social media platforms.

    A central concern for the justices was the wave of lawsuits that could follow if the court rules against Google.

    “Lawsuits will be nonstop,” Justice Brett Kavanaugh said at one point.

    But Eric Schnapper, representing the plaintiffs, argued that a ruling for Gonzalez would not have far-reaching effects because even if websites could face new liability as a result of the ruling, most suits would likely be thrown out anyway.

    “The implications are limited,” Schnapper said, “because the kinds of circumstance in which a recommendation would be actionable are limited.”

    Later, Justice Elena Kagan warned that narrowing Section 230 could lead to a wave of lawsuits, even if many of them would eventually be thrown out, in a line of questioning with US Deputy Solicitor General Malcolm Stewart.

    “You are creating a world of lawsuits,” Kagan said. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit.”

    Chief Justice John Roberts mused that under a narrowed version of Section 230, terrorism-related cases might only be a small share of a much wider range of future lawsuits against websites alleging antitrust violations, discrimination, defamation and infliction of emotional distress, just to name a few.

    “I wouldn’t necessarily agree with ‘there would be lots of lawsuits’ simply because there are a lot of things to sue about,” Stewart said, “but they would not be suits that have much likelihood of prevailing, especially if the court makes clear that even after there’s a recommendation, the website still can’t be treated as the publisher or speaker of the underlying third party.”

    Multiple justices pushed Schnapper to clarify how the court should treat recommendation algorithms if the same algorithm that promotes an ISIS video to someone interested in terrorism might be just as likely to recommend a pilaf recipe to someone interested in cooking.

    “I’m trying to get you to explain to us how something that is standard on YouTube for virtually anything you have an interest in, suddenly amounts to aiding and abetting [terrorism] because you’re [viewing] in the ISIS category,” Justice Clarence Thomas said.

    Schnapper attempted several explanations, including at one point digressing into a hypothetical about the difference between YouTube videos and video thumbnail images, but many of the justices were lost about what he was calling for.

    “I admit I’m completely confused by whatever argument you’re making at the present time,” Justice Samuel Alito said.

    Roberts added: “It may be significant if the algorithm is the same across … the different subject matters, because then they don’t have a focused algorithm with respect to terrorist activities… Then it might be harder for you to say that there’s selection involved for which you can be held responsible.”
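    Roberts’ “same algorithm” point can be made concrete with a toy sketch (the data, field names and scoring rule here are entirely hypothetical; real recommender systems are vastly more complex):

```python
from collections import Counter

def recommend(watch_history, candidates, k=3):
    # Rank candidates by how often their topic appears in the
    # user's watch history. The ranking rule is identical for
    # every topic -- it never inspects what the topic actually is.
    interest = Counter(video["topic"] for video in watch_history)
    return sorted(candidates,
                  key=lambda v: interest[v["topic"]],
                  reverse=True)[:k]

history = [{"topic": "cooking"}, {"topic": "cooking"}, {"topic": "travel"}]
pool = [{"id": 1, "topic": "travel"},
        {"id": 2, "topic": "cooking"},
        {"id": 3, "topic": "gardening"}]
print(recommend(history, pool, k=2))  # the cooking video ranks first
```

    Whether liability should turn on such a subject-blind ranking rule, or only on one tuned to particular content, was precisely the line the justices struggled to draw.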

    One of the few justices focusing on how changes to Section 230 could affect individual internet users was Justice Amy Coney Barrett, who repeatedly asked whether narrowing the law in the ways Schnapper has proposed could put average social media users in legal jeopardy.

    The text of Section 230 explicitly immunizes “users,” and not just social media platforms, from liability for the content posted by third parties. So a change that exposes tech platforms to new lawsuits could also have implications for users, according to several amicus briefs.

    Under Schnapper’s interpretation, could liking, retweeting or saying “check this out” expose individuals to lawsuits that they could not deflect by invoking Section 230?

    Yes, Schnapper acknowledged, because “that’s content you’ve created.”

    Barrett raised the issue again in a question for Justice Department lawyer Stewart. She asked: “So the logic of your position, I think, is that retweets or likes or ‘check this out’ for users, the logic of your position would be that 230 would not protect in that situation either. Correct?”

    Stewart said there was a distinction between an individual user making a conscious decision to amplify content and an algorithm that is making choices on a systemic basis. But Stewart did not provide a clear answer about how he believed changes to Section 230 could affect individual users.

    Tech law experts say an onslaught of defamation litigation is the real threat if Section 230’s protections are weakened, and the justices seemed to agree, posing several questions and hypotheticals that turned on defamation claims.

    “People have focused on the [Antiterrorism Act], because that’s the one point that’s at issue here. But I suspect there will be many, many times more defamation suits,” Chief Justice John Roberts said, while pointing to other types of claims that also may flood the legal system if tech companies no longer had broad Section 230 immunity.

    Justice Samuel Alito posed a scenario for Schnapper in which a competitor of a restaurant created a video making false claims that the restaurant violated the health code, and YouTube refused to take the video down despite knowing it was defamatory.

    Kagan seized on Alito’s hypothetical later in the hearing, asking what happens if a platform recommended the competitor’s false video and called it the greatest video of all time, but didn’t repeat anything about the content of the video.

    “Is the provider on the hook for that defamation?” Kagan asked.

    This story and headline have been updated with developments from Tuesday’s hearing.


  • Two Supreme Court cases this week could upend the entire internet | CNN Business




    Washington
    CNN
     — 

    The Supreme Court is set to hear back-to-back oral arguments this week in two cases that could significantly reshape online speech and content moderation.

    The outcome of the oral arguments, scheduled for Tuesday and Wednesday, could determine whether tech platforms and social media companies can be sued for recommending content to their users or for supporting acts of international terrorism by hosting terrorist content. It marks the Court’s first-ever review of a hot-button federal law that largely protects websites from lawsuits over user-generated content.

    The closely watched cases, known as Gonzalez v. Google and Twitter v. Taamneh, carry significant stakes for the wider internet. An expansion of apps and websites’ legal risk for hosting or promoting content could lead to major changes at sites, including Facebook, Wikipedia and YouTube, to name a few.

    The litigation has produced some of the most intense rhetoric in years from the tech sector about the potential impact on the internet’s future. US lawmakers, civil society groups and more than two dozen states have also jumped into the debate with filings at the Court.

    At the heart of the legal battle is Section 230 of the Communications Decency Act, a nearly 30-year-old federal law that courts have repeatedly said provides broad protections to tech platforms but that has since come under scrutiny amid growing criticism of Big Tech’s content moderation decisions.

    The law has critics on both sides of the aisle. Many Republican officials allege that Section 230 gives social media platforms a license to censor conservative viewpoints. Prominent Democrats, including President Joe Biden, have argued Section 230 prevents tech giants from being held accountable for spreading misinformation and hate speech.

    In recent years, some in Congress have pushed for changes to Section 230 that might expose tech platforms to more liability, along with proposals to amend US antitrust rules and other bills aimed at reining in dominant tech platforms. But those efforts have largely stalled, leaving the Supreme Court as the likeliest source of change in the coming months to how the United States regulates digital services.

    Rulings in the cases are expected by the end of June.

    The case involving Google zeroes in on whether it can be sued because of its subsidiary YouTube’s algorithmic promotion of terrorist videos on its platform.

    According to the plaintiffs in the case — the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris — YouTube’s targeted recommendations violated a US antiterrorism law by helping to radicalize viewers and promote ISIS’s worldview.

    The allegation seeks to carve out content recommendations so that they do not receive protections under Section 230, potentially exposing tech platforms to more liability for how they run their services.

    Google and other tech companies have said that that interpretation of Section 230 would increase the legal risks associated with ranking, sorting and curating online content, a basic feature of the modern internet. Google has claimed that in such a scenario, websites would seek to play it safe by either removing far more content than is necessary, or by giving up on content moderation altogether and allowing even more harmful material on their platforms.

    Friend-of-the-court filings by Craigslist, Microsoft, Yelp and others have suggested that the stakes are not limited to algorithms and could also end up affecting virtually anything on the web that might be construed as making a recommendation. That might mean even average internet users who volunteer as moderators on various sites could face legal risks, according to a filing by Reddit and several volunteer Reddit moderators. Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, the original co-authors of Section 230, argued to the Court that Congress’ intent in passing the law was to give websites broad discretion to moderate content as they saw fit.

    The Biden administration has also weighed in on the case. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.

    The second case, Twitter v. Taamneh, will decide whether social media companies can be sued for aiding and abetting a specific act of international terrorism when the platforms have hosted user content that expresses general support for the group behind the violence without referring to the specific terrorist act in question.

    The plaintiffs in the case — the family of Nawras Alassaf, who was killed in an ISIS attack in Istanbul in 2017 — have alleged that social media companies including Twitter had knowingly aided ISIS in violation of a US antiterrorism law by allowing some of the group’s content to persist on their platforms despite policies intended to limit that type of content.

    Twitter has said that just because ISIS happened to use the company’s platform to promote itself does not constitute Twitter’s “knowing” assistance to the terrorist group, and that in any case the company cannot be held liable under the antiterror law because the content at issue in the case was not specific to the attack that killed Alassaf. The Biden administration, in its brief, has agreed with that view.

    Twitter had also previously argued that it was immune from the suit thanks to Section 230.

    Other tech platforms such as Meta and Google have argued in the case that if the Court finds the tech companies cannot be sued under US antiterrorism law, at least under these circumstances, it would avoid a debate over Section 230 altogether in both cases, because the claims at issue would be tossed out.

    In recent years, however, several Supreme Court justices have shown an active interest in Section 230 and have appeared to invite opportunities to hear cases related to the law. Last year, Supreme Court Justices Samuel Alito, Clarence Thomas and Neil Gorsuch wrote that new state laws, such as one in Texas that would force social media platforms to host content they would rather remove, raise questions of “great importance” about “the power of dominant social media corporations to shape public discussion of the important issues of the day.”

    A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month delayed a decision on whether to hear those cases, asking instead for the Biden administration to submit its views.


  • Microsoft and Google promised to invest in these communities. Now they’re backtracking | CNN Business





    CNN Business
     — 

    When Microsoft President Brad Smith announced in February 2021 that the tech giant had purchased a 90-acre plot of land in Atlanta’s westside, he laid out a bold vision: The company, he said, would invest in the community and put it “on the path toward becoming one of Microsoft’s largest hubs” in the United States.

    The announcement, which was met with enthusiastic coverage in local media, promised the construction of affordable housing, programs to help public school children develop digital skills, support for historically Black colleges and universities, new funding for local nonprofits, and affordable broadband for more people in Atlanta.

    “Our biggest question today is not what Atlanta can do to support Microsoft,” Smith wrote. “It’s what Microsoft can do to support Atlanta.”

    Two years later, Microsoft announced a series of cost-cutting efforts, including eliminating 10,000 jobs, making changes to its hardware portfolio and consolidating leases. As part of those moves, Microsoft put development of its Atlanta campus on pause this month, a spokesperson confirmed to CNN.

    The decision to pause the plans feels like a “broken promise” and caught many residents of the predominantly Black neighborhood where Microsoft planned to build the campus off guard, according to Jasmine Hope, a local resident and chair of her neighborhood planning unit.

    “All the promises of, ‘We’re going to put a grocery store here, we’re going to bring jobs to the area, we’re going to have a pipeline between the schools and Microsoft to create jobs,’ all that seems like it’s out the window,” she told CNN. “But the consequences are still being felt by the neighborhood.”

    A Microsoft spokesperson said the land is not for sale, “and we still aim to set aside a quarter of the 90 acres for community needs.” Microsoft will continue efforts “to create a positive impact in the region and be a contributing community partner,” the spokesperson added.

    As the tech industry boomed in the United States throughout the past decade, cities across the country vied to become tech hubs. State and city officials competed for Silicon Valley giants to bring offices, data centers and warehouses to their communities in hopes of creating jobs and bringing other benefits that cash-strapped local governments might struggle to fund on their own. In perhaps the biggest example of this, 238 communities submitted bids in 2017 to be home to Amazon’s second headquarters, with some offering major tax breaks or even to rename land “city of Amazon.”

    But now, a number of large tech companies are rethinking their costs, after years of seemingly limitless hiring and expansion. The reason: a perfect storm of shifting pandemic demand for online services, rising interest rates and fears of a looming recession. Much of the focus of this tech downturn so far has been on the long list of layoffs, but companies have also teased plans to dramatically reduce real estate expenses across the country.

    Facebook-parent Meta, Microsoft, Salesforce and Snap have each shuttered offices or announced plans to cut back on real estate, according to recent corporate announcements, filings and local news reports. Some tech companies have said they’ll let leases expire or go fully remote. Meta CEO Mark Zuckerberg said his company is “transitioning to desk-sharing for people who already spend most of their time outside the office.”

    The effect of those pullbacks can already be felt across the country, from New York City, where Meta reportedly scaled back its real estate footprint in the Hudson Yards neighborhood, to San Francisco, where some local businesses say they are facing the ripple effects of remote work and multiple tech office closures.

    “Tech had pretty much gained market share to become the top industry leasing office space across the US, and that started back in 2012, 2013,” said Colin Yasukochi, the executive director of the Tech Insights Center at CBRE, a commercial real estate firm. In 2022, however, finance and insurance companies overtook the tech industry for the highest share of US office leases, according to CBRE’s data.

    “Really, over the last couple of quarters, you’ve seen the tech industry decrease its leasing activity pretty significantly,” he added. “That’s really, I think, the biggest impact that you’ve seen regarding these layoffs and austerity measures: the leasing activity pullback by the tech industry.”

    But the impact of that pullback is perhaps most stark in the communities with less robust tech hubs.

    Quarry Yards, on Atlanta’s westside, has been a source of some promise and dashed hopes. In 2017, Georgia officials included the formerly industrial area on a list of sites where Amazon could build its second headquarters, as part of its pitch to the e-commerce giant. Amazon ultimately went with other cities, but four years later, another Seattle tech giant scooped up the land.

    After the purchase, Microsoft described Quarry Yards as a place with “wide, tree-lined streets” but “broken sidewalks.” The area, Microsoft said, is a “food desert with no grocery store, pharmacy or bank.”

    The community, according to Hope, consists of “a lot of elderly, Black neighbors.” These residents, she said, have been worried about gentrification and displacement for years as housing prices and property taxes surge in the metro Atlanta region.


    “Just the announcement of Microsoft coming into town” brought new buyers and developers into the area, she said, exacerbating these longstanding concerns. Data from Zillow indicates average home values in the neighborhood surged at a significantly faster pace between January 2020 and December 2022 than in Atlanta as a whole.

    But residents also had cautious optimism about the benefits Microsoft promised to the community, according to Hope. Now, the community is left with higher prices but none of the promised improvements or economic opportunities. “We’re not going to see any benefits and only deal with the consequences,” she said.

    “It feels like the community is now going to be burdened by this,” she said.

    Hope’s community isn’t alone in confronting the whiplash of Silicon Valley’s real estate pullback. Late last month, the city of Kirkland, Washington, said in a press release that it had been notified by Google that the company will not be proceeding with its proposed redevelopment project that initially aimed to bring a massive new campus to the city.

    In a Kirkland City Council meeting held just last summer, representatives from Google teased a slew of community benefits from the build — including infrastructure improvements, such as the creation of bike lanes and pedestrian trails, as well as a more than $12 million investment in affordable housing. The planning process between Google and the city had been taking place since the fall of 2020.

    “As we continue to shape our future workplace experience, we’re working to ensure our real estate investments meet the current and future needs of our workforce,” Ryan Lamont, a Google spokesperson, told CNN in a statement. “Our campuses are at the heart of our Google community, and we remain committed to our long-term presence in Washington state.”

    Even San Francisco, whose fortunes are tied to Silicon Valley more than any other city, is showing signs of strain from the one-two punch of the shift to remote work and office closures.

    Office vacancy rates in the city hit a record high of 27.6% in the final three months of last year, according to CBRE, compared to the pre-pandemic figure of 3.7%.

    “The previous high was about 20%, after the Dotcom bust,” Yasukochi, of CBRE, told CNN. “We’re at the highest point that our records have shown.”

    The rise of remote and hybrid work had been a major driver in tech giants cutting back on their real estate investments, Yasukochi said. Then came the recent cost-cutting measures.

    Local business owners say they are now feeling the impacts.


    Mark Nagle, the owner of a 21-year-old Irish pub and restaurant in downtown San Francisco called The Chieftain, told CNN he has witnessed a “cascade of closures” of tech and corporate offices in his neighborhood recently — including the shuttering of a Snapchat office just down the street.

    “We’re in a great location normally, we’re downtown,” Nagle said. But now his business is surrounded by several vacant retail spaces and multiple lots that are under construction.

    The number of workers regularly coming into the area has not bounced back since the start of the pandemic, Nagle said, and neither has his business. Nagle said that in addition to workers stopping by for a drink at the end of their days, nearby companies would frequently hold events and meetings at The Chieftain, but that those have also largely dropped off.

    At least six bars and restaurants in a two-block radius of him have shuttered in recent years, he said.

    “You’re making do with less and it’s made the business so much more unpredictable,” he added. “And we’re one of the lucky ones that can keep their doors open.”

    – CNN’s Clare Duffy contributed to this report.


  • The way we search for information online is about to change | CNN Business





    CNN Business
     — 

    An entire generation of internet users has approached search engines the same way for decades: enter a few words into a search box and wait for a page of relevant results to emerge. But that could change soon.

    This week, the companies behind the two biggest US search engines teased radical changes to the way their services operate, powered by new AI technology that allows for more conversational and complex responses. In the process, however, the companies may test both the accuracy of these tools and the willingness of everyday users to embrace and find utility in a very different search experience.

    On Tuesday, Microsoft announced a revamped Bing search engine using the abilities of ChatGPT, the viral AI tool created by OpenAI, a company in which Microsoft recently invested billions of dollars. Bing will not only provide a list of search results, but will also answer questions, chat with users and generate content in response to user queries.

    The next day, Google, the dominant player in the market, held an event to detail how it plans to use similar AI technology to allow its search engine to offer more complex and conversational responses to queries, including providing bullet points ticking off the best times of year to see various constellations and also offering pros and cons for buying an electric vehicle. (Chinese tech giant Baidu also said this week that it would be launching its own ChatGPT-style service, though it did not provide details on whether it will appear as a feature in its search engine.)

    The updates come as the success of OpenAI’s ChatGPT, which can generate shockingly convincing essays and responses to user prompts, has sparked a wave of interest in AI chatbot tools. Multiple tech giants are now racing to deploy similar tools that could transform the way we draft e-mails, write essays and handle other tasks. But the most immediate impact may be on a foundational element of our internet experience: search.

    “Although we are 25 years into search, I dare say that our story has just begun,” said Prabhakar Raghavan, an SVP at Google, at the event Wednesday teasing the new AI features. “We have even more exciting, AI-enabled innovations in the works that will change the way people search, work and play. We’re reinventing what it means to search and the best is yet to come.”

    For those who may not be sure what exactly to do with the new tools, the companies offered some examples, ranging from writing a rhyming poem to helping plan an itinerary for a trip.

    Lian Jye Su, a research director at tech intelligence firm ABI Research, believes consumers and businesses would be happy to embrace a new way to search as long as “it is intuitive, removes more friction, and offers the path of least resistance — akin to the success of smart home voice assistants, like Alexa and Google Assistant.”

    But there is at least one wild card: how much users will be able to trust the AI-powered results.

    According to Google, its Bard chatbot can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge. But the tool, which has yet to be released to the public, is already being called out for a factual error it made during a Google demo: it incorrectly stated that the James Webb Space Telescope took the first pictures of a planet outside of our solar system. A Google spokesperson said the error “highlights the importance of a rigorous testing process.”

    Bard and ChatGPT, which OpenAI released publicly in late November, are built on large language models. These models are trained on vast troves of online data in order to generate compelling responses to user prompts. Experts warn these tools can be unreliable — spreading misinformation, making up responses, giving different answers to the same questions, or presenting sexist and racist biases.

    There is clearly strong interest in this type of AI. The public version of ChatGPT attracted a million users in its first five days last fall and is estimated to have hit 100 million users since. But the trust factor may decide whether that interest will stay, according to Jason Wong, an analyst at market research firm Gartner.

    “Consumers, and even business users, may have fun exploring the new Bing and Bard interfaces for a while, but as the novelty wears off and similar tools appear, then it really comes down to ease of access and accuracy and trust in the responses that will win out,” he said.

    Generative AI systems, which are algorithms that can create new content, are notoriously unreliable. Laura Edelson, a computer scientist and misinformation researcher at New York University, said, “there’s a big difference between an AI sounding authoritative and it actually producing accurate results.”

    While general search optimizes for relevance, according to Edelson, large language models try to achieve a particular style in their response without regard to factual accuracy. “One of those styles is, ‘I am a trustworthy, authoritative source,’” she said.

    On a very basic level, she said, AI systems analyze which words are next to each other, determine how they get associated and identify the patterns that lead them to appear together. But much of the onus remains on the user to fact-check the answers, a process that could prove just as time-consuming for people as the current model of scrolling through links on a page — if not more so.
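The word-association idea Edelson describes can be illustrated with a toy sketch: a bigram model that counts which word follows which in a corpus and samples continuations from those counts. This is a deliberately minimal stand-in, not Bard's or ChatGPT's actual architecture (those use far larger neural networks), but it shows why such systems can sound fluent without any notion of factual accuracy.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus standing in for the "vast troves of online data"
# that real large language models are trained on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which -- the simplest version of learning
# "which words are next to each other" and how they get associated.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

rng = random.Random(0)

def next_word(word):
    """Sample a likely next word from the observed co-occurrence patterns."""
    counts = follow_counts[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# Generate a short continuation: it looks like plausible text,
# but nothing in the process checks whether the result is true.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The output reads like English because the statistics were learned from English, which is exactly the gap Edelson highlights: sounding authoritative and being accurate are different properties.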

    Microsoft and Google executives have acknowledged some of the potential issues with the new AI tools.

    “We know we won’t be able to answer every question every single time,” said Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer. “We also know we’ll make our share of mistakes, so we’ve added a quick feedback button at the top of every search, so you can give us feedback and we can learn.”

    Raghavan, at Google, also emphasized the importance of feedback from internal and external testing to make sure the tool “meets the high bar, our high bar for quality, safety, and groundedness, before we launch more broadly.”

    But even with the concerns, the companies are betting that these tools offer the answer to the future of search.

    – CNN’s Clare Duffy, Catherine Thorbecke and Brian Fung contributed to this story.
