ReportWire

Tag: iab-technology & computing

  • Arkansas sues TikTok, ByteDance and Meta over mental health claims | CNN Business

    Washington CNN —

    The state of Arkansas has sued TikTok, its parent ByteDance, and Facebook-parent Meta over claims the companies’ products are harmful to users, in the latest effort by public officials to take social media companies to court over mental-health and privacy concerns.

    All three lawsuits claim the companies have violated the state’s Deceptive Trade Practices Act, and seek millions, if not billions, in potential fines. The suits were filed in Arkansas state court.

    The complaints come amid mounting pressure in Washington on TikTok for its ties to China and as states have grown more aggressive in suing tech companies broadly, particularly on mental health claims. Suits by school districts or county officials in California, Florida, New Jersey, Pennsylvania and Washington state have targeted multiple social media platforms over addiction allegations.

    The suit against Meta particularly zeroes in on the company’s impact on young users’ mental health, alleging that Meta’s implementation of like buttons, photo tagging, an unending news feed and other features is addictive and “intended to manipulate users’ brains by triggering the release of dopamine.”

    In a statement, Meta’s global head of safety, Antigone Davis, said the company has invested in “technology that finds and removes content related to suicide, self-injury or eating disorders before anyone reports it to us.”

    “We want to reassure every parent that we have their interests at heart in the work we’re doing to provide teens with safe, supportive experiences online,” Davis said in the statement. “These are complex issues, but we will continue working with parents, experts and regulators such as the state attorneys general to develop new tools, features and policies that meet the needs of teens and their families.”

    The remaining two suits, both naming ByteDance and TikTok as defendants, target TikTok’s alleged shortcomings in content moderation and also reiterate claims about TikTok’s alleged threat to US national security.

    The first suit alleges that TikTok has misled users by identifying its app as suitable for teens on app stores despite the “abundant” presence of content showing profanity, substance use and nudity. The suit further alleges that TikTok’s Chinese sister app, Douyin, does not make such content available within China.

    “TikTok poses known risks to young teens that TikTok’s parent company itself finds inappropriate for Chinese users who are the same age,” the complaint said. “Yet TikTok pushes salacious and other mature content to all young U.S. users age 13 and up.”

    The second suit against ByteDance and TikTok accuses the companies of making misleading statements about the reach of Chinese government officials and their purported inability to access TikTok user data. TikTok has migrated US user data to servers operated by the American tech giant Oracle and has established organizational controls intended to prevent unauthorized data access. But, the suit alleges, that does not mean the data is necessarily protected.

    “Neither TikTok’s data storage practices, nor its data security practices, negate the applicability of Chinese law to that data or to the individuals and entities who are subject to Chinese law and have access to that data, or the risk of access by the Chinese Government or Communist Party,” the complaint said.

    The suit also claims TikTok has misrepresented its approach to privacy and security by omitting the potential risks of Chinese government access from its privacy policies and in its statements to app store operators.

    TikTok and ByteDance didn’t immediately respond to a request for comment.

    In a statement announcing the lawsuits, Arkansas Gov. Sarah Huckabee Sanders said the suits reflect a “failed status quo.”

    “We have to hold Big Tech companies accountable for pushing addictive platforms on our kids and exposing them to a world of inappropriate, damaging content,” Sanders said. “These actions are a long time coming. We have watched over the past decade as one social media company after another has exploited our kids for profit and escaped government oversight.”


  • Pentagon investigating alleged classified documents circulating on social media of US and NATO intelligence on Ukraine | CNN Politics

    Washington CNN —

    The Pentagon is investigating what appear to be screenshots of classified US and NATO military information about Ukraine circulating on social media, a Pentagon official told CNN.

    CNN has reviewed some of the images circulating on Twitter and Telegram but is unable to verify if they are authentic or have been doctored. US officials say the documents are real slides, part of a larger daily intelligence deck produced by the Pentagon about the war, but it appears the documents have been edited in some places.

    Pentagon deputy press secretary Sabrina Singh would not weigh in on the documents’ legitimacy but said in a statement that the Defense Department is “aware of the reports of social media posts, and the Department is reviewing the matter.”

    Mykhailo Podolyak, the adviser to the head of the Office of the President of Ukraine, said on his Telegram channel he believes the Russians are behind the purported leak. Podolyak said the documents that were disseminated are inauthentic, have “nothing to do with Ukraine’s real plans” and are based on “a large amount of fictitious information.”

    The emergence of the documents, whether genuine or not, has heightened focus on when the planned Ukrainian counteroffensive will begin and what, if anything, either side knows about the other’s preparations for it.

    One image that has been circulating on Russian Telegram channels and was reviewed by CNN is a photo of a hard copy of a document titled “US, Allied & Partner UAF Combat Power Build.” The document, which is from February and marked as secret, lists the amounts of certain Western weapons systems that Ukraine currently has on hand, estimated delivery of additional systems and the training Ukraine has or is expected to complete on the systems.

    Another is titled “Russia/Ukraine Joint Staff J3/4/5 Daily Update (D+370)” and is listed as secret. J3 refers to the operations directorate of the US military’s joint staff, J4 deals with logistics and engineering, and J5 proposes strategies, plans and policy recommendations. “D+370” refers to the date the document was produced: 370 days after the first day of the Russian invasion.

    A third document is a map, listed as top secret, that shows the status of the conflict as of March 1. The map shows Russian and Ukrainian battalion locations and sizes, as well as total assessed losses on both sides. The casualty numbers on this document are what officials believe were doctored – the Russian losses are actually far higher than the “16,000-17,500 killed in action” listed on the document, officials said.

    The document also says that 61,000-71,500 Ukrainians have been killed in action, a number that officials said also appeared edited to be higher than actual Pentagon estimates.

    A fourth document is a weather projection from February, listed as secret, that assesses where the ground may freeze in Ukraine in a way that would be favorable for vehicle maneuver.

    The New York Times, which first disclosed the Pentagon investigation, reported that some of the images circulating online describe intelligence that could be useful to Russia, such as how quickly the Ukrainians are expending munitions used in US-provided rocket systems.

    Podolyak called the documents “a bluff, dust in your eyes” and said that “if Russia really did receive real scenario preparations, it would hardly make them public.”

    “Russia is looking for any way to seize the information initiative, to try to influence the scenario plans for Ukraine’s counteroffensive,” he said. “To raise doubts, compromise previous ideas and frighten with their ‘awareness.’ But these are just standard elements of the Russian intelligence’s operational game and nothing more. It has nothing to do with Ukraine’s real plans.”

    Podolyak added that Russian troops “will get acquainted” with Ukraine’s real counteroffensive plans “very soon.”

    Asked about the images circulating on Twitter and Telegram, Kremlin spokesperson Dmitry Peskov told CNN in a statement that “we don’t have the slightest doubt about direct or indirect involvement of the United States and NATO in the conflict between Russia and Ukraine.”

    “This level of involvement is rising, is rising gradually,” he said. “We keep our eye on this process. Well, of course, it makes the whole story more complicated, but it cannot influence the final outcome of the special operation.”

    This story has been updated with additional details.


  • Google-parent stock drops on fears it could lose search market share to AI-powered rivals | CNN Business

    CNN —

    Shares of Google-parent Alphabet fell more than 3% in early trading Monday after a report sparked concerns that its core search engine could lose market share to AI-powered rivals, including Microsoft’s Bing.

    Last month, Google employees learned that Samsung was weighing making Bing the default search engine on its devices instead of Google’s search engine, prompting a “panic” inside the company, according to a report from the New York Times, citing internal messages and documents. (CNN has not reviewed the material.)

    In an effort to address the heightened competition, Google is said to be developing a new AI-powered search engine under a project known as “Magi,” according to the Times. The company, which reportedly has about 160 people working on the project, aims to change the way results appear in Google Search and will include an AI chat tool available to answer questions. The project is expected to be unveiled to the public next month, according to the report.

    In a statement sent to CNN, Google spokesperson Lara Levin said the company has been using AI for years to “improve the quality of our results” and “offer entirely new ways to search,” including with a feature rolled out last year that lets users search by combining images and words.

    “We’ve done so in a responsible and helpful way that maintains the high bar we set for delivering quality information,” Levin said. “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new AI-powered features to Search, and will share more details soon.”

    Samsung did not immediately respond to a request for comment.

    Google’s search engine has dominated the market for two decades. But the viral success of ChatGPT, which can generate compelling written responses to user prompts, appeared to put Google on the defensive for the first time in years.

    In March, Google began opening up access to Bard, its new AI chatbot tool that directly competes with ChatGPT and promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    At an event in February, a Google executive also said the company will bring “the magic of generative AI” directly into its core search product and use artificial intelligence to pave the way for the “next frontier of our information products.”

    Microsoft, meanwhile, has invested in and partnered with OpenAI, the company behind ChatGPT, to deploy similar technology in Bing and other productivity tools. Other tech companies, including Meta, Baidu and IBM, as well as a slew of startups, are racing to develop and deploy AI-powered tools.

    But tech companies face risks in embracing this technology, which is known to make mistakes and “hallucinate” responses. That’s particularly true when it comes to search engines, a product that many use to find accurate and reliable information.

    Google was called out after a demo of Bard provided an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

    Microsoft’s Bing AI demo was also called out for several errors, including an apparent failure to differentiate between types of vacuums, and for making up information about certain products.

    In an interview with 60 Minutes that aired on Sunday, Google and Alphabet CEO Sundar Pichai stressed the need for companies to “be responsible in each step along the way” as they build and release AI tools.

    For Google, he said, that means allowing time for “user feedback” and making sure the company “can develop more robust safety layers before we build, before we deploy more capable models.”

    He also expressed his belief that these AI tools will ultimately have broad impacts on businesses, professions and society.

    “This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology,” he said. “And so, we are just in early days.”


  • Twitter removes transgender protections from hateful conduct policy | CNN Business

    New York CNN —

    Twitter appears to have quietly rolled back a portion of its hateful conduct policy that included specific protections for transgender people.

    The policy previously stated that Twitter prohibits “targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.” But the second line was removed earlier this month, according to archived versions of the page from the Wayback Machine.

    Twitter also removed a line from the policy detailing certain groups of people often subject to disproportionate abuse online, including “women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, asexual individuals, and marginalized and historically underrepresented communities.”

    The platform first introduced its policy prohibiting misgendering and deadnaming (referring to a person’s pre-transition name) of transgender people in 2018 as part of a broader overhaul of its hateful conduct policy.

    The change to the hateful conduct policy is one of a number of updates Twitter has made to its safety and content moderation practices since Elon Musk took over the company last fall. Twitter has also restored the accounts of users who had previously been banned for violating its rules, stopped enforcing its Covid-19 misinformation policy, allowed users to purchase blue verification checkmarks and applied controversial new labels to the accounts of several news organizations.

    LGBTQ advocacy group GLAAD called out the hateful conduct policy change in a Tuesday statement.

    “Twitter’s decision to covertly roll back its longtime policy is the latest example of just how unsafe the company is for users and advertisers alike,” GLAAD President and CEO Sarah Kate Ellis said. “This decision to roll back LGBTQ safety pulls Twitter even more out of step with TikTok, Pinterest, and Meta, which all maintain similar policies to protect their transgender users at a time when anti-transgender rhetoric online is leading to real world discrimination and violence.”

    Twitter did not respond to a request for comment about the change, although the platform did announce earlier this week some other updates to how it enforces its hateful conduct policy. The platform said it plans to start applying labels to some tweets that violate its hateful conduct policy and reduce their visibility, a similar practice to the one used under the company’s previous leadership, under which it either reduced the visibility of or removed violative tweets.

    “Restricting the reach of Tweets helps reduce binary ‘leave up versus take down’ content moderation decisions and supports our freedom of speech vs freedom of reach approach,” the company said in a tweet. Twitter also said it will not place ads next to content that has been labeled as violative.

    Musk has been trying to encourage advertisers to return to the platform after many paused their spending over concerns about his policy changes, increased hate speech on the platform and massive cuts to the company’s workforce, a pullback that threatens the company’s core business.

    The billionaire tried to assuage advertisers about Twitter’s approach to hateful conduct at a marketing conference Tuesday, saying, “If somebody has something hateful to say, it doesn’t mean you should give them a megaphone,” according to a report from the Wall Street Journal.

    Musk has faced a number of criticisms from some in the transgender community, most notably from his transgender daughter Vivian Jenna Wilson. Last year, she petitioned a court in California to change her last name to that of her mother, Justine Wilson, Musk’s ex-wife and mother of five of his seven children, because she no longer wanted to be related to her father “in any way, shape or form.”

    Musk has also posted several tweets mocking the idea of people choosing their own pronouns. In one since-deleted tweet from December 2020, he wrote “when you put he/him in your bio” alongside a drawing of an 18th-century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    And this past December, Musk, a vocal critic of many Covid restrictions and protocols, tweeted, “My pronouns are Prosecute/Fauci.”

    But in other tweets, Musk has insisted he had no problems with transgender people, saying that his problem is with “all these pronouns,” which he called an “esthetic nightmare.” He also pointed out that his auto company Tesla (TSLA) has repeatedly scored a 100% rating from the Human Rights Campaign as one of the “Best Places to Work for LGBTQ+ Equality.”

    — CNN’s Chris Isidore contributed to this report


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business

    New York CNN —

    Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.


  • China imposes sales restrictions on Micron as it escalates tech battle with Washington | CNN Business

    Hong Kong CNN —

    China has banned US chip maker Micron from selling to Chinese companies working on key infrastructure projects, in a major escalation of an ongoing battle between the world’s top two economies over access to crucial technology.

    The Cyberspace Administration of China (CAC) announced the decision on Sunday, saying the US chip maker had failed to pass a cybersecurity review. The news came shortly after the close of the Group of Seven (G7) summit in Hiroshima, Japan, where leaders of major democracies spoke in one voice on their growing concerns over China.

    “The review found that Micron’s products have relatively serious cybersecurity risks, which pose significant security risks to China’s critical information infrastructure supply chain and would affect national security,” the Chinese regulator said in a statement.

    As a result, operators involved in domestic critical information infrastructure projects should stop purchasing products from Micron, it said.

    Shares of Micron Technology (MU) sank about 3% Monday. Its Asian rivals finished the day higher: shares of Chinese memory chip maker Ingenic Semiconductor jumped 2.8%, Shenzhen Techwinsemi Technology surged 6.3% and Toyou Feiji Electronics soared 14%. In Seoul, SK Hynix, one of the world’s largest memory chip makers, gained 0.9%, outperforming the South Korean market.

    The Chinese regulator’s decision came seven weeks after it kicked off a cybersecurity review of Micron’s products, in apparent retaliation against sanctions imposed by Washington and its allies on China’s chip sector.

    Micron is one of the largest memory chip makers in the United States. It derives more than 10% of its revenue from mainland China.

    The company told CNN that it had received the regulator’s notice and was assessing its next steps.

    “We look forward to continuing to engage in discussions with Chinese authorities,” it said in a statement.

    Micron’s chief financial officer, Mark Murphy, said separately on Monday that the company was unclear what security concerns Beijing had. He said the company is evaluating what portion of its sales could be impacted.

    “We are currently estimating a range of impact in the low single digits percent of our company total revenue at the low end and high single-digit percentage of total company revenue at the high end,” he said at a conference.

    The US Commerce Department said it firmly opposed the restrictions that “have no basis in fact,” according to Reuters.

    “This action, along with recent raids and targeting of other American firms, is inconsistent with [China’s] assertions that it is opening its markets and committed to a transparent regulatory framework,” it was quoted as saying.

    The US State Department similarly said it has “very serious concerns” about the ban.

    “The Department of Commerce is engaging directly with the PRC to make our view clear, and broadly, this action appears inconsistent with the PRC’s assertions that it is open for business and committed to a transparent regulatory framework,” US State Department spokesperson Matthew Miller said Monday.

    On Sunday, China’s Foreign Ministry accused G7 leaders of “hindering international peace” and said the group needed to “reflect on its behavior and change course.”

    In a landmark joint communique Saturday, G7 member countries made the group’s most detailed articulation of a shared position on China to date — stressing the need to cooperate with the world’s second-largest economy, but also to counter its “malign practices” and “coercion.”

    Since October 2022, Washington has imposed sweeping export curbs on advanced chips and chip-making equipment to China, in an attempt to cut off China’s access to critical technology for military purposes.

    In March, Japan and the Netherlands, both key US allies, also announced restrictions on overseas sales of chip-making technology to countries including China. China has strongly criticized the restrictions, labeling them “discriminatory containment” directed at the country.

    Chips are at the center of Beijing’s bid to become a tech superpower. China has its own chip manufacturers, but they supply mostly low- to mid-end processors used in home appliances and electric vehicles.

    The semiconductor battle is part of a growing divide between the United States and China. In recent years, relations between the two have reached their lowest level in decades.

    Tensions escalated this year after a suspected Chinese spy balloon was shot down by US fighter jets in February and as Beijing continued to deepen its ties with Russia despite Moscow’s ongoing invasion of Ukraine.

    However, US President Joe Biden said on Sunday that he expected ties between the two countries to improve soon.

    “I think you are gonna see that begin to thaw very shortly,” Biden told a news conference at the end of the Group of Seven summit in Japan.

    He said he had agreed with Chinese President Xi Jinping in November to keep communications open, but that everything changed after a “silly balloon that was carrying two freight cars worth of spying equipment” was shot down.

    “We are not looking to decouple from China,” he said. “We are looking to de-risk and diversify our relationship with China.”

    — CNN’s Simone McCarthy, Jennifer Hansler and Saba Haroon contributed to this report


  • AI chip boom sends Nvidia’s stock surging after whopper of a quarter | CNN Business

    New York CNN —

    The AI boom is here, and Nvidia is reaping all the benefits.

    Shares of Nvidia (NVDA) exploded 28% higher Thursday after the company reported earnings and sales that surged well above Wall Street’s already lofty expectations. That was enough to make investors temporarily forget about America’s dangerous debt ceiling standoff, sending the broader stock market higher — even after credit rating agency Fitch warned late Wednesday that America could soon lose its sterling AAA debt rating.

    Nvidia makes chips that power generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts. That’s the kind of AI underlying ChatGPT, Google’s Bard, Dall-E and many of the other new AI technologies.

    “The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, Nvidia’s CEO, in a statement. “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.”

    Huang said Nvidia is increasing supply of its entire suite of data center products to meet “surging demand” for them.

    Last quarter, Nvidia’s profit surged 26% to $2 billion, and sales rose 19% to $7.2 billion, each easily surpassing Wall Street analysts’ forecasts. Nvidia’s outlook for the current quarter was also significantly — about 50% — higher than analysts’ predictions.

    Nvidia’s stock is up nearly 110% this year.

    “There is not one better indicator around underlying AI demand going on … than the foundational Nvidia story,” said Dan Ives, analyst at Wedbush. “We view Nvidia at the core hearts and lungs of the AI revolution.”


  • Amazon is trying to make it simpler to sift through thousands of user reviews | CNN Business

    CNN —

    Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase.

    On the Amazon product page for Apple’s third-generation AirPods, for example, the AI feature now sums up the more than 4,000 user ratings to note that the wireless headphones “have received positive feedback from customers regarding their sound quality and battery life.” But, it adds, “mixed opinions were also expressed about the performance, durability, fit, comfort, and value of the headphones.”

    The summary features the disclaimer: “AI-generated from the text of customer reviews.”

    “We are significantly investing in generative AI across all of our businesses,” Amazon said in a statement to CNN on Monday, referring to the technology that underpins services such as ChatGPT.

    The effort, first reported by CNBC, marks Amazon’s latest attempt to incorporate generative AI into its services and has the potential to help customers quickly determine the pros and cons of various products. But there are limits.

    For starters, the AI wording is not always intuitive. In the AirPods review, for example, the blurb says “all customers who mentioned stability had a negative opinion about it.”

    As with other generative AI tools, which are trained on vast troves of online data to come up with responses, there are also concerns about tone, accuracy and the potential to “hallucinate” details.

    “Given that generative AI is based on probability, mistakes are possible … and summaries may not be an accurate reflection of customer reviews,” said Reece Hayden, a senior analyst at market research firm ABI Research. “The possibility of hallucinations will be a worry for customers and merchants.”

    Hayden also questions whether the tool will be able to decipher fraudulent or bot-created reviews. “These reviews will be treated equally and therefore the summary may reflect fake, non-customer reviews,” Hayden said. (Amazon didn’t immediately respond to a request for comment on this possibility.)

    Amazon isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Meanwhile, eBay recently rolled out an AI tool to help sellers generate product listing descriptions.

    Amazon CEO Andy Jassy said in a letter to shareholders in April that the company remains focused on “investing heavily” in the technology “across all of our consumer, seller, brand, and creator experiences.” The company is also reportedly working on adding ChatGPT-like search capabilities for its e-commerce store, and it’s rumored to be planning to use generative AI to bring conversational language to a home robot.

    Last month, Dave Limp, senior VP of devices and services, told CNN there is great interest in bringing generative AI to virtual assistant Alexa, so users can interact with the technology in a more fluid, natural way.


  • Hackers threaten to leak stolen Reddit data if company doesn’t pay $4.5 million and change controversial pricing policy | CNN Business




    CNN
     — 

    Reddit’s month may be going from bad to worse.

    Hackers from the BlackCat ransomware gang, also known as ALPHV, are threatening to leak 80 gigabytes of confidential data from Reddit that they claim to have stolen during a February breach, according to a post from the group on the dark web, which was reviewed by CNN and an independent cybersecurity expert.

    In their post, the hackers claim they first demanded a $4.5 million payout “for the deletion of the data and our silence” in April. After receiving no response, the group said it followed up on Friday with an additional demand: Reddit should withdraw a controversial new pricing policy that has sparked a protest from some of the platform’s most influential users.

    Reddit CTO Chris Slowe previously posted about a security incident that took place in early February. In the post, Slowe said the company’s “systems were hacked as a result of a sophisticated and highly-targeted phishing attack,” with hackers accessing “some internal documents, code, and some internal business systems.” Only employee data was accessed, according to the post.

    A Reddit spokesperson confirmed to CNN on Monday that BlackCat’s post relates to the February incident. The spokesperson reiterated that no user data was accessed, but declined to comment beyond that.

    More than 6,000 Reddit forums went dark last Monday in what was supposed to be a two-day protest over the company’s plan to begin charging steep fees for some third-party apps to access its platform. A week later, more than 3,500 Reddit forums remain dark.

    While the ransom note appears to support the protestors’ cause, some experts are skeptical of BlackCat’s actual motives.

    “I suspect that ALPHV doesn’t actually care about the API pricing. They simply want future victims to see how much ongoing harm they can cause to increase the likelihood of them deciding that payment is the least painful option,” said Brett Callow, threat analyst at cybersecurity firm Emsisoft, who reviewed the post on the dark web.

    BlackCat, for its part, said it does not expect Reddit to meet its demands.

    “We are very confident that Reddit will not pay for its data,” the group wrote in the post on the dark web. “We expect to leak the data.”


  • Meta releases clues on how AI is used on Facebook and Instagram | CNN Business



    Washington
    CNN
     — 

    As demand for greater transparency in artificial intelligence mounts, Meta released tools and information Thursday aimed at helping users understand how AI influences what they see on its apps.

    The social media giant introduced nearly two dozen explainers focused on various features of its platforms, such as Instagram Stories and Facebook’s news feed. These describe how Meta selects what content to recommend to users.

    The descriptions and disclosures come in the face of looming legislation around the world that may soon impose concrete disclosure requirements on companies that use AI technology.

    Meta’s so-called “system cards” cover how the company determines which accounts to present to users as recommended follows on Facebook and Instagram, how the company’s search tools function and how notifications work.

    For example, the system card devoted to Instagram’s search function describes how the app gathers all relevant search results in response to a user’s query, scores each result based on the user’s past interactions with the app and then applies “additional filters” and “integrity processes” to narrow the list before finally presenting it to the user.
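The gather-score-filter pipeline the system card describes can be illustrated with a toy sketch. The data, scoring weights, and field names below are hypothetical, invented for illustration; they are not drawn from Meta's actual system.

```python
from dataclasses import dataclass

@dataclass
class Result:
    name: str
    relevance: float          # how well the result matches the query (hypothetical)
    past_interaction: float   # the user's history with this account (hypothetical)
    passes_integrity: bool    # e.g. the account is not flagged

def search(gathered):
    # Score each gathered result, partly on the user's past interactions,
    # then apply "additional filters" before presenting the ranked list.
    scored = sorted(gathered,
                    key=lambda r: r.relevance + r.past_interaction,
                    reverse=True)
    return [r for r in scored if r.passes_integrity]

results = [
    Result("travel_blog", relevance=0.9, past_interaction=0.1, passes_integrity=True),
    Result("friend_account", relevance=0.6, past_interaction=0.8, passes_integrity=True),
    Result("spam_account", relevance=0.95, past_interaction=0.0, passes_integrity=False),
]
print([r.name for r in search(results)])  # ['friend_account', 'travel_blog']
```

The point of the sketch is only the shape of the pipeline: a high-relevance result can still be outranked by an account the user interacts with often, and a flagged account is removed regardless of its score.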

    Meta’s president of global affairs, Nick Clegg, tied the company’s new disclosures to a global debate about the potential dangers of artificial intelligence that range from the spread of misinformation to a rise in AI-enabled fraud and scams.

    “With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks,” Clegg wrote in a blog post Thursday. “We believe that the best way to respond to those concerns is with openness.”

    A longer blog post describing how Facebook content ranking works, meanwhile, identifies detailed factors that go into determining what information the platform presents first.

    Those factors include whether a post has been flagged by a third-party fact checker, how engaging the account that posted the material may be, and whether you may have interacted with the account in the past.

    Meta’s new explainers coincide with the release of new tools for users to tailor the company’s algorithms, including the ability to tell Instagram to supply more of a certain type of content. Previously, Meta had only offered the ability for users to tell Instagram to show less, not more, Clegg wrote.

    On both Facebook and Instagram, he added, users will now be able to customize their feeds further by accessing a menu from individual posts.

    Finally, he said, Meta will be making it easier for researchers to study its platforms by providing a content library and an application programming interface (API) featuring a variety of content from Facebook and Instagram.

    Meta’s announcement comes as European lawmakers have swiftly advanced legislation that would create new requirements for explanation and transparency for companies that use artificial intelligence, and as US lawmakers have said they hope to begin working on similar legislation later this year.


  • Foxconn pulls out of $19 billion chipmaking project in India | CNN Business



    Hong Kong
    CNN
     — 

    Foxconn says it is exiting an ambitious project to help build one of India’s first chip factories.

    The world’s largest contract electronics maker will “no longer move forward” with its $19.4 billion joint venture with Vedanta (VEDL), an Indian metals and energy conglomerate, in Asia’s third largest economy, it said Monday.

    The news was seen as a blow to the Indian government’s plans to turn the country into a tech manufacturing powerhouse, even as officials have sought to counter that view.

    In a statement to CNN, Foxconn, a Taiwanese tech giant best known for being one of Apple (AAPL)’s top suppliers, said the decision was based on “mutual agreement” and allowed the company “to explore more diverse development opportunities.”

    The joint venture will now be wholly owned by Vedanta.

    In a followup statement Tuesday, Foxconn reaffirmed its commitment to invest in Indian chipmaking, saying it will apply for a government program that subsidizes the cost of setting up semiconductor or electronic display production facilities in the country.

    “Building fabs from scratch in a new geography is a challenge, but Foxconn is committed to invest in India,” the company said, referring to fabrication plants, the technical term for semiconductor factories.

    “There was recognition from both sides that the project was not moving fast enough, there were challenging gaps we were not able to smoothly overcome, as well as external issues unrelated to the project,” it said.

    Since announcing the deal in February 2022, Foxconn said it had worked with Vedanta on plans to set up a semiconductor plant in the country that would support a wider ecosystem for manufacturers.

    It did not provide an investment figure for the facility, but Indian Prime Minister Narendra Modi tweeted in September that the total investment would amount to 1.54 trillion rupees, which was then equivalent to $19.4 billion.

    Foxconn said last year it was actively scouting for locations for the plant and held discussions with “a few state governments.”

    Foxconn CEO Young Liu has in recent months courted Indian partners, having traveled there in February to seek new collaborators.

    The company, which already has factories in the Indian states of Andhra Pradesh and Tamil Nadu, is one of many global tech firms looking for opportunities in the country, particularly as multinationals seek to diversify their supply chains beyond China.

    On Monday, India’s electronics and information technology minister Ashwini Vaishnaw told Indian news outlet and CNN affiliate News18 that both Vedanta and Foxconn are “completely committed to India’s semiconductor mission.”

    Rajeev Chandrasekhar, the country’s minister of state for electronics and IT, also tweeted that the news “changes nothing about” India’s semiconductor manufacturing goals, adding that the decision would still allow “both companies to independently pursue their strategies” in India.

    The project had been hailed as a milestone in India’s campaign to attract more investment in manufacturing, a sector sorely needed to help ease unemployment.

    Prime Minister Modi had framed the project as a significant boost for the economy and jobs.

    Foxconn shares rose 1.3% in Taipei on Tuesday following its announcement, while Vedanta’s shares fell 1.4% in Mumbai. The latter has not responded to a request for comment.

    Other prominent tech companies have moved to expand production in India recently.

    Last month, US chipmaker Micron (MICR) announced a new factory in the western state of Gujarat, calling it the country’s first semiconductor assembly and test manufacturing facility.

    The venture will see Micron invest up to $825 million, and create “up to 5,000 new direct Micron jobs and 15,000 community jobs over the next several years,” according to the company.


  • Microsoft unveils more secure AI-powered Bing Chat for businesses to ensure ‘data doesn’t leak’ | CNN Business




    CNN
     — 

    Microsoft on Tuesday announced a more secure version of its AI-powered Bing specifically for businesses and designed to assure professionals they can safely share potentially sensitive information with a chatbot.

    With Bing Chat Enterprise, the user’s chat data will not be saved, sent to Microsoft’s servers or used to train the AI models, according to the company.

    “What this [update] means is your data doesn’t leak outside the organization,” Yusuf Mehdi, Microsoft’s vice president and consumer chief marketing officer, told CNN in an interview. “We don’t co-mingle your data with web data, and we don’t save it without your permission. So no data gets saved on the servers, and we don’t use any of your data chats to train the AI models.”

    Since ChatGPT launched late last year, a new crop of powerful AI tools has offered the promise of making workers more productive. But in recent months, some businesses such as JPMorgan Chase banned the use of ChatGPT among its employees, citing security and privacy concerns. Other large companies have reportedly taken similar steps over concerns around sharing confidential information with AI chatbots.

    In April, regulators in Italy issued a temporary ban on ChatGPT in the country after OpenAI disclosed a bug that allowed some users to see the subject lines from other users’ chat histories. The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post at the time.

    Like other tech companies, Microsoft is racing to develop and deploy a range of AI-powered tools for consumers and professionals amid widespread investor enthusiasm for the new technology. Microsoft also said Tuesday that it will add visual searches to its existing AI-powered Bing Chat tool. And the company said the Microsoft 365 Co-pilot, its previously announced AI-powered tool that helps edit, summarize, create and compare documents across its various products, will cost $30 a month for each user.

    Bing Chat Enterprise will be free for all of its 160 million Microsoft 365 subscribers starting on Tuesday, if a company’s IT department manually turns on the tool. After 30 days, however, Microsoft will roll out access to all users by default; subscribed businesses can disable the tool if they so choose.

    Current conversational AI tools such as the consumer version of Bing Chat send data from personal chats to their servers to train and improve their AI models.

    Microsoft’s new enterprise option is identical to the consumer version of Bing but it will not recall conversations with users, so they’ll need to go back and start from scratch each time. (Bing recently started to enable saved chats on its consumer chat model.)

    With these changes, Microsoft, which uses OpenAI’s technology to power its Bing chat tool, said workers can have “complete confidence” their data “won’t be leaked outside of the organization.”

    To access the tool, a user will sign into the Bing browser with their work credentials and the system will automatically detect the account and put it into a protected mode, according to Microsoft. A message above the “ask me anything” bar reads: “Your personal and company data are protected in this chat.”

    In a demo video shown to CNN ahead of its launch, Microsoft showed how a user could type confidential details into Bing Chat Enterprise, such as someone sharing financial information as part of preparing a bid to buy a building. With the new tool, the user could ask Bing Chat to create a table to compare the property to other neighboring buildings and write an analysis that highlights the strengths and weaknesses of their bid relative to other local bids.

    In addition to trying to ease privacy and security concerns around AI in the workplace, Mehdi also addressed the problem of factual errors. To reduce the possibility of inaccuracies or “hallucinations,” as some in the industry call them, he suggested users write clearer, more specific prompts and check the included citations.


  • Google, Microsoft, OpenAI and Anthropic announce industry group to promote safe AI development | CNN Business




    CNN
     — 

    Some of the world’s top artificial intelligence companies are launching a new industry body to work together — and with policymakers and researchers — on ways to regulate the development of bleeding-edge AI.

    The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society.

    Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry.

    News of the forum comes after the four AI firms, along with several others including Amazon and Meta, pledged to the Biden administration to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.

    The industry-led forum, which is open to other companies designing the most advanced AI models, plans to make its technical evaluations and benchmarks available through a publicly accessible library, the companies said in a joint statement.

    “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

    The announcement comes a day after AI experts such as Anthropic CEO Dario Amodei and AI pioneer Yoshua Bengio warned lawmakers of potentially serious, even “catastrophic” societal risks stemming from unrestrained AI development.

    “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.

    Within two to three years, Amodei said, AI could become powerful enough to help malicious actors build functional biological weapons, where today those actors may lack the specialized knowledge needed to complete the process.

    The best way to prevent major harms, Bengio told a Senate panel, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world.

    The European Union is moving toward legislation that could be finalized as early as this year that would ban the use of AI for predictive policing and limit its use in lower-risk scenarios.

    US lawmakers are much further behind. While a number of AI-related bills have already been introduced in Congress, much of the driving force for a comprehensive AI bill rests with Senate Majority Leader Chuck Schumer, who has prioritized getting members up to speed on the basics of the industry through a series of briefings this summer.

    Starting in September, Schumer has said, the Senate will hold a series of nine additional panels for members to learn about how AI could affect jobs, national security and intellectual property.


  • TikTok banned from school-owned devices at all Florida state universities | CNN Business



    New York
    CNN
     — 

    The State University System of Florida Board of Governors has banned the social media app TikTok, along with some other software, applications, and developers, from use on university-owned devices “due to the continued and increasing landscape of cyber threats.”

    In a memo sent to state university system presidents on Wednesday, Chancellor Ray Rodrigues said, “This regulation requires institutions to remove technologies published in the State University System (SUS) Prohibited Technologies List from any university-owned device and to block network traffic associated with these technologies.”

    The ban is effective immediately, the memo said.

    “Data privacy, particularly concerning student data and faculty research, is a critical priority for the State University System of Florida,” the Board of Governors said in a statement to CNN.

    “Therefore, at a March 29 meeting of the Florida Board of Governors, the Board unanimously approved an emergency regulation prohibiting the use of TikTok and other foreign actors identified as an immediate national security risk, across our 12 public university campuses,” according to the Board of Governors.

    In addition to TikTok, the prohibited technologies include Kaspersky, VKontakte, Tencent QQ, WeChat and any subsidiary or affiliate.

    CNN reached out to the companies for comment.

    TikTok spokesperson Hilary McQuaide said, “TikTok has taken unprecedented actions to address national security concerns by securing U.S. user data on U.S. soil. The best way to address concerns about national security is with the transparent, U.S.-based protection of U.S. user data and systems, with robust third-party monitoring, vetting, and verification, which we are already implementing.”

    McQuaide added, “TikTok is enjoyed by more than 150 million Americans including university and college students and teachers to engage in the classroom.”

    Bans and regulations of TikTok in particular, and of social media sites in general, have been increasing in the US and Europe as concerns over privacy, national security and child safety mount.

    Late last month, the governor of Utah signed a bill which requires teens to get parental approval to use social media. Earlier this week, the United Kingdom’s Information Commissioner’s Office, which regulates data, fined TikTok for a number of breaches of data protection law. Italy is investigating TikTok for “dangerous content.”


  • FBI warns consumers not to use public phone charging stations | CNN Business



    New York
    CNN
     — 

    The FBI is warning consumers against using public phone charging stations in order to avoid exposing their devices to malicious software.

    Public USB stations like the kind found at malls and airports are being used by bad actors to spread malware and monitoring software, according to a tweet last week from the FBI’s Denver branch. The agency did not provide any specific examples.

    “Carry your own charger and USB cord and use an electrical outlet instead,” the agency advised in the tweet.

    While public charging stations are attractive to many when devices are running critically low on battery, security experts have for years raised concerns about the risk. In 2011, researchers coined the term “juice jacking” to describe the problem.

    “Just by plugging your phone into a [compromised] power strip or charger, your device is now infected, and that compromises all your data,” Drew Paik, formerly of security firm Authentic8, explained to CNN in 2017.

    The cord you use to charge your phone is also used to send data from your phone to other devices. For instance, when you plug your iPhone into your Mac with the charging cord, you can download photos from your phone to your computer.

    If a port is compromised, there’s no limit to what information a hacker could take, Paik previously explained to CNN. That includes your email, text messages, photos and contacts.

    “The FBI regularly provides reminders and public service announcements in conjunction with our partners,” Vikki Migoya, public affairs officer at the FBI’s Denver branch, told CNN. “This was a general reminder for the American public to stay safe and diligent, especially while traveling.”

    The Federal Communications Commission also updated a blog post on Tuesday warning that a corrupted charging port can allow a malicious actor to lock a device or extract personal data and passwords.

    “In some cases, criminals may have intentionally left cables plugged in at charging stations,” according to the FCC blog post. “There have even been reports of infected cables being given away as promotional gifts.”


  • Universal Music Group calls AI music a ‘fraud,’ wants it banned from streaming platforms. Experts say it’s not that easy | CNN Business



    New York
    CNN
     — 

    Universal Music Group — the music company representing superstars including Sting, The Weeknd, Nicki Minaj and Ariana Grande — has a new Goliath to contend with: artificial intelligence.

    The music group sent urgent letters in April to streaming platforms, including Spotify (SPOT) and Apple Music, asking them to block artificial intelligence platforms from training on the melodies and lyrics of their copyrighted songs.

    The company has “a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators,” a spokesperson from Universal Music Group, or UMG, told CNN. “We expect our platform partners will want to prevent their services from being used in ways that harm artists.”

    The move by UMG, first reported by the Financial Times, aims to stop artificial intelligence from creating an existential threat to the industry.

    Artificial intelligence, and specifically AI music, learns by either training on existing works on the internet or through a library of music given to the AI by humans.

    UMG says it is not against the technology itself, but rather AI that is so advanced it can recreate melodies and even musicians’ voices in seconds. That could possibly threaten UMG’s deep library of music and artists that generate billions of dollars in revenue.

    “UMG’s success has been, in part, due to embracing new technology and putting it to work for our artists — as we have been doing with our own innovation around AI for some time already,” UMG said in a statement Monday. “However, the training of generative AI using our artists’ music … begs the question as to which side of history all stakeholders in the music ecosystem want to be on.”

    The company said AI that uses artists’ music violates UMG’s agreements and copyright law. UMG has been sending requests to streamers asking them to take down AI-generated songs.

    “I understand the intent behind the move, but I’m not sure how effective this will be as AI services will likely still be able to access the copyrighted material one way or another,” said Karl Fowlkes, an entertainment and business attorney at The Fowlkes Firm.

    No regulations exist that dictate what data AI can and cannot train on. But last month, in response to individuals seeking copyright for AI-generated works, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI.

    “In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form,’” the new guidance says.

    The copyright will be determined on a case-by-case basis, the guidance continued, based on how the AI tool operates and how it was used to create the final piece or work.

    The US Copyright Office announced it will also be seeking public input on how the law should apply to the copyrighted works AI trains on, and how the office should treat those works.

    “AI companies using copyrighted works to train their models to create similar works is exactly the type of behavior the copyright office and courts should explicitly ban. Original art is meant to be protected by law, not works created by machines that used the original art to create new work,” said Fowlkes.

    But according to AI experts, it’s not that simple.

    “You can flag your site not to be searched. But that’s a request — you can’t prevent it. You can just request that someone not do it,” said Shelly Palmer, Professor of Advanced Media at Syracuse University.

    For example, a website can apply a robots.txt file that works like a guardrail to control which URLs search engine crawlers can access on a given site, according to Google. But it is not a full-stop, keep-out option.
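The "request, not prevention" point can be made concrete with a minimal Python sketch using the standard library's urllib.robotparser. The robots.txt contents and URLs below are hypothetical: a compliant crawler consults the file before fetching, but nothing technically stops a non-compliant one.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: it *asks* all crawlers to stay out of /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks the rules before each fetch...
print(parser.can_fetch("*", "https://example.com/private/data.html"))  # False
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True

# ...but a scraper that skips this check can still request the pages.
# robots.txt is a published convention, not an access control.
```

This is why flagging a site "not to be searched" amounts to a request: enforcement depends entirely on the crawler choosing to honor the file.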

    Grammy-winning DJ and producer David Guetta proved in February just how easy it is to create new music using AI. Using ChatGPT for lyrics and Uberduck for vocals, Guetta was able to create a new song in an hour.

    The result was a rap with a voice that sounded exactly like Eminem. He played the song at one of his shows in February, but said he would never release it commercially.

    “What I think is very interesting about AI is that it’s raising a question of what is it to be an artist,” Guetta told CNN last month.

    Guetta believes AI is going to have a significant impact on the music industry, so he’s embracing it instead of fighting it. But he admits there are still questions about copyright.

    “That is an ethical problem that needs to be addressed because it sounds crazy to me that today I can type lyrics and it’s going to sound like Drake is rapping it, or Eminem,” he said.

    And that is exactly what UMG wants to avoid. The music group likens AI music to “deep fakes, fraud, and denying artists their due compensation.”

    “These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” the UMG statement said.

    Music streamers Spotify, Apple Music and Pandora did not return requests for comment.


  • ‘Too good to be true?’ As Shein and Temu take off, so does the scrutiny | CNN Business



    Hong Kong/New York
    CNN
     — 

    Temu and Shein are taking off in the United States, topping app stores and creating a frenzy with consumers.

    But as the two online shopping platforms become hugely popular, they’re also facing questions over a litany of issues, including how they’re able to sell goods at such strikingly low prices, how transparent they are with the public and how much environmental waste their businesses generate.

    Some of those questions aren’t unique to the two companies: Longtime fast-fashion producers like Zara or H&M (HNNMY) have faced similar concerns.

    But in recent weeks, Temu and Shein have also faced greater scrutiny over their ties to China, the country where their businesses originated and where they continue to rely on manufacturers.

    Shein was started in China, while Temu was launched by a Chinese company that now bills itself as a multinational firm. They are based in Singapore and Boston, respectively.

    That may matter little to policymakers. As US-China tensions remain high, American legislators have increased attempts to restrict technology linked in any way to foreign entities.

    Earlier this month, a US congressional commission called out Shein and Temu in a report that suggested the companies and others in China were potentially linked to the use of forced labor, exploitation of trade loopholes, product safety hazards or intellectual property theft.

    Both firms have enjoyed major success in the United States, noted Nicholas Kaufman, a policy analyst for the US-China Economic and Security Review Commission. This “has encouraged both established Chinese e-commerce platforms and startups to copy their model, posing risks and challenges to US regulations, laws, and principles of market access,” he wrote.

    Temu and Shein have racked up tens of millions of US users: Shein counts 24.5 million monthly active US users and Temu 22.8 million as of April 19, according to Sensor Tower, a market intelligence firm.

“Like Shein, Temu’s success raises flags about its business practices,” Kaufman added.

Asked about the report, Shein said in a statement that it “takes visibility across our supply chain seriously.”

“For over a decade, we have been providing customers with on-demand and affordable fashion, beauty, and lifestyle products, lawfully and with full respect for the communities we serve,” a spokesperson said.

Temu did not respond to a request for comment.

Temu and Shein have taken the world’s largest retail market — the United States — by storm.

Temu, which runs a marketplace for virtually everything from home goods to apparel to electronics, was launched by PDD Holdings (PDD) last year. It has quickly become the most downloaded app in the United States, and continues to expand its user base.

PDD was founded in China but recently began billing itself as a Cayman Islands company, citing a new corporate registration there. As of a February regulatory filing, PDD’s head office was in Shanghai. Temu says it doesn’t operate in China.

PDD also owns Pinduoduo, a hugely popular Chinese e-commerce giant that was found in a recent CNN investigation to have the ability to spy on its users.

According to cybersecurity researchers, Pinduoduo can circumvent users’ mobile security to see what they’re doing on other apps, read their messages and even change settings.

While Temu has not been implicated, the allegations about its sister company have invited further scrutiny and were cited in this month’s congressional report on Temu. PDD did not respond to CNN’s multiple requests for comment on the investigation.

Shein, which was founded by Chinese entrepreneur Chris Xu, has enjoyed similar success with its app over the last few years. The company initially created a cult following for its fast-fashion apparel and has since branched out into other offerings, such as home goods.

Both companies have gained traction stateside by offering extreme bargains to shoppers, many of whom continue to feel the squeeze from historically high inflation.

A shopper at a Shein pop-up store in New York last October. The company initially created a cult following for its fast-fashion apparel, and has since branched out into other offerings.

“The timing is very advantageous,” said Michael Felice, an associate partner in Kearney’s communications, media and technology practice. “You have extreme pressure on the consumer wallet right now.”

While Temu and Shein may appear similar, they have different business models.

Temu operates as an online store, carrying merchandise from independent sellers. Shein, on the other hand, commissions its own goods through manufacturers it teams up with in what is effectively seen as a supersonic version of fast fashion.

For some consumers, the companies’ low prices have raised eyebrows.

“I think transparency and traceability of product is becoming more important,” said Felice. “When you’re starting to see price points that almost could be too good to be true, you start to ask yourself, ‘Is that too good to be true?’”

Felice also said there was a risk of Temu facing resistance from US consumers as a cross-border business.

“There’s a rising sense of nationalism in markets,” he said. “It will be interesting to see which one wins as the dual pressures of inflation and nationalism take hold on American consumers.”

Lawmakers are also getting more hawkish. While both Temu and Shein have taken steps to separate their businesses from links to China, geopolitical tensions are proving hard to shake off.

Last month, a bipartisan group of US senators introduced legislation that would give the government new powers, including a ban on foreign-linked producers of software.

In a fact sheet distributed by lawmakers, Temu’s surge on US app stores was described as an example of how Chinese consumer technology was becoming more popular.

A screenshot from Temu's commercial unveiled during the Super Bowl in February.

“From the history of the companies to where their products come from, it’s very hard to say you’re not related to China,” said Sheng Lu, an associate professor of fashion and apparel studies at the University of Delaware.

Lu believes that Temu and Shein, like TikTok, which faces the prospect of a US ban, could face data privacy scrutiny from regulators.

“They’re large, influential and collect data,” he said. “This can make the companies a potential sensitive topic.”

The fashion industry is responsible for 10% of annual global carbon emissions, more than all international flights and maritime shipping combined, according to the United Nations Environment Programme. Around 85% of clothing ends up in landfills or is burned.

Experts say the problem is even worse with fast fashion, defined as the rapid design and production of cheap and low-quality goods that respond to fleeting trends.

These are “disposable fashion companies,” said Maxine Bédat, founder of the New Standard Institute.

“That’s the crux of what they are. This stuff is not meant to last in your wardrobe,” she added. “Their business wouldn’t function if it did.”

Shein argues that its business model enables it to reduce waste and overproduction by producing small batches and only responding with larger production if demand is shown. The company has set a goal of reducing emissions by 25% by 2030, based on 2021 figures.

A model trying on outfits in Temu's Super Bowl ad. The company runs a marketplace for virtually everything, from apparel to home goods to electronics.

Temu, which markets itself more as a general store than a fashion outlet, also said its model limits unsold inventory and waste by better matching demand with supply.

The company told CNN it offsets emissions for every order with “carbon credits which support wildlife conservation efforts” in the United States, though it did not provide details.

Researchers who study textile waste and sustainability in global supply chains say the companies need to go further.

Shein, for example, often uses low-cost fabrics that are hard to recycle. Compared with other fashion retailers, the company has a much lower percentage of products that mention using sustainable or recycled textile materials, said Lu.

There are also concerns about the conditions of workers who make some of the companies’ products.

In February, a bipartisan group of US senators wrote to Shein, pressing the company on its supply chain practices and calling for greater transparency.

“We are concerned that American consumers may be inadvertently purchasing apparel made in part with cotton grown, picked, and processed using forced labor,” the senators said.

The inquiry was made following a Bloomberg report showing lab testing on two occasions last year found that garments shipped to the United States by Shein were made with cotton from Xinjiang. Washington has banned all imports from the Chinese region over concerns of forced labor.

In a statement to CNN, Shein said it was committed to respecting human rights and adhering to laws and regulations in the countries where it operates. A spokesperson said the company had zero tolerance for forced labor, and worked with third parties to audit supplier factories.

To ensure compliance with US laws, Shein requires that suppliers purchase cotton from approved countries, and has built tracing systems to get visibility into the origins of cotton it uses, the spokesperson added.

Temu has not faced such questions, though its sister company received backlash in 2021 over allegations that it overworks its staff. Pinduoduo said at the time that it would provide counseling following the suicide of a worker.

Worker rights at Shein also made headlines in December, when a documentary by UK broadcaster Channel 4 alleged exploitation at two Chinese factories belonging to its suppliers.

The program claimed staff were working 18 hours a day, making the equivalent of pennies on each item. CNN has not independently verified the allegations.

Shein responded to the claims, saying independent audits had refuted most of the allegations. But it conceded that the investigation had shown workers at two of its suppliers were working longer hours than allowed.

The company has since reduced the size of its orders from those producers on an interim basis, and committed $15 million to upgrade hundreds of its partner factories.

Still, the “working conditions of workers making Shein’s products remain a black box,” said Lu, the University of Delaware professor.

“Shein should be more transparent about their factory conditions and workers’ well-being.”


  • Jack Dorsey no longer thinks Elon Musk is the right person to run Twitter | CNN Business



    Washington (CNN) —

    Former Twitter CEO Jack Dorsey backtracked Saturday on his earlier endorsement of Elon Musk as the right choice to lead the company, speaking out against the billionaire who, for the past six months, has led Twitter through a series of largely self-inflicted crises.

    Asked on Bluesky, Dorsey’s new Twitter-like social media venture, whether he believed Musk has been the best possible steward of Twitter, Dorsey said flatly: “No.”

    Dorsey added that Musk “should have walked away” from acquiring Twitter for $44 billion, and faulted Twitter’s board in hindsight for trying to compel Musk to follow through with the deal despite Musk’s attempts to back out of the purchase last year.

    “It all went south,” Dorsey said. “But it happened and all we can do now is build something to avoid that ever happening again.”

    Twitter, which has cut much of its public relations team under Musk, didn’t immediately respond to a request for comment.

    Under Musk, Twitter has slashed most of its staff, suffered frequent service disruptions and made a number of controversial changes to its policies and features, including a recent decision to remove blue checks from VIP users who don’t pay to be verified.

    Dorsey’s reflections, outlined in Bluesky posts reviewed by CNN, highlight the Twitter founder’s growing disillusionment with Musk. They also come after numerous exchanges in recent months where Dorsey has publicly questioned some of Musk’s decision-making.

    A year ago, Dorsey was quick to heap praise on Musk. When Musk’s deal to purchase Twitter was first announced, Dorsey said that so long as Twitter had to be owned by a single person or company, “Elon is the singular solution I trust.”

    “I trust his mission to extend the light of consciousness,” Dorsey proclaimed at the time.

    Dorsey also rolled over his more than 18 million shares in Twitter (a roughly 2.4% stake) into the new Musk-owned company as an equity investor, rather than receiving a cash payout, according to a securities filing after the deal was completed.

    Now, though, Dorsey appears to believe Musk was an imperfect choice. Confronted by criticism from other Bluesky users that Twitter could have gone in a different direction, Dorsey argued that there was nothing stopping someone else from outbidding Musk.

    “If Elon or anyone wanted to buy the company, all they had to do was name a price that the board felt was better than what the company could do independently,” he said. “This is true for every public company.”

    Asked whether he felt any responsibility for the role he played in the transaction, Dorsey, who served on Twitter’s board at the time, said he was not the only person who authorized the deal and that Twitter’s “only alternative” to Musk was an acquisition by “hedge funds and Wall Street activists.”

    “The company would have never survived as a public company,” Dorsey claimed, adding: “I wish it were different,” but that some of Twitter’s revenue initiatives prior to Musk’s takeover “would not have mattered given market turn.”


  • TV and film writers are fighting to save their jobs from AI. They won’t be the last | CNN Business




    (CNN) —

    By any standard, John August is a successful screenwriter. He’s written such films as “Big Fish,” “Charlie’s Angels” and “Go.” But even he is concerned about the impact AI could have on his work.

    A powerful new crop of AI tools, trained on vast troves of data online, can now generate essays, song lyrics and other written work in response to user prompts. While there are clear limits to how well AI tools can produce compelling creative stories, these tools are only getting more advanced, putting writers like August on guard.

    “Screenwriters are concerned about our scripts being the feeder material that is going into these systems to generate other scripts, treatments, and write story ideas,” August, a Writers Guild of America (WGA) committee member, told CNN. “The work that we do can’t be replaced by these systems.”

    August is one of the more than 11,000 members of the WGA who went on strike Tuesday morning, bringing an immediate halt to the production of some television shows and possibly delaying the start of new seasons of others later this year.

    The WGA is demanding a host of changes from the Alliance of Motion Picture and Television Producers (AMPTP), from an increase in pay to clear guidelines around working with streaming services. But as part of its demands, the union is also fighting to protect writers’ livelihoods from AI.

    In a proposal published on WGA’s website this week, the labor union said AI should be regulated so it “can’t write or rewrite literary material, can’t be used as source material” and that writers’ work “can’t be used to train AI.”

    August said the AI demand “was one of the last things” added to the WGA list, but that it’s “clearly an issue writers are concerned about” and need to address now rather than when their contract is up again in three years. By then, he said, “it may be too late.”

    WGA said the proposal was rejected by AMPTP, which countered by offering annual meetings to discuss advancements in the technology. August said AMPTP’s response shows they want to keep their options open.

    In a document sent to CNN responding to some of WGA’s asks, AMPTP said it values the work of creatives and “the best stories are original, insightful and often come from people’s own experiences.”

    “AI raises hard, important creative and legal questions for everyone,” it wrote. “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted. So it’s something that requires a lot more discussion, which we’ve committed to doing.”

    It added that the current WGA agreement defines a “writer” as a “person,” and said “AI-generated material would not be eligible for writing credit.”

    The writers’ attempt at bargaining over AI is perhaps the most high-profile labor battle yet to address concerns about the cutting-edge technology that has captivated the world’s attention in the six months since the public release of ChatGPT.

    Goldman Sachs economists estimate that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. White-collar workers, including those in administrative and legal roles, are expected to be the most affected. And the impact may hit sooner than some think: IBM’s CEO recently suggested AI could eliminate the need for thousands of jobs at his company alone in the next five years.

    David Gunkel, a professor at the department of communications at Northern Illinois University who tracks AI in media and entertainment, said screenwriters want clear guidelines around AI because “they can see the writing on the wall.”

    “AI is already displacing human labor in many other areas of content creation—copywriting, journalism, SEO writing, and so on,” he said. “The WGA is simply trying to get out-in-front of and to protect their members against … ‘technological unemployment.’”

    While film and TV writers in Hollywood may currently be leading the charge, professionals in other industries will almost certainly be paying attention.

    “There’s certainly other industries that need to be paying close attention to this space,” said Rowan Curran, an analyst at Forrester Research who focuses on AI. He noted that digital artists, musicians, engineers, real estate professionals and customer service workers will all feel the impact of generative AI.

    “Watch this #WGA strike carefully,” Justine Bateman, a writer, director and former actress, wrote in a tweet shortly after the strike kicked off. “Understand that our fight is the same fight that is coming to your professional sector next: it’s the devaluing of human effort, skill, and talent in favor of automation and profits.”

    AI has had a place in Hollywood for years. In the 2018 Marvel film “Avengers: Infinity War,” the face of Thanos – a character played by actor Josh Brolin – was created in part with the technology.

    Crowd and battle scenes in films including the “Lord of the Rings” trilogy and “The Meg” have utilized AI, and the most recent “Indiana Jones” film used it to make Harrison Ford’s character appear younger. It’s also been used for color correction, finding footage more quickly during post-production and making improvements such as removing scratches and dust from footage.

    But AI in screenwriting is in its infancy. In March, a “South Park” episode called “Deep Learning” was co-written with ChatGPT, and the tool featured prominently in the plot (the characters use ChatGPT to talk to girls and write school papers).

    August said writers are largely willing to play ball with tools, as long as they’re used as launching pads or for research and writers are still credited and utilized throughout the production process.

    “Screenwriters are not luddites, and we’ve been quick to use new technologies to help us tell our stories,” August said. “We went from typewriters to word processors happily and it increased productivity. … But we don’t need a magical typewriter that types scripts all by itself.”

    Because large language models are trained on text that humans have written before, and find patterns in words and sentences to create responses to prompts, concerns around intellectual property exist, too. “It is entirely possible for a [chatbot] to generate a script in the style of a particular kind of filmmaker or scriptwriter without prior consent of the original artist or the Hollywood studio that holds the IP for that material,” Gunkel said.

    For example, one could prompt ChatGPT to generate a zombie apocalypse drama in the style of David Mamet. “Who should get credited for that?” August said. “What happens if we allow a producer or studio executive to come up with a treatment or pitch or something that looks like a screenplay that no writer has touched?”

    For now, the legal landscape remains very much unsettled on the matter, with regulations lagging behind the rapid pace of AI development. In early April, the Biden administration said it is seeking public comments on how to hold artificial intelligence systems like ChatGPT accountable.

    “We can’t protect studios from their own bad choices,” August said. “We can only protect writers from abuses.”

    The strike, and the demands around AI specifically, come at a time when both the writers and the studios are feeling financial pain.

    Many of the businesses represented by AMPTP have seen drops in their stock price, prompting deep cost cutting, including layoffs. The need to manage costs, combined with addressing the fallout from the strike, might only make the companies feel more pressure to turn to AI for scriptwriting.

    “In the short term, this could be an effective way to circumvent the WGA strike, mainly because [large language models], which are considered property and not personnel, can be employed for this task without violating the picket line,” Gunkel said. Such an “experiment” could also show production studios whether it’s possible “to get by with less humans involved,” he said.

    But Joshua Glick, a visiting professor of film and electronic arts at Bard College, believes such a move would be ill-advised.

    “It would be a pretty aggressive and antagonistic move for studios to move forward with AI-generated scripts in terms of getting writers to come to the negotiating table because AI is such a crucial sticking point in the negotiations,” said Glick, who also co-created Deepfake: Unstable Evidence on Screen, an exhibition at the Museum of the Moving Image in New York.

    “At the same time, I think the result of those scripts would be pretty mediocre at best,” he said.

    However the studios react, the issue is unlikely to go away in Hollywood. Film and TV actors’ contracts are up in June, and many are worried about how their faces, bodies and voices will be impacted by AI, August said.

    “As writers, we don’t want tools to replace us but actors have the same concerns with AI, as do directors, editors and everyone else who does creative work in this industry,” he added.


  • The man behind ChatGPT is about to have his moment on Capitol Hill | CNN Business



    New York (CNN) —

    For a few months in 2017, there were rumors that Sam Altman was planning to run for governor of California. Instead, he kept his day job as one of Silicon Valley’s most influential investors and entrepreneurs.

    But now, Altman is about to make a different kind of political debut.

    Altman, the CEO and co-founder of OpenAI, the artificial intelligence company behind viral chatbot ChatGPT and image generator DALL-E, is set to testify before Congress on Tuesday. His appearance is part of a Senate subcommittee hearing on the risks artificial intelligence poses for society, and what safeguards are needed for the technology.

    House lawmakers on both sides of the aisle are also expected to hold a dinner with Altman on Monday night, according to multiple reports. Dozens of lawmakers are said to be planning to attend, with one Republican lawmaker describing it as part of the process for Congress to assess “the extraordinary potential and unprecedented threat that artificial intelligence presents to humanity.”

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    The hearing and meetings come as ChatGPT has sparked a new arms race over AI. A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts. This week’s hearing may only cement his stature as a central player in AI’s rapid growth – and also add to scrutiny of him and his company.

    Those who know Altman have described him as a brilliant thinker, someone who makes prescient bets and has even been called “a startup Yoda.” In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    “If anyone knows where this is going, it’s Sam,” Brian Chesky, the CEO of Airbnb, wrote in a post about Altman for the latter’s inclusion this year on Time’s list of the 100 most influential people. “But Sam also knows that he doesn’t have all the answers. He often says, ‘What do you think? Maybe I’m wrong?’ Thank God someone with so much power has so much humility.”

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    OpenAI declined to make anyone available for an interview for this story.

    The success of ChatGPT may have brought Altman greater public attention, but he has been a well-known figure in Silicon Valley for years.

    Prior to cofounding OpenAI with Musk in 2015, Altman, a Missouri native, studied computer science at Stanford University, only to drop out to launch Loopt, an app that helped users share their locations with friends and get coupons for nearby businesses.

    In 2005, Loopt was part of the first batch of companies at Y Combinator, a prestigious tech accelerator. Paul Graham, who co-founded Y Combinator, later described Altman as “a very unusual guy.”

    “Within about three minutes of meeting him, I remember thinking ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham wrote in a post in 2006.

    Loopt was acquired in 2012 for about $43 million. Two years later, Altman took over from Graham as president of Y Combinator. The position connected Altman with numerous powerful figures in the tech industry. He remained at the helm of the accelerator until 2019.

    Margaret O’Mara, a tech historian and professor at the University of Washington, told CNN that Altman “has long been admired as a thoughtful, significant guy and in the remarkably small number of powerful people who are kind of at the top of tech and have a lot of sway.”

    During the Trump administration, Altman gained new attention as a vocal critic of the president. It was against that backdrop that he was rumored to be considering a run for California governor.

    Rather than running, however, Altman looked to back candidates who aligned with his values, which include a lower cost of living, clean energy and redirecting 10% of the defense budget to research and development of future technology.

    Altman continues to push for some of these goals through his work in the private sector. He invested in Helion, a fusion research company that inked a deal with Microsoft last week to sell clean energy to the tech giant by 2028.

    Altman has also been a proponent of the idea of a universal basic income and has suggested that AI could one day help fulfill that goal by generating so much wealth it could be redistributed back to the public.

    As Graham told The New Yorker about Altman in 2016, “I think his goal is to make the whole future.”

    When launching OpenAI, Musk and Altman’s original mission was to get ahead of the fear that AI could harm people and society.

    “We discussed what is the best thing we can do to ensure the future is good?” Musk told the New York Times about a conversation with Altman and others before launching the company. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    In an interview at the launch of OpenAI, Altman explained the company as his way of trying to steer the path of AI technology. “I sleep better knowing I can have some influence now,” he said.

    If there’s one thing AI enthusiasts and critics can agree on right now, it may be that Altman clearly has succeeded in having some influence over the rapidly evolving technology.

    Less than six months after the release of ChatGPT, it has become a household name, almost synonymous with AI itself. CEOs are using it to draft emails. Realtors are using it to write listings and draft legal documents. The tool has passed exams from law and business schools – and been used to help some students cheat. And OpenAI recently released a more powerful version of the technology underpinning ChatGPT.

    Tech giants like Google and Facebook are now racing to catch up. Similar generative AI technology is quickly finding its way into productivity and search tools used by billions of people.

    A future that once seemed very far off now feels right around the corner, whether society is ready for it or not. Altman himself has professed not to be sure about how it will turn out.

    O’Mara said she believes Altman fits into “the techno-optimist school of thought that has been dominant in the Valley for a very long time,” which she describes as “the idea that we can devise technology that can indeed make the world a better place.”

    While Altman’s cautious remarks about AI may sound at odds with that way of thinking, O’Mara argues it may be an “extension” of it. In essence, she said, it’s related to “the idea that technology is transformative and can be transformative in a positive way but also has so much capacity to do so much that it actually could be dangerous.”

    And if AI should somehow help bring about the end of society as we know it, Altman may be more prepared than most to adapt.

    “I prep for survival,” he said in a 2016 profile of him in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

    “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
