ReportWire

Tag: iab-technology & computing

  • China just played a trump card in the chip war. Are more export curbs coming? | CNN Business




Hong Kong (CNN) —

    A trade war between China and the United States over the future of semiconductors is escalating.

    Beijing hit back Monday by playing a trump card: It imposed export controls on two strategic raw materials, gallium and germanium, that are critical to the global chipmaking industry.

“We see this as China’s second, and much bigger, countermeasure to the tech war, and likely a response to the potential US tightening of [its] AI chip ban,” said Jefferies analysts. Sanctioning one of America’s biggest memory chipmakers, Micron Technology (MU), in May was the first, they said.

    Here’s what you need to know about gallium and germanium, how they could play into the chip war and whether more countermeasures could be coming.

    Last October, the Biden administration unveiled a set of export controls banning Chinese companies from buying advanced chips and chip-making equipment without a license.

    Chips are vital for everything from smartphones and self-driving cars to advanced computing and weapons manufacturing. US officials have talked about the move as a measure to protect national security interests.

    But it didn’t stop there. For the curbs to be effective, Washington needed other key suppliers, located in the Netherlands and Japan, to join. They did.

    China eventually retaliated. In April, it launched a cybersecurity probe into Micron before banning the company from selling to Chinese companies working on key infrastructure projects. On Monday, Beijing announced the restrictions on gallium and germanium.

    Gallium is a soft, silvery metal and is easy to cut with a knife. It’s commonly used to produce compounds that are key materials in semiconductors and light-emitting diodes.

    Germanium is a hard, grayish-white and brittle metalloid that is used in the production of optical fibers that can transmit light and electronic data.

    The export controls have drawn comparisons with China’s reported attempts in early 2021 to restrict exports of rare earths, a group of 17 elements for which China controls more than half of the global supply.

    Gallium and germanium do not belong to this group of minerals. Like rare earths, they can be expensive to mine or produce.

    This is because they are usually formed as a byproduct of mining more common metals, primarily aluminum, zinc and copper, and processed in countries that produce them.

    China is the world’s leading producer of both gallium and germanium, according to the US Geological Survey. The country accounted for 98% of the global production of gallium, and 68% of the refinery production of germanium.

    “The economies of scale in China’s extensive and increasingly integrated mining and processing operations, along with state subsidies, have allowed it to export processed minerals at a cost that operators elsewhere can’t match, perpetuating the country’s market dominance for many critical commodities,” analysts from Eurasia Group said on Tuesday.

    Shares of Chinese producers of the two raw materials surged by 10% on Tuesday.

Beyond China, Australian rare earths producers also advanced, as investors expected Beijing might extend export curbs to that group of strategically important minerals. Lynas Rare Earths (LYSCF) rose 1.5%.

The United States is dependent on China for these two critical elements. It imported more than 50% of the gallium and germanium it used in 2021 from the country, the US Geological Survey showed.

    Eurasia Group analysts described China’s export controls as a “warning shot.”

    “It is a shot across the bow intended to remind countries including the United States, Japan, and the Netherlands that China has retaliatory options and to thereby deter them from imposing further restrictions on Chinese access to high-end chips and tools,” Eurasia Group said in a research note.

Chinese authorities may also intend to use their control over these niche metals as a bargaining chip in discussions with US Treasury Secretary Janet Yellen, who is scheduled to visit Beijing later this week.

    Jefferies analysts said the timing of the announcement was unlikely to be a casual decision.

    “It gives the US at least two days to digest and come up with a well-considered response,” they said.

    However, the move is not considered “a death blow” to the United States and its allies.

    China may be the industry leader, but there are alternative producers, as well as available substitutes for both minerals, the Eurasia Group analysts pointed out.

    The United States also imports a fifth of its gallium from the United Kingdom and Germany and buys more than 30% of its germanium from Belgium and Germany.

More countermeasures are definitely possible, a former senior Chinese official has warned.

The curbs announced this week are “just the start,” Wei Jianguo, a former deputy commerce minister, told the official China Daily on Wednesday, adding that China has more tools in its arsenal with which to retaliate.

    “If the high-tech restrictions on China become tougher in the future, China’s countermeasures will also escalate,” he was quoted as saying.

    Analysts believe this too. Rare earths, which are not difficult to find but are complicated to process, are also critical in making semiconductors, and could be the next target.

    “If this action doesn’t change the US-China dynamics, more rare earth export controls should be expected,” Jefferies analysts said.

    However, analysts from Eurasia Group warned that restricting exports is a “double-edged sword.”

    Past attempts by China to leverage its dominance in rare earths have reduced availability and raised prices. Higher prices have spurred greater competition by making mining and processing ventures outside of China more cost-competitive, they said.

    China cut its rare earths export quota in 2010 amid tensions with the United States.

    That resulted in greater efforts by companies outside of the country to produce the metals. US data showed that China’s global market share dropped from 97% in 2010 to about 60% in 2019.

    “Imposing export restrictions risks reducing market dominance,” the Eurasia Group analysts said.

CNN’s Hanna Ziady and Xiaofei Xu contributed reporting.


  • Google hit with lawsuit alleging it stole data from millions of users to train its AI tools | CNN Business





(CNN) —

    Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products.

The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in a federal court in California on Tuesday, and was brought by Clarkson Law Firm. The firm previously filed a similar suit against ChatGPT-maker OpenAI last month. (OpenAI did not respond to an earlier request for comment on that suit.)

    The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

    Halimah DeLaine Prado, Google’s general counsel, called the claims in the suit “baseless” in a statement to CNN. “We’ve been clear for years that we use data from public sources — like information published to the open web and public datasets — to train the AI models behind services like Google Translate, responsibly and in line with our AI Principles,” DeLaine Prado said.

    “American law supports using public information to create new beneficial uses, and we look forward to refuting these baseless claims,” the statement added.

    Alphabet and DeepMind did not immediately respond to a request for comment.

    The complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

    In response to an earlier Verge report on the update, the company said its policy “has long been transparent” about this practice and “this latest update simply clarifies that newer services like Bard are also included.”

    The lawsuit comes as a new crop of AI tools have gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

    In the process, however, companies are also drawing mounting legal scrutiny over copyright issues from works swept up in these data sets, as well as their apparent use of personal and possibly sensitive data from everyday users, including data from children, according to the Google lawsuit.

    “Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

    The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

    Giordano contrasted the benefits and alleged harms of how Google typically indexes online data to support its core search engine with the new allegations of it scraping data to train AI tools.

    With its search engine, he said, Google can “serve up an attributed link to your work that can actually drive somebody to purchase it or engage with it.” Data scraping to train AI tools, however, is creating “an alternative version of the work that radically alters the incentives for anybody to need to purchase the work,” Giordano added.

    While some internet users may have grown accustomed to their digital data being collected and used for search results or targeted advertising, the same may not be true for AI training. “People could not have imagined their information would be used this way,” Giordano said.

    Ryan Clarkson, a partner at the law firm, said Google needs to “create an opportunity for folks to opt out” of having their data used for training AI while still maintaining their ability to use the internet for their everyday needs.


  • Threads now has ‘tens of millions’ of daily users. But its honeymoon phase may be over | CNN Business




New York (CNN) —

Two weeks after Meta launched its Twitter competitor Threads and received an unprecedented number of user sign-ups, the frenzy around the app appears to have come back to Earth.

    After surpassing 100 million user sign-ups in less than a week, user engagement on Threads has slowed. Threads daily active users fell from 49 million on July 7, two days after its launch, to 23.6 million users last Friday, according to a report published this week by web traffic analysis firm Similarweb. The app’s average usage time also fell from 21 minutes to 6 minutes over the same timeframe.

    The slowdown hints at the challenges ahead for Meta as it looks to not only draw users away from Twitter but build a service that reaches a far larger audience. Threads is already facing some of the common issues that often plague social media platforms, including user retention, spam and some early regulatory scrutiny around its approach to content moderation. It’s also not clear yet how much Meta’s investments in building Threads will actually amount to financial returns for the company.

    “I’m very optimistic about how the Threads community is coming together,” Meta CEO Mark Zuckerberg said in a post on the platform Monday. “Early growth was off the charts, but more importantly 10s of millions of people now come back daily … The focus for the rest of the year is improving the basics and retention.”

    Meta executives acknowledged in the early days after Threads’ launch that getting users to sign up for a buzzy new app is much easier than convincing them to continue engaging there long-term. That’s likely even more true for Threads, which launched as a relatively bare-bones app in an effort to capitalize on a moment of weakness at Twitter and also tapped into Instagram’s network to ease the sign-in process.

    Threads on Tuesday rolled out its first batch of updates to the iOS version of the app, including a translation button, a tab on users’ activity feed dedicated to showing who’s followed them and the option to subscribe and receive notifications from accounts a user doesn’t follow.

    Instagram head Adam Mosseri, who is overseeing the Threads launch, has also hinted at plans to add features such as a desktop version of the app, a feed of only accounts a user follows and an edit button. “We’re clearly way out over our skis on this,” Mosseri said in a Threads post the week of the app’s launch.

    In the meantime, Threads is grappling with a common social media issue — spam. Users have complained of replies to posts filling up with spammy links and offering “giveaways” in exchange for new followers. And on Monday, Mosseri said in a Threads post that the platform was “going to have to get tighter on things like rate limits” because “spam attacks have picked up.”

    This “is going to mean more unintentionally limiting active people (false positives),” Mosseri warned. “If you get caught up [in] those protections let us know.”

    Meta declined to clarify whether Mosseri’s post refers to limits on users’ ability to post or read content, or to provide any additional details. But the comment did prompt some snark from Twitter owner Elon Musk, after backlash to Twitter’s own rate limits — restrictions on how many tweets users can read — helped propel Threads’ early growth.

    Meta shares have jumped more than 6% since the Threads launch, but some analysts who follow the company are skeptical that Threads will quickly contribute to the company’s bottom line, if at all.

    Threads could be a way for Meta to eke additional engagement time out of its massive existing user base. The app could also ultimately supplement Meta’s core advertising business, which could use a boost after facing challenges from a broad decline in the online ad market and changes to Apple’s app privacy practices.

    Meta executives have said they will likely incorporate advertising into the platform, once its user base has reached critical mass. But even if Threads continues to add users, “advertisers could be hesitant and possibly wait before allocating ad dollars to Threads because of their uncertainty about long-run user retention and engagement,” Morningstar senior equity analyst Ali Mogharabi said in a recent investor note.

    Like Twitter, Threads could also struggle to attract advertisers because the nature of a real-time news and public conversations app means the content is sometimes negative or controversial. Even before Musk took over Twitter and alienated advertisers, the platform represented a tiny piece of the ad sales market compared to Meta’s properties.

    Threads, however, likely has a leg up on Twitter because Meta is known as a company that provides clear value for advertisers, said Scott Kessler, global tech sector lead at research firm Third Bridge. If anything, he said, the risk may be that some advertisers may think twice about spending on yet another Meta platform versus diversifying their ad strategy.

    For now, analysts will be awaiting Meta executives’ commentary about Threads during its quarterly earnings call next week, including to see if they offer any hints about whether ads may be rolled out on the app ahead of the crucial holiday shopping season.

    “They launched this in July,” Kessler said. “That should give them enough time to build out sufficient tools for holiday shopping season advertising.”


  • Pro-Chinese online influence campaign promoted protests in Washington, researchers say | CNN Politics





(CNN) —

    A Chinese marketing firm likely organized and promoted protests in Washington last year as part of a wide-ranging pro-Beijing influence campaign, according to new research.

    The Chinese firm also used a network of over 70 fake news websites to promote pro-China content in an example of the more aggressive efforts by pro-China operatives to influence US political debate in recent years, according to security firm Mandiant, which analyzed the activity.

    One of the protests was against a US government ban on goods produced in China’s Xinjiang region, where US officials have accused the Chinese government of systematic repression of the Uyghurs. The other protest was on the sidelines of a June conference on international religious freedom, Mandiant said.

One of the protests attracted only roughly a dozen people, but it showed the scope and ambition of the pro-China efforts.

    The hired protesters, who included self-proclaimed musicians and actors in the Washington, DC, area, apparently had no idea they were being enlisted in a pro-China influence campaign, the Mandiant researchers said.

    The campaign backed by the Chinese firm, Shanghai Haixun Technology Co., Ltd., is “intended to sow discord in US society,” Ryan Serabian, a senior analyst at Mandiant, told CNN.

In both cases, protesters carried placards and chanted slogans about racial discrimination and abortion in the US. Haixun, the Chinese firm, distributed videos of the protesters online to further the influence campaign, according to Mandiant.

    Shanghai Haixun Technology did not respond to a request for comment.

    Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, said he was unaware of the details of the research. “China has always adhered to non-interference in other countries’ internal affairs,” Liu said in an email to CNN.

    The Washington Post first reported on the Mandiant research.

    In the runup to the 2016 US presidential elections, Russian operatives used social media to organize protests on American soil as part of Moscow’s election interference, according to US intelligence officials. Such divisive tactics are no longer confined to the Russians, according to election security experts.

    During the 2022 US midterm elections, pro-China propagandists showed signs of engaging in “Russia-style influence activities” that stoke American divisions, FBI officials told reporters last year. The FBI pointed to Facebook’s shutdown of accounts originating in China that posted memes mocking President Joe Biden and Republican Sen. Marco Rubio of Florida.


  • Elizabeth Warren and Lindsey Graham want a new agency to regulate tech | CNN Business




Washington (CNN) —

    Two US senators are calling for the creation of a new federal agency to regulate tech companies such as Amazon, Google and Meta, in the latest push by members of Congress to clamp down on Big Tech.

    Under the proposal released Thursday by Sen. Elizabeth Warren, a Massachusetts Democrat, and Sen. Lindsey Graham, a South Carolina Republican, Congress would establish a new regulatory body with the power to sue platforms — or even force them to stop operating — in response to various potential harms to customers, rivals and the general public, including anticompetitive practices, violations of consumer privacy and the spread of harmful online content.

The new regulator would have broad jurisdiction, covering not just social media platforms or e-commerce but also the rapidly evolving field of artificial intelligence. The bill targets tech platforms including Amazon, Apple, Google, Meta, Microsoft, TikTok and Twitter, which is now officially known as X, a Senate aide told CNN, though the companies aren’t directly named in the legislation.

    “For too long, giant tech companies have exploited consumers’ data, invaded Americans’ privacy, threatened our national security, and stomped out competition in our economy,” Warren said in a statement. “This bipartisan bill would create a new tech regulator and it makes clear that reining in Big Tech platforms is a top priority on both sides of the aisle.”

    The push comes after years of stalled attempts to impose new rules on large tech companies and multiple failed efforts to block deals on antitrust grounds. Some AI companies have openly welcomed the creation of a special-purpose AI regulator. Warren and Graham’s legislation, the Digital Consumer Protection Commission Act, would be the first bipartisan bill of its kind, though a similar proposal by Sen. Michael Bennet, a Colorado Democrat, has been circulating since last year. Thursday’s proposal differs from Bennet’s bill, the aide said, in that it is in some ways more specific in its restrictions on the tech industry.

    The new commission would have far-reaching authority under the bill, with the ability to make regulations for the industry, investigate claims of wrongdoing and pursue enforcement actions. For the largest companies under its purview — defined by a mixture of user numbers, revenue figures, market capitalization and other metrics — the commission would issue operating licenses that could be revoked in the case of repeat offenses, according to a copy of the bill text reviewed by CNN.

    “Enough is enough. It’s time to rein in Big Tech,” Graham and Warren wrote in an op-ed in the New York Times Thursday. “And we can’t do it with a law that only nibbles around the edges of the problem. Piecemeal efforts to stop abusive and dangerous practices have failed.”

The legislation would also ban certain practices outright and direct the new agency to police any violations. For example, companies such as Google would not be able to prioritize their own apps and services at the top of search results or use noncompete agreements to block employees from going to work for a rival startup.

    Companies covered by the legislation would also face restrictions on how they can use Americans’ personal information for targeted advertising, in a privacy-focused move.

    And the legislation seeks to address the type of national security concerns that have been linked to TikTok by forcing “dominant” platforms to be either based in the United States or controlled by US citizens, and by restricting the companies’ ability to store data in certain countries.

    In unveiling the bill, the lawmakers drew parallels between their proposed US agency and other sector-specific regulators such as the Federal Communications Commission, which oversees the telecom and broadcast industries, and the Nuclear Regulatory Commission, which regulates nuclear power.

But the legislation could also lead to some areas of overlap — for example, with the Federal Trade Commission and the Department of Justice overseeing antitrust issues, as well as with the FTC on consumer protection issues. The Senate aide told CNN that the bill’s intent is to have the new tech-focused commission work together with the FTC and DOJ, and that the legislation ensures both existing agencies will still be able to conduct their own enforcement.


  • Opinion: Utah’s startling new rules for kids and social media | CNN



    Editor’s Note: Kara Alaimo, an associate professor of communication at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book, “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Reclaim It,” will be published by Alcove Press in 2024. The opinions expressed in this commentary are her own. Read more opinion on CNN.



(CNN) —

    Utah’s Republican governor, Spencer Cox, recently signed two bills into law that sharply restrict children’s use of social media platforms. Under the legislation, which takes effect next year, social media companies have to verify the ages of all users in the state, and children under age 18 have to get permission from their parents to have accounts.

    Parents will also be able to access their kids’ accounts, apps won’t be allowed to show children ads, and accounts for kids won’t be able to be used between 10:30 p.m. and 6:30 a.m. without parental permission.

It’s about time. Social networks in the United States can be incredibly dangerous for children, and parents can no longer protect our kids without the tools and safeguards this law provides. While Cox is correct that these measures won’t be “foolproof,” and what implementing them actually looks like remains an open question, one thing is clear: Congress should follow Utah’s lead and enact a similar law to protect every child in this country.

    One of the most important parts of Utah’s law is the requirement for social networks to verify the ages of users. Right now, most apps ask users their ages without requiring proof. Children can lie and say they’re older to avoid some of the features social media companies have created to protect kids — like TikTok’s new setting that asks 13- to 17-year-olds to enter their passwords after they’ve been online for an hour, as a prompt for them to consider whether they want to spend so much time on the app.

    While critics argue that age verification allows tech companies to collect even more data about users, let’s be real: These companies already have a terrifying amount of intimate information about us. To solve this problem, we need a separate (and comprehensive) data privacy law. But until that happens, this concern shouldn’t stop us from protecting kids.

    One of the key components of this legislation is allowing parents access to their kids’ accounts. By doing this, the law begins to help address one of the biggest dangers kids face online: toxic content. I’m talking about things like the 2,100 pieces of content about suicide, self-harm and depression that 14-year-old Molly Russell in the UK saved, shared or liked in the six months before she killed herself last year.

    I’m also talking about things like the blackout challenge — also called the pass-out or choking challenge — that has gone around social networks. In 2021, four children 12 or younger in four different states all died after trying it.

    “Check out their phones,” urged the father of one of these young victims. “It’s not about privacy — this is their lives.”

    Of course, there are legitimate privacy concerns to worry about here, and just as kids’ use of social media can be deadly, social apps can also be used in healthy ways. LGBTQ children who aren’t accepted in their families or communities, for example, can turn online for support that is good for their mental health. Now, their parents will potentially be able to see this content on their accounts.

    I hope groups that serve children who are questioning their gender and sexual identities and those that work with other vulnerable youth will adapt their online presences to try to serve as resources for educating parents about inclusivity and tolerance, too. This is also a reminder that vulnerable children need better access to mental health services like therapy — they’re way too young to be left to their own devices to seek out the support they need online.

    But, despite these very real privacy concerns, it’s simply too dangerous for parents not to know what our kids are seeing on social media. Just as parents and caregivers supervise our children offline and don’t allow them to go to bars or strip clubs, we have to ensure they don’t end up in unsafe spaces on social media.

The other huge challenge the Utah law helps parents overcome is the amount of time kids are spending on social media. A 2022 survey by Common Sense Media found that the average 8- to 12-year-old is on social media for 5 hours and 33 minutes per day, while the average 13- to 18-year-old spends 8 hours and 39 minutes every day. That’s more time than a full-time job.

    The American Academy of Pediatrics warns that lack of sleep is associated with serious harms in children — everything from injuries to depression, obesity and diabetes. So parents in the US need to have a way to make sure their kids aren’t up on TikTok all night (parents in China don’t have to worry about this because the Chinese version of TikTok doesn’t allow kids to stay on for more than 40 minutes and isn’t useable overnight).

Of course, Utah isn’t an authoritarian state like China, so it can’t just turn off kids’ phones. That’s where the new law comes in, requiring social networks to implement these settings. The tougher part of Utah’s law for tech companies to implement will be a provision requiring social apps to ensure they’re not designed to addict kids.

    Social networks are arguably addictive by nature, since they feed on our desires for connection and validation. But hopefully the threat of being sued by children who say they’ve been addicted or otherwise harmed by social networks — an outcome for which this law provides an avenue — will force tech companies to think carefully about how they build their algorithms and features like bottomless feeds that seem practically designed to keep users glued to their screens.

    TikTok and Snap didn’t respond to requests for comment from CNN about Utah’s law, while a representative for Meta, Facebook’s parent company, said the company shares the goal to keep Facebook safe for kids but also wants it to be accessible.

    Of course, if social networks had been more responsible, it probably wouldn’t have come to this. But in the US, tech companies have taken advantage of a lack of rules to build platforms that can be dangerous for our kids.

    States are finally saying no more. In addition to Utah’s measures, California passed a sweeping online safety law last year. Connecticut, Ohio and Arkansas are also considering laws to protect kids by regulating social media. A bill introduced in Texas wouldn’t allow kids to use social media at all.

    There’s nothing innocent about the experiences many kids are having on social media. This law will help Utah’s parents protect their kids. Parents in other states need the same support. Now, it’s time for the federal government to step up and ensure children throughout the country have the same protections as Utah kids.

Suicide & Crisis Lifeline: Call or text 988. The Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you and your loved ones, and best practices for professionals in the United States. En Español: Linea de Prevencion del Suicidio y Crisis: 1-888-628-9454.


  • Micron Technology: China probes US chip maker for cybersecurity risks as tech tension escalates | CNN Business



    Hong Kong
    CNN
     — 

    China has launched a cybersecurity probe into Micron Technology, one of America’s largest memory chip makers, in apparent retaliation after US allies in Asia and Europe announced new restrictions on the sale of key technology to Beijing.

    The Cyberspace Administration of China (CAC) will review products sold by Micron in the country, according to a statement by the watchdog late on Friday.

    The move is aimed at “ensuring the security of key information infrastructure supply chains, preventing cybersecurity risks caused by hidden product problems, and maintaining national security,” it noted.

    It came on the same day that Japan, a US ally, said it would restrict the export of advanced chip manufacturing equipment to countries including China, following similar moves by the United States and the Netherlands.

    Washington and its allies have announced curbs on China’s semiconductor industry, which strike at the heart of Beijing’s bid to become a tech superpower.

    Last month, the Netherlands also unveiled new restrictions on overseas sales of semiconductor technology, citing the need to protect national security. In October, the United States banned Chinese companies from buying advanced chips and chipmaking equipment without a license.

    Micron told CNN it was aware of the review.

    “We are in communication with the CAC and are cooperating fully,” it said, adding that it stands by the security of its products.

    Shares in Micron sank 4.4% on Wall Street Friday following the news, the biggest drop in more than three months. Micron derives more than 10% of its revenue from China.

    In an earlier filing, the Idaho-based company had warned of such risks.

    “The Chinese government may restrict us from participating in the China market or may prevent us from competing effectively with Chinese companies,” it said last week.

    China has strongly criticized restrictions on tech exports, saying last month it “firmly opposes” such measures.

    In efforts to boost growth and job creation, Beijing is seeking to woo foreign investments as it grapples with mounting economic challenges. The newly minted premier Li Qiang and several top economic officials have been rolling out the welcome wagon for global CEOs and promising they would “provide a good environment and services.”

    But Beijing has also exerted growing pressure on foreign companies to bring them into line with its agenda.

    Last month, authorities closed the Beijing office of Mintz Group, a US corporate intelligence firm, and detained five local staff.

    Days earlier, they suspended Deloitte’s operations in Beijing for three months and imposed a fine of $31 million over alleged lapses in its work auditing a state-owned distressed debt manager.


  • Academic researchers blast Twitter’s data paywall as ‘outrageously expensive’ | CNN Business



    Washington
    CNN
     — 

    After Twitter announced in February it would begin charging third parties to access its platform data, academic researchers warned that the vaguely worded plan could threaten important studies about how misinformation, harassment and other malicious activity spreads online.

    Now, as Twitter has released more pricing information, many of those same academics say their fears were well-founded, complaining that Twitter’s new tiered paywall not only charges “outrageously expensive” prices but also restricts the amount of accessible data so heavily that what little researchers can see, even on the most expensive tiers, is not useful for rigorous study.

    Twitter, which has cut much of its public relations team under CEO Elon Musk, automatically responded to a request for comment with an email containing a poop emoji.

    In an open letter this week, the Coalition for Independent Technology Research — a group representing dozens of researchers and civil society organizations — said free and open access to Twitter data has historically enabled systematic, large-scale research on social media’s role in public health initiatives, foreign propaganda, political discourse, and even the bots and spam that Musk has blamed for ruining Twitter.

    But Twitter’s new tiered access system undercuts all of that, the researchers said. The company’s pricing that launched last week, starting at $100 per month for a “basic” amount of data, does not provide nearly enough volume for users at the low end, while the high end “ranges from $42,000 to $210,000 per month [and] is unaffordable for researchers,” the letter said.

    The new basic tier limits users to reading just 10,000 tweets per month. That represents 0.3% of what researchers used to be able to collect in a single day, the letter said.

    Even under the most expensive “enterprise” tier costing upwards of $2.5 million a year, Twitter is offering only a fraction of the tweets it used to, the letter continued. Before the change, researchers could pay about $500 a month for the ability to access up to 10% of the roughly 1 billion tweets a month that flow across Twitter’s platform.

    Now, though, “the most expensive Enterprise tier would cut that by 80% at about 400 times the price,” the researchers’ letter said.

    Asking researchers to pay orders of magnitude more for a fifth of the access they once had represents a barrier to accountability and transparency, the letter added.
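    The letter’s numbers are internally consistent, and that can be verified directly. The sketch below uses only the figures quoted in this article (roughly 1 billion tweets a month, a ~$500/month academic tier covering 10% of them, and the new tiers described above); none of these values are drawn from Twitter’s own pricing documentation.

```python
# Sanity-checking the arithmetic in the researchers' letter.
# Every figure below comes from the article itself, not from Twitter's documentation.

TWEETS_PER_MONTH = 1_000_000_000           # roughly 1 billion tweets flow across Twitter monthly

# Old academic access: about $500/month for up to 10% of the monthly volume.
old_tweets = TWEETS_PER_MONTH // 10        # 100M tweets/month
old_price = 500                            # dollars/month

# New basic tier: 10,000 tweets/month, which the letter pegs at 0.3% of
# what researchers previously could collect in a single day.
basic_cap = 10_000
old_daily = old_tweets // 30               # ~3.3M tweets/day under the old tier
print(round(basic_cap / old_daily, 3))     # 0.003, i.e. 0.3%

# New enterprise tier, per the letter: an 80% cut in volume (one-fifth of the
# old allotment) at roughly 400x the price.
new_tweets = old_tweets // 5               # 20M tweets/month
new_price = old_price * 400                # $200,000/month, within the cited $42k-$210k range

# Effective cost per tweet; integer arithmetic avoids floating-point error.
increase = (new_price * old_tweets) // (old_price * new_tweets)
print(increase)                            # 2000
```

    Under those assumptions, the effective cost per tweet rises roughly 2,000-fold between the old academic tier and the new enterprise tier, which is the gap the researchers are objecting to.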

    “Under the new pricing plans, studying the communications and interactions of even a small population—such as the 535 Members of the U.S. Congress or the 705 Members of the European Parliament—will be unfeasible,” the letter said. “The new pricing plans will also end at least 76 long-term efforts, including dashboards, tools, or code packages that support other researchers, journalists, first-responders, educators, and Twitter users.”


  • FTC chair Lina Khan warns AI could ‘turbocharge’ fraud and scams | CNN Business



    Washington
    CNN
     — 

    Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

    Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

    In recent months, a new crop of AI tools has gained attention for the ability to generate convincing emails, stories and essays as well as images, audio and videos. While these tools have the potential to change the way people work and create, some have also raised concerns about how they could be used to deceive by impersonating individuals.

    Even as policymakers across the federal government debate how to promote specific AI rules, citing concerns about possible algorithmic discrimination and privacy issues, companies could still face FTC investigations today under a range of statutes that have been on the books for years, Khan and her fellow commissioners said.

    “Throughout the FTC’s history we have had to adapt our enforcement to changing technology,” said FTC Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

    FTC Commissioner Alvaro Bedoya said companies cannot escape liability simply by claiming that their algorithms are a black box.

    “Our staff has been consistently saying our unfair and deceptive practices authority applies, our civil rights laws, fair credit, Equal Credit Opportunity Act, those apply,” said Bedoya. “There is law, and companies will need to abide by it.”

    The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.


  • Snapchat rolls out chatbot powered by ChatGPT to all users | CNN Business




    CNN
     — 

    Snapchat is about to give new meaning to the “chat” part of its name.

    Snap, the company behind Snapchat, announced on Wednesday that its customizable My AI chatbot is now accessible to all users within the app. The feature, which is powered by the viral AI chatbot ChatGPT, was previously only available to paying Snapchat+ subscribers.

    The tool offers recommendations, answers questions, helps users make plans and can write a haiku in seconds, according to the company. It can be brought into conversation with friends when it’s mentioned with “@MyAI.” Users can also give it a name and design a custom Bitmoji avatar for it to personalize it more.

    The move comes more than a month after ChatGPT creator OpenAI opened up access to its chatbot to third-party businesses. Snap, Instacart and tutor app Quizlet were among the early partners experimenting with adding ChatGPT.

    Since its public release in November 2022, ChatGPT has stunned many users with its impressive ability to generate original essays, stories and song lyrics in response to user prompts. The initial wave of attention on the tool helped renew an arms race among tech companies to develop and deploy similar AI tools in their products.

    The initial batch of companies tapping into ChatGPT’s functionality each have slightly different visions for how to incorporate it. Taken together, however, these services may test just how useful AI chatbots can really be in our everyday life and how much people want to interact with them for customer service and other uses across their favorite apps.

    Adding ChatGPT features also may come with some risks. The tool, which is trained on vast troves of data online, can spread inaccurate information and has the potential to respond to users in ways they might find inappropriate.

    In a blog post on Wednesday, Snap acknowledged “My AI is far from perfect but we’ve made a lot of progress.”

    It said, for example, that about 99.5% of My AI responses conform to its community guidelines. Snap said it has made changes to “help protect against responses that could be inappropriate or harmful.” The company also said it has added moderation technology and incorporated the new feature into its in-app parental tools.

    “We will continue to use these early learnings to make AI a more safe, fun, and useful experience, and we’re eager to hear your thoughts,” the company said.


  • Snapchat’s new AI chatbot is already raising alarms among teens and parents | CNN Business




    CNN
     — 

    Just hours after Snapchat rolled out its My AI chatbot to all users last week, Lyndsi Lee, a mother from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

    “It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about how My AI presents itself to young users like her daughter on Snapchat.

    The feature is powered by the viral AI chatbot tool ChatGPT – and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

    The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear you’re talking to a computer.

    “I don’t think I’m prepared to know how to teach my kid how to emotionally separate humans and machines when they essentially look the same from her point of view,” Lee said. “I just think there is a really clear line [Snapchat] is crossing.”

    The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

    While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, and particularly in products like Snapchat, whose users skew younger.

    Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party businesses, with many more expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to reckon with questions that may have seemed theoretical only months ago.

    In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was released to Snap’s subscription customers, Democratic Sen. Michael Bennet raised concerns about the interactions the chatbot was having with younger users. In particular, he cited reports that it can provide kids with suggestions for how to lie to their parents.

    “These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

    In a blog post last week, the company said: “My AI is far from perfect but we’ve made a lot of progress.”

    In the days since its formal launch, Snapchat users have been vocal about their concerns. One user called his interaction “terrifying” after the chatbot claimed not to know his location; once he lightened the conversation, he said, it accurately revealed that he lived in Colorado.

    In another TikTok video with more than 1.5 million views, a user named Ariel recorded a song with an intro, chorus and piano chords written by My AI about what it’s like to be a chatbot. When she sent the recorded song back, she said the chatbot denied its involvement with the reply: “I’m sorry, but as an AI language model, I don’t write songs.” Ariel called the exchange “creepy.”

    Other users shared concerns about how the tool understands, interacts with and collects information from photos. “I snapped a picture … and it said ‘nice shoes’ and asked who the people [were] in the photo,” a Snapchat user wrote on Facebook.

    Snapchat told CNN it continues to improve My AI based on community feedback and is working to establish more guardrails to keep its users safe. The company also said that similar to its other tools, users don’t have to interact with My AI if they don’t want to.

    It’s not possible to remove My AI from chat feeds, however, unless a user subscribes to its monthly premium service, Snapchat+. Some teens say they have opted to pay the $3.99 Snapchat+ fee to turn off the tool before promptly canceling the service.

    But not all users dislike the feature.

    One user wrote on Facebook that she’s been asking My AI for homework help. “It gets all of the questions right.” Another noted she’s leaned on it for comfort and advice. “I love my little pocket, bestie!” she wrote. “You can change the Bitmoji [avatar] for it and surprisingly it offers really great advice to some real life situations. … I love the support it gives.”

    ChatGPT, which is trained on vast troves of data online, has previously come under fire for spreading inaccurate information, responding to users in ways they might find inappropriate and enabling students to cheat. But Snapchat’s integration of the tool risks heightening some of these issues, and adding new ones.

    Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenagers could interact with Snapchat’s tool. There is also concern about chatbots giving mental health advice, because AI tools can reinforce a user’s confirmation bias, making it easier to seek out interactions that confirm unhelpful beliefs.

    “If a teen is in a negative mood and does not have the awareness or desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions like these can erode a teen’s sense of worth, despite their knowing that they are really talking to a bot. In an emotional state of mind, it becomes less possible for an individual to consider this type of logic.”

    For now, the onus is on parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to show up in more popular apps and services.

    Sinead Bovell, the founder of WAYE, a startup that helps prepare youth for a future with advanced technologies, said parents need to make it very clear that “chatbots are not your friend.”

    “They’re also not your therapists or a trusted adviser, and anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say,” she said.

    “Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they would with a friend – even though from a user design perspective, the chatbot exists in the same corner of Snapchat.”

    She added that federal regulation requiring companies to abide by specific protocols is also needed to keep up with the rapid pace of AI advancement.


  • UK citizen extradited to US pleads guilty to 2020 Twitter hack | CNN Business




    Reuters
     — 

    A citizen of the United Kingdom who was extradited to New York from Spain last month has pleaded guilty to cyberstalking and computer hacking schemes, including the 2020 hack of the social media site Twitter, the U.S. Justice Department said on Tuesday.

    Joseph James O’Connor, 23, was charged in both North Dakota and New York. The North Dakota case was transferred to the U.S. District Court for the Southern District of New York.

    O’Connor pleaded guilty to charges including conspiring to commit computer intrusions, to commit wire fraud and to commit money laundering.

    O’Connor, who was extradited to the U.S. on April 26, will also forfeit more than $794,000 and pay restitution to victims, prosecutors said. He faces a maximum of 77 years in prison at sentencing on June 23.

    “O’Connor’s criminal activities were flagrant and malicious, and his conduct impacted multiple people’s lives. He harassed, threatened, and extorted his victims, causing substantial emotional harm,” Assistant Attorney General Kenneth Polite said in a statement.

    Prosecutors said the schemes included gaining unauthorized access to social media accounts on Twitter in July 2020 as well as a TikTok account in August 2020. Along with his co-conspirators, O’Connor stole at least $794,000 worth of cryptocurrency.

    The July 2020 Twitter attack hijacked a variety of verified accounts, including those of then-Democratic presidential candidate Joe Biden and Tesla CEO Elon Musk, who now owns Twitter.

    The accounts of former President Barack Obama, reality TV star Kim Kardashian, Bill Gates, Warren Buffett, Benjamin Netanyahu, Jeff Bezos, Michael Bloomberg and Kanye West were also hit.

    The alleged hacker used the accounts to solicit digital currency, prompting Twitter to prevent some verified accounts from publishing messages for several hours until security could be restored.


  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business




    CNN
     — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.


  • Biden picks Air Force general to lead NSA and Cyber Command | CNN Politics




    CNN
     — 

    President Joe Biden has nominated an Air Force general to head the nation’s powerful electronic spying agency and the US military command that conducts offensive cyber operations – a crucial position as the US continues to battle Russia, China and other foes in cyberspace.

    Lt. Gen. Timothy Haugh, who has served for years in senior US military cyber positions, is Biden’s choice to replace outgoing Army Gen. Paul Nakasone as head of the National Security Agency and US Cyber Command, an Air Force official confirmed to CNN.

    Politico first reported on Haugh’s nomination.

    The White House did not respond to a request for comment.

    Haugh’s nomination could face a roadblock in the Senate after Republican Sen. Tommy Tuberville of Alabama put a hold on senior military nominations because he objects to the department’s abortion travel policy.

    Haugh is currently deputy commander of US Cyber Command, which comprises thousands of US military personnel who conduct offensive and defensive cyber operations to protect US critical infrastructure. Officials from the command traveled to Ukraine in late 2021 to prepare Kyiv for an onslaught of Russian cyberattacks that accompanied the full-scale Russian invasion.

    The command and NSA also have taken an increasingly active role in helping defend American elections from foreign interference under Nakasone’s leadership over the last five years.

    During the 2020 election, Iranian hackers accessed a US municipal website for reporting unofficial election results and Cyber Command kicked the hackers off the network out of concern that they might post fake results on the website, a senior US military official revealed last month.

    Haugh’s nomination signals a continued emphasis on election security work at Fort Meade, the sprawling military base in Maryland where the NSA and Cyber Command are housed. As a senior US military cyber official, Haugh has been involved in election security discussions in recent midterm and general elections.


  • Russian-speaking cyber gang claims credit for hack of BBC and British Airways employee data | CNN Business




    CNN
     — 

    A group of Russian-speaking cyber criminals has claimed credit for a sweeping hack that has compromised employee data at the BBC and British Airways and left US and UK cybersecurity officials scrambling to respond.

    The hackers, known as the CLOP ransomware gang, say they have “information on hundreds of companies.” They’ve given victims until June 14 to discuss a ransom before they start publishing data from companies they claim to have hacked, according to a dark web posting seen by CNN.

    The extortion threat adds urgency to an already high-stakes security incident that has forced responses from tech firms, corporations and government agencies from the US to Canada and the UK.

    The compromise of employee data at the BBC and British Airways came via a breach of a human resources firm, Zellis, that both organizations use.

    “We are aware of a data breach at our third-party supplier, Zellis, and are working closely with them as they urgently investigate the extent of the breach,” a BBC spokesperson told CNN Wednesday. The spokesperson declined to comment on the hackers’ extortion threat.

    A British Airways spokesperson said the company had “notified those colleagues whose personal information has been compromised to provide support and advice.”

    The hackers — a well-known group whose favored malware emerged in 2019 — last week began exploiting a new flaw in a widely used file-transfer software known as MOVEit, appearing to target as many exposed organizations as they could. The opportunistic nature of the hack left a broad swath of organizations vulnerable to extortion.

    Numerous US state government agencies use the MOVEit software, but it’s unclear how many agencies, if any, have been compromised.

    The US Cybersecurity and Infrastructure Security Agency has ordered all federal civilian agencies to update the MOVEit software in light of the hack. No federal agencies have been confirmed as victims, a CISA spokesperson told CNN.

    Together with the Federal Bureau of Investigation, CISA also released advice on dealing with the CLOP hack. Progress, the US firm that owns the MOVEit software, has also urged victims to update their software packages and has issued security advice.

    CISA Executive Director for Cybersecurity Eric Goldstein said in a statement: “CISA remains in close contact with Progress Software and our partners at the FBI to understand prevalence within federal agencies and critical infrastructure.”

    But the effort to respond to the cyber attack is very much ongoing.

    The CLOP hackers are “overwhelmed with the number of victims,” according to Charles Carmakal, chief technology officer at Mandiant Consulting, a Google-owned firm that has investigated the hack. “Instead of directly reaching out to victims over email or telephone calls like in prior campaigns, they are asking victims to reach out to them via email,” he said on LinkedIn Tuesday night.

    Allan Liska, a ransomware expert at cybersecurity firm Recorded Future, also told CNN: “Unfortunately, the sensitive nature of the data often stored on MOVEit servers means there will likely be real consequences stemming from the [data theft] but it will be months before we understand the full fallout from this attack.”


  • The largest newspaper publisher in the US sues Google, alleging online ad monopoly | CNN Business




    CNN
     — 

    Gannett, the largest newspaper publisher in the United States, is suing Google, alleging the tech giant holds a monopoly over the digital ad market.

    The publisher of USA Today and more than 200 local publications filed the lawsuit in a New York federal court on Tuesday and is seeking unspecified damages. Gannett argues in court documents that Google and its parent company, Alphabet, control how publishers buy and sell ads online.

    “The result is dramatically less revenue for publishers and Google’s ad-tech rivals, while Google enjoys exorbitant monopoly profits,” the lawsuit states.

    Google controls about a quarter of the US digital advertising market, with Meta, Amazon and TikTok combining for another third, according to eMarketer. News publishers and other websites combine for the other roughly 40%. Big Tech’s share of the market is beginning to erode slightly, but Google remains by far the largest individual player.

    That means publishers often rely at least in part on Google’s advertising technology to support their operations: Gannett says Google controls 90% of the ad market for publishers.

    Michael Reed, Gannett’s chairman and CEO, said in a statement Tuesday that Google’s dominance in the online advertising industry has come “at the expense of publishers, readers and everyone else.”

    “Digital advertising is the lifeblood of the online economy,” Reed added. “Without free and fair competition for digital ad space, publishers cannot invest in their newsrooms.”

    Dan Taylor, Google’s vice president of global ads, told CNN that the claims in the suit “are simply wrong.”

    “Publishers have many options to choose from when it comes to using advertising technology to monetize – in fact, Gannett uses dozens of competing ad services, including Google Ad Manager,” Taylor said in a statement Tuesday. “And when publishers choose to use Google tools, they keep the vast majority of revenue.”

    He continued: “We’ll show the court how our advertising products benefit publishers and help them fund their content online.”

    The legal action from Gannett comes as Google faces a growing number of antitrust complaints in the United States and the European Union over its advertising business, which remains its central moneymaker.

    EU officials said last week that Google’s advertising business should be broken up, alleging that the tech giant’s involvement in multiple parts of the digital advertising supply chain creates “inherent conflicts of interest” that risk harming competition.

    Earlier this year, the Justice Department and eight states sued Google, accusing the company of harming competition with its dominance in the online advertising market and similarly calling for it to be broken up.


  • Dylan Mulvaney says Bud Light’s backlash response was ‘worse than not hiring a trans person at all’ | CNN Business


    New York
    CNN
     — 

    Dylan Mulvaney on Thursday broke her silence about the fallout after the trans influencer made two Instagram posts sponsored by Bud Light earlier this year.

    Bud Light’s sponsorship of an April 1 Instagram post by Mulvaney set off a firestorm of anti-trans backlash and calls for a boycott. Mulvaney herself also faced a wave of hate and violent threats.

    Now, in a video posted to Instagram Thursday, Mulvaney is calling on Bud Light and other companies not only to work with trans and other queer influencers, but to support them through the process, even as trans rights are under fire across the country and corporations face anti-LGBTQ+ campaigns.

    Mulvaney said she has “been scared to leave my house, and I have been ridiculed in public, I have been followed,” and she criticized Bud Light for not standing by her and the partnership. She said the company never reached out to her in the wake of the backlash.

    “For a company to hire a trans person and then not publicly stand by them is worse in my opinion than not hiring a trans person at all because it gives customers permission to be as transphobic and hateful as they want,” Mulvaney said. “And the hate doesn’t end with me, it has serious and grave consequences for the rest of our community.”

    When the backlash ignited in April, Bud Light first responded with a straightforward explanation of its relationship with social media influencers like Mulvaney. But later it released a vague statement from the CEO that failed to offer support for Mulvaney or the trans community. Bud Light sales dropped in the ensuing weeks, the company lost its top rating from a major LGBTQ+ nonprofit and it placed two marketing executives on leave.

    The controversy over the sponsored posts came as trans rights are under attack. Over 400 anti-LGBTQ+ bills were introduced in state legislatures this year through April 3, according to the American Civil Liberties Union, including ones restricting access to gender-affirming care for trans youth. Transgender people are more than four times as likely as cisgender people to be victims of violent crime, according to a study from the UCLA School of Law.

    The Bud Light backlash also coincided with anti-LGBTQ+ campaigns against other big brands, including Target.

    Mulvaney’s statement followed a Wednesday appearance by Brendan Whitworth, CEO of Bud Light owner Anheuser-Busch, on CBS Mornings, in which he repeated the company’s recent statements about wanting to “focus on what we do best, which is brewing great beer for everyone,” and did not directly answer a question about whether the campaign was a mistake.

    “I think the conversation surrounding Bud Light has moved away from beer, and the conversation has become divisive, and Bud Light really does not belong there, Bud Light should be about bringing people together,” Whitworth said.

    In her video, Mulvaney appeared to address that sentiment, saying, “supporting trans people, it shouldn’t be political.”

    “There should be nothing controversial or divisive about working with us, and I know it’s possible because I’ve worked with some fantastic companies who care,” Mulvaney said. “But caring about the LGBTQ+ community requires a lot more than just a donation somewhere during Pride month.”

    She added: “We’re customers, too, I know a lot of trans and queer people who love beer.”

    In a statement responding to Mulvaney’s video, an Anheuser-Busch spokesperson told CNN on Thursday that, “we remain committed to the programs and partnerships we have forged over decades with organizations across a number of communities, including those in the LGBTQ+ community. The privacy and safety of our employees and our partners is always our top priority. As we move forward, we will focus on what we do best — brewing great beer for everyone and earning our place in moments that matter to our consumers.”

    –CNN’s Danielle Wiener-Bronner contributed to this report.


  • Meta officially launches Twitter rival Threads | CNN Business


    CNN
     — 

    Facebook has tried to compete with Twitter in numerous ways over the years, including copying signature Twitter features such as hashtags and trending topics. But now Facebook’s parent company is taking perhaps its biggest swipe at Twitter yet.

    Meta on Wednesday officially launched a new app called Threads, which is intended to offer a space for real-time conversations online, a function that has long been Twitter’s core selling point.

    The app appears to have many similarities to Twitter, from the layout to the product description. The listing, which first appeared earlier this week as a teaser, emphasizes its potential to build a following and connect with like-minded people.

    “The vision for Threads is to create an open and friendly public space for conversation,” Meta CEO Mark Zuckerberg said in a Threads post following the launch. “We hope to take what Instagram does best and create a new experience around text, ideas, and discussing what’s on your mind.”

    Zuckerberg said on his verified Threads account that the app passed 2 million sign-ups in the first two hours. Later on Wednesday, he wrote that Threads “passed 5 million sign ups in the first four hours.”

    He also responded to posts and shared his thoughts on whether Threads will ever be bigger than Twitter.

    “It’ll take some time, but I think there should be a public conversations app with 1 billion+ people on it. Twitter has had the opportunity to do this but hasn’t nailed it,” Zuckerberg wrote on Threads. “Hopefully we will.”

    The app’s listing describes it as a place where communities can come together to discuss everything from the topics they care about today to what’s trending.

    “Whatever it is you’re interested in, you can follow and connect directly with your favorite creators and others who love the same things — or build a loyal following of your own to share your ideas, opinions and creativity with the world,” it reads.

    Meta said messages posted to Threads will have a 500-character limit. The company said it was bringing the app to 100 countries via Apple’s iOS and Android.

    After downloading the app, users are asked to link up their Instagram page, customize their profile and follow the same accounts they already follow on Instagram. The look is similar to Twitter, with a familiar layout, a text-based feed and the ability to repost and quote other Threads posts. But it also blends Instagram’s existing aesthetic and offers the ability to share posts from Threads directly to Instagram Stories. Verified Instagram accounts are automatically verified on Threads, and Threads accounts can be listed as public or private.

    The new app joins a growing list of Twitter rivals and could pose the biggest threat to Twitter of the bunch, given Meta’s vast resources and its massive audience.

    It also comes amid heightened turmoil at Twitter, which experienced an outage over the weekend, followed by an announcement that the site had imposed temporary limits on how many tweets its users are able to read while using the app.

    A photo illustration shows the Threads app displayed on a mobile phone. Threads, Meta’s latest app, became available on July 6, 2023, as a direct rival to Twitter, which has faced a number of issues since the controversial takeover by entrepreneur Elon Musk.

    Twitter owner Elon Musk said these restrictions had been applied “to address extreme levels of data scraping and system manipulation.” Commenting on the launch of Threads Monday, he tweeted: “Thank goodness they’re so sanely run,” parroting reported comments by Meta executives that appeared to take a jab at Musk’s erratic behavior.

    Since acquiring Twitter in October, Musk has turned the social media platform on its head, alienating advertisers and some of its highest-profile users. He is now looking for ways to return the platform to growth. Twitter announced Monday that users would soon need to pay for TweetDeck, a tool that allows people to organize and easily monitor the accounts they follow.

    Twitter is also attempting to encroach on Meta’s domain. In May, Twitter added encrypted messaging and said calls would follow, developments that could allow the platform to compete with Facebook Messenger and WhatsApp, also owned by Meta.

    The escalating competition between the two companies only appears to have intensified the personal rivalry between Musk and Zuckerberg.

    In response to a tweet last month from a user about Threads, Musk wrote: “I’m sure Earth can’t wait to be exclusively under Zuck’s thumb with no other options.” In a follow-up tweet, Musk teased the idea of a cage match with Zuckerberg.

    Zuckerberg fired back in an Instagram story by posting a screenshot of Musk’s tweet overlaid with the caption: “Send Me Location.”

    And after the Threads app debuted, Zuckerberg tweeted an image of two cartoon Spider-Men pointing at each other.

    – CNN’s Hanna Ziady contributed to this report.


  • Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business


    Washington
    CNN
     — 

    Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property challenge to AI development.

    The list of more than 8,000 authors includes some of the world’s most celebrated writers, including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, James Patterson, Jodi Picoult and Philip Pullman.

    In an open letter they signed, posted by the Authors Guild Tuesday, the writers accused AI companies of unfairly profiting from their work.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter said. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    Tuesday’s letter was addressed to the CEOs of ChatGPT-maker OpenAI, Facebook-parent Meta, Google, Stability AI, IBM and Microsoft. Most of the companies didn’t immediately respond to a request for comment. Meta, Microsoft and Stability AI declined to comment.

    Much of the tech industry is now working to develop AI tools that can generate compelling images and written work in response to user prompts. These tools are built on large language models, which are trained on vast troves of information online. But recently, there has been growing pressure on tech companies over alleged intellectual property violations with this training process.

    This month, comedian Sarah Silverman and two authors filed a copyright lawsuit against OpenAI and Meta, while a proposed class-action suit accused Google of “stealing everything ever created and shared on the internet by hundreds of millions of Americans,” including copyrighted content. Google has called the lawsuit “baseless,” saying it has been upfront for years that it uses public data to train its algorithms. OpenAI did not previously respond to a request for comment on the suit.

    In addition to demanding compensation “for the past and ongoing use of our works in your generative AI programs,” the thousands of authors who signed the letter this week called on AI companies to seek permission before using the copyrighted material. They also urged the companies to pay writers when their work is featured in the results of generative AI, “whether or not the outputs are infringing under current law.”

    The letter also cites this year’s Supreme Court holding in Warhol v. Goldsmith, which found that the late artist Andy Warhol infringed on a photographer’s copyright when he created a series of silk screens based on a photograph of the late singer Prince. The court ruled that Warhol did not sufficiently “transform” the underlying photograph so as to avoid copyright infringement.

    “The high commerciality of your use argues against fair use,” the authors wrote to the AI companies.

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needs to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.

    – CNN’s Catherine Thorbecke contributed to this report.


  • OpenAI’s Sam Altman launches Worldcoin crypto project | CNN Business

    Worldcoin, a cryptocurrency project founded by OpenAI CEO Sam Altman, launched on Monday.

    The project’s core offering is its World ID, which the company describes as a “digital passport” to prove that its holder is a real human, not an AI bot. To get a World ID, a customer signs up for an in-person iris scan using Worldcoin’s “orb,” a silver ball approximately the size of a bowling ball. Once the orb’s iris scan verifies the person is a real human, it creates a World ID.

    The company behind Worldcoin is San Francisco and Berlin-based Tools for Humanity.

    The project has 2 million users from its beta period, and with Monday’s launch, Worldcoin is scaling up “orbing” operations to 35 cities in 20 countries. As an enticement, those who sign up in certain countries will receive Worldcoin’s cryptocurrency token WLD.

    WLD’s price rose in early trading on Monday. On the world’s largest exchange, Binance, it hit a peak of $5.29 and, as of 1000 GMT, was trading at $2.49, up from a starting price of $0.15, on $25.1 million of trading volume, according to Binance’s website.
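    Those figures imply large multiples of the $0.15 starting price. A quick back-of-the-envelope check of the arithmetic (all numbers are taken from the article; this is purely illustrative):

```python
# WLD prices reported in the article: starting price, Binance peak,
# and the price at 1000 GMT on launch day.
start, peak, later = 0.15, 5.29, 2.49

peak_multiple = peak / start    # multiple of the starting price at the peak
later_multiple = later / start  # multiple of the starting price at 1000 GMT

print(f"peak: {peak_multiple:.1f}x the starting price")      # ~35.3x
print(f"1000 GMT: {later_multiple:.1f}x the starting price")  # ~16.6x
```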

    Blockchains can store the World IDs in a way that preserves privacy and can’t be controlled or shut down by any single entity, co-founder Alex Blania told Reuters.

    The project says World IDs will be necessary in the age of generative AI chatbots like ChatGPT, which produce remarkably humanlike language. World IDs could be used to tell the difference between real people and AI bots online.

    Altman told Reuters Worldcoin also can help address how the economy will be reshaped by generative AI.

    “People will be supercharged by AI, which will have massive economic implications,” he said.

    One example Altman likes is universal basic income, or UBI, a social benefits program usually run by governments where every individual is entitled to payments. Because AI “will do more and more of the work that people now do,” Altman believes UBI can help to combat income inequality. Since only real people can have World IDs, it could be used to reduce fraud when deploying UBI.
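    The fraud-reduction mechanism Altman sketches (at most one payment per verified person) amounts to deduplicating claims on a unique identifier. A minimal toy sketch of that idea, where the names and data structures are invented for illustration and are not Worldcoin's actual system:

```python
# Toy model: pay each unique ID at most once per disbursement cycle.
# World IDs are modeled here as opaque strings; a real system would
# rely on cryptographic credentials rather than plain identifiers.

paid: set[str] = set()  # IDs already paid in the current cycle

def disburse(world_id: str, amount: float) -> bool:
    """Pay `amount` once per ID; reject duplicate claims."""
    if world_id in paid:
        return False  # this ID has already been paid this cycle
    paid.add(world_id)
    # ... transfer `amount` to the wallet linked to `world_id` ...
    return True

assert disburse("id-alice", 100.0) is True   # first claim succeeds
assert disburse("id-alice", 100.0) is False  # duplicate claim rejected
```

    Because each World ID is meant to correspond to one verified person, a uniqueness check like this would stop one person from claiming payments under many identities.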

    Altman said he thought a world with UBI would be “very far in the future” and he did not have a clear idea of what entity could dole out money, but that Worldcoin lays groundwork for it to become a reality.

    “We think that we need to start experimenting with things so we can figure out what to do,” he said.
