ReportWire

Tag: iab-computing

  • Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman set to testify before Congress | CNN Business

    CNN — 

    OpenAI CEO Sam Altman is set to testify before a Senate Judiciary subcommittee on Tuesday after the viral success of ChatGPT, his company’s chatbot tool, renewed an arms race over artificial intelligence and sparked concerns from some lawmakers about the risks posed by the technology.

    “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.”

    He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”

    A growing list of tech companies have deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech’s biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

    Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.”

    Montgomery is expected to urge Congress to adopt a “precision regulation” approach for AI based on specific use cases, and to suggest that lawmakers push companies to test how their systems handle bias and other concerns – and disclose those results.

    As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and texts in response to user prompts.

    Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House’s efforts to emphasize the importance of ethical and responsible AI development.

    In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. He and his company have pledged to move forward responsibly.

    Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Altman has said he agreed with parts of the letter. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”

    – CNN’s Jennifer Korn contributed to this report.

  • Twitter loses its top content moderation official at a key moment | CNN Business

    CNN — 

    Twitter has lost its top content moderation official just weeks before the company is set to undergo a regulatory stress test by European Union officials focused on its handling of user content, in the latest sign of turbulence at the company under owner Elon Musk.

    On Thursday, Twitter’s head of trust and safety, Ella Irwin, told Reuters she had left the company. Irwin has not addressed the reasons for her departure, but the move coincided with the company’s content moderation dispute with the Daily Wire, a conservative outlet.

    The dispute focused on the forthcoming release of a self-described documentary, “What Is a Woman?” that Twitter warned would be labeled as “hateful content” due to two instances of misgendering, according to Daily Wire CEO Jeremy Boreing. Musk intervened later Thursday, calling the content moderation decision “a mistake by many people at Twitter” and saying the video would be “definitely allowed.”

    Twitter did not immediately respond to a request for comment on Irwin’s departure.

    But the sudden and unexpected vacancy at Twitter could leave the company without a key content moderation official at a sensitive moment. Later this month at Twitter’s San Francisco offices, EU officials are set to review whether the platform is likely to be compliant with a sweeping content moderation law that could eventually trigger millions of dollars in fines for Twitter if it’s found to be noncompliant.

    That law, known as the Digital Services Act (DSA), will require so-called “very large online platforms” including Twitter to abide by tough content moderation standards by as early as August. It’s far from clear whether the company can meet those requirements by the deadline, and recent developments at Twitter seem to have further alarmed EU regulators in that respect.

    For months, as Musk has increasingly welcomed onto the platform incendiary speech that Twitter had previously restricted, EU officials have been reminding the company of its content moderation obligations under the DSA. The warnings have also come amid mass layoffs at the company that have eliminated entire teams, including much of its content moderation staff.

    Last month, Twitter pulled out of the European Union’s code of conduct on disinformation, a series of voluntary commitments to combat mis- and disinformation that the EU has said would be considered as part of any evaluation of a platform’s compliance with the broader Digital Services Act.

    Although Twitter said it was “committed to fully complying with the Digital Services Act” and would meet its DSA obligations with respect to misinformation “in a manner that reflects Twitter’s unique service,” the company told EU officials “we feel we have no alternative” but to withdraw from the code.

    The announcement prompted swift backlash from Thierry Breton, a top EU commissioner and digital regulator, who appeared to regard Twitter’s decision as an attempt to evade responsibility.

    “Obligations remain,” Breton said. “You can run but you can’t hide.”

    Irwin’s departure could undercut the EU’s confidence further. Without a trust and safety head who would otherwise be expected to attend the EU stress test, Twitter’s ability to effectively respond to the evaluation may be constrained. A spokesperson for the European Commission didn’t immediately respond to a request for comment.

    On Friday, The Wall Street Journal reported that Twitter’s head of brand safety and ad quality also departed the company this week.

    All of this could be problematic for Twitter and Musk in the long run – and could also create an added headache for Linda Yaccarino just as she takes over as the company’s new CEO.

    Companies that fail to abide by the DSA risk fines of up to 6% of their global annual revenue. For Twitter, which is already struggling to regain its financial footing amid significant debt and an advertiser backlash, that’s a cost it can ill afford.

  • Charged rhetoric swirls online and off as Trump’s Miami court date looms | CNN Politics

    CNN — 

    From the halls of Congress to the dark corners of the internet, charged and violent rhetoric is echoing among some Donald Trump sympathizers ahead of the former president’s appearance in a Miami court on Tuesday.

    FBI special agents across the country assigned to domestic terrorism squads are actively working to identify any possible threats, four law enforcement sources told CNN, following Trump’s second indictment.

    So far, the FBI is aware of various groups like the Proud Boys discussing traveling to south Florida to publicly show support for Trump, sources said, but there is currently no indication of any specific and credible threat.

    “We have now reached a war phase,” Rep. Andy Biggs, an Arizona Republican and prominent supporter of Trump’s election denialism, tweeted Friday. “Eye for an eye.” Biggs’ office later said his comment was a call for the GOP to “step up and use their procedural tools” to counter “the Left’s weaponization of our federal law enforcement apparatus.”

    Speaking at a Republican event in Georgia on Friday night, Kari Lake, who unsuccessfully ran for governor of Arizona last year and is still spreading falsehoods about that election, said: “If you want to get to President Trump, you’re going to have to go through me and 75 million Americans just like me.”

    “And I’m going to tell you, most of us are card-carrying members of the NRA,” she said to applause, adding, “That’s not a threat, that’s a public service announcement.”

    On some pro-Trump forums, anonymous users were less circumspect. “MAGA will make Waco look like a tea party!” one user posted Friday in an apparent reference to the April 1993 siege in Waco, Texas, that left 76 people dead.

    On Trump’s social media platform, Truth Social, one anonymous user posted Thursday, “This is a Declaration of War against the American People. It is time We The People exercise our 2nd Amendment rights and burn the corruption out of DC.”

    The former president himself has been posting frequently on Truth Social throughout the weekend. “SEE YOU IN MIAMI ON TUESDAY!!!” he posted Friday.

    Still, at least on public social media forums, there doesn’t appear to be a mass online mobilization effort for people to gather in Miami this week like there was in the lead-up to the events in Washington, DC, on January 6, 2021.

    However, some prominent right-wing figures are calling for Trump supporters to protest in Miami on Tuesday.

    One influential right-wing activist in Florida who has almost half a million followers on Twitter is promoting a flag-waving event outside Trump’s golf course in Doral on Monday and a protest the following day against the “weaponization of government” outside the Wilkie D. Ferguson Jr. Courthouse, where the former president is set to appear.

    Some Trump supporters online have stressed the need for protests to remain peaceful and some have said they will not demonstrate in Miami on Tuesday, fearing it could be a trap. This is an extension of the false belief held by some that the January 6 attack on the US Capitol was a set-up designed to incriminate supporters of the former president.

    But at least one person who has served prison time for his role in the January 6 riot said he will be in Miami to protest on Tuesday.

    Anthime Gionet, a prominent online streamer better known by his moniker “Baked Alaska,” pleaded guilty to unlawfully protesting after he livestreamed himself breaching the Capitol in a nearly 30-minute video that showed him encouraging others in the mob to enter the building.

    Gionet served a two-month sentence and was released at the end of March, according to federal records.

    On Friday, he lamented Trump’s latest indictment in a livestream outside Mar-a-Lago. During the livestream, Gionet said he and another person who was with him outside Mar-a-Lago would both be in Miami on Tuesday. The other person is heard on the stream responding, “we weren’t supposed to talk about that.” Gionet replied, “I know but it leaked so f*** it.”

    The exchange may be illustrative of the shifting ways people use the internet to organize – something that has proven to be a challenge for law enforcement.

    While much of the planning for the January 6, 2021, attack on the US Capitol was done on public forums that could be read by anyone, a lot of that communication has since shifted to private channels, experts say.

    The secretive nature of many private forums has caused federal agents working domestic terrorism matters to place greater emphasis on recruiting informants who can report on potential threats discussed online among extremists, law enforcement sources told CNN.

    But even messages posted publicly cannot be accessed by investigators without lawful investigative purposes. The FBI’s own investigative guidelines limit what material can be accessed by agents and analysts, even when it is in the public domain. These policies prevent FBI employees from trawling the internet looking for concerning material, unless a formal assessment or investigation has been authorized and opened.

    The FBI’s investigative efforts to identify possible threats include querying existing confidential human informants reporting on domestic terrorism issues for any indication of potential threats, sources said.

    In addition to working their informant networks, FBI agents and analysts are reviewing publicly available online platforms frequented by domestic extremists for any indication of plans for violence.

    Ben Decker, CEO of Memetica, a threat intelligence company, told CNN on Sunday, “Given the robust and successful grassroots architecture of right-wing culture war campaigns and anti-Pride protests this month, there are concerns that many of these in-person rally groups could pivot directly into more Trump-themed protests around the country over the coming days.”

    But, at this point, Daniel J. Jones, the president of Advance Democracy, a non-profit that conducts public interest research, told CNN that his group had not identified “what we would assess to be specific and credible plans for violence yet.”

    “However,” he added, “as we saw during the events of January 6, it’s Trump’s statements that drive the online rhetoric and real-world violence. As such, much depends on what Trump says of his perceived opponents, as well as what he asks of his supporters, in the days ahead.”

    Juliette Kayyem, a CNN national security analyst and a former assistant secretary at the Department of Homeland Security, echoed this concern. “We know how incitement to violence works. It is nurtured from the top and given license to spread by leaders. They don’t have to direct it to one place or time. They can simply unleash it, knowing full well that someone may become emboldened to act,” she said.

    Last month, the Department of Homeland Security issued a nationwide bulletin indicating the country “remains in a heightened threat environment,” warning that individuals “motivated by a range of ideological beliefs and personal grievances continue to pose a persistent and lethal threat to the homeland.”

    DHS analysts indicated that motivating factors that could incite extremists to violence include perceptions about the integrity of the 2024 election cycle; while not specifically citing Trump’s legal woes, the analysts also pointed to “judicial decisions” in their list of grievances among extremist groups.

    Ahead of Trump’s Tuesday court appearance, law enforcement will continue to remain on alert.

    “We do not want a repeat of [the January 6] violence,” one senior FBI source said.

  • Google is using AI to change how you shop | CNN Business

    CNN — 

    Google wants to make it easier for online shoppers to know how clothing will look on them before making a purchase.

    The company on Wednesday announced a new virtual try-on feature that uses generative AI, the same technology underpinning a new crop of chatbots and image creation tools, to show clothes on a wide selection of body types.

    With the feature, shoppers can see how an item would drape, fold, cling, stretch or form wrinkles and shadows on a diverse set of models in various poses, according to the company.

    Google is also launching a feature that helps users find similar clothing pieces in different colors, patterns or styles, from merchants across the web, using a visual matching algorithm powered by AI.

    These efforts are part of Google’s bigger push to defend its search engine from the threat posed by a wave of new AI-powered tools in the wake of the viral success of ChatGPT. At the Google I/O developer conference last month, the company spent more than 90 minutes teasing a long list of AI announcements, including expanding access to its existing chatbot Bard and bringing new AI capabilities to Google Search.

    Google said it developed the virtual try-on option using many pairs of images of more than 80 models standing forward and sideways, from sizes XS to XL, and with varying skin tones, body shapes and ethnic backgrounds. The AI-powered tool then learned to match the shape of certain shirts in those positions to generate realistic images of the person from all angles.

    The feature will initially work with women’s tops from brands such as Anthropologie, Loft, H&M and Everlane. Google said it will expand to men’s shirts in the future. Google also said the tool will get more precise over time.

    Google isn’t the only e-commerce company blending generative AI into the shopping experience. Some companies such as Shopify and Instacart are using the technology to help inform customers’ shopping decisions. Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. And eBay recently rolled out an AI tool to help sellers generate product listing descriptions.

  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business

    CNN — 

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging the company stole and misappropriated vast swaths of peoples’ data from the internet to train its AI tools.

    The proposed class action lawsuit, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet.” The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.

  • Meta officially launches Twitter rival Threads | CNN Business

    CNN — 

    Facebook has tried to compete with Twitter in numerous ways over the years, including copying signature Twitter features such as hashtags and trending topics. But now Facebook’s parent company is taking perhaps its biggest swipe at Twitter yet.

    Meta on Wednesday officially launched a new app called Threads, which is intended to offer a space for real-time conversations online, a function that has long been Twitter’s core selling point.

    The app appears to have many similarities to Twitter, from the layout to the product description. The listing, which first appeared earlier this week as a teaser, emphasizes the potential for users to build a following and connect with like-minded people.

    “The vision for Threads is to create an open and friendly public space for conversation,” Meta CEO Mark Zuckerberg said in a Threads post following the launch. “We hope to take what Instagram does best and create a new experience around text, ideas, and discussing what’s on your mind.”

    Zuckerberg said on his verified Threads account that the app passed 2 million sign-ups in the first two hours. Later on Wednesday, he wrote that Threads “passed 5 million sign ups in the first four hours.”

    He also responded to posts and shared his thoughts on whether Threads will ever be bigger than Twitter.

    “It’ll take some time, but I think there should be a public conversations app with 1 billion+ people on it. Twitter has had the opportunity to do this but hasn’t nailed it,” Zuckerberg wrote on Threads. “Hopefully we will.”

    The app’s listing describes it as a place where communities can come together to discuss everything from the topics they care about today to what’s trending.

    “Whatever it is you’re interested in, you can follow and connect directly with your favorite creators and others who love the same things — or build a loyal following of your own to share your ideas, opinions and creativity with the world,” it reads.

    Meta said messages posted to Threads will have a 500-character limit. The company said it was bringing the app to 100 countries on both Apple’s iOS and Android.

    After downloading the app, users are asked to link their Instagram page, customize their profile and follow the same accounts they already follow on Instagram. The look is similar to Twitter, with a familiar layout, a text-based feed and the ability to repost and quote other Threads posts. But it also blends Instagram’s existing aesthetic and offers the ability to share posts from Threads directly to Instagram Stories. Verified Instagram accounts are also automatically verified on Threads. Threads accounts can also be set to public or private.

    The new app joins a growing list of Twitter rivals and could pose the biggest threat to Twitter of the bunch, given Meta’s vast resources and its massive audience.

    It also comes amid heightened turmoil at Twitter, which experienced an outage over the weekend, followed by an announcement that the site had imposed temporary limits on how many tweets its users are able to read while using the app.

    [Photo illustration: the Threads app from Meta displayed on a mobile phone. Threads launched on July 6, 2023 as a direct rival to Twitter.]

    Twitter owner Elon Musk said these restrictions had been applied “to address extreme levels of data scraping and system manipulation.” Commenting on the launch of Threads Monday, he tweeted: “Thank goodness they’re so sanely run,” parroting reported comments by Meta executives that appeared to take a jab at Musk’s erratic behavior.

    Since acquiring Twitter in October, Musk has turned the social media platform on its head, alienating advertisers and some of its highest-profile users. He is now looking for ways to return the platform to growth. Twitter announced Monday that users would soon need to pay for TweetDeck, a tool that allows people to organize and easily monitor the accounts they follow.

    Twitter is also attempting to encroach on Meta’s domain. In May, Twitter added encrypted messaging and said calls would follow, developments that could allow the platform to compete with Facebook Messenger and WhatsApp, also owned by Meta.

    The escalating competition between the two companies only appears to have intensified the personal rivalry between Musk and Zuckerberg.

    In response to a tweet last month from a user about Threads, Musk wrote: “I’m sure Earth can’t wait to be exclusively under Zuck’s thumb with no other options.” In a follow-up tweet, Musk teased the idea of a cage match with Zuckerberg.

    Zuckerberg fired back in an Instagram story by posting a screenshot of Musk’s tweet overlaid with the caption: “Send Me Location.”

    And after the Threads app debuted, Zuckerberg tweeted an image of two cartoon Spider-Men pointing at each other.

    – CNN’s Hanna Ziady contributed to this report.

  • China-based hackers breached US government email accounts, Microsoft and White House say | CNN Politics

    CNN — 

    China-based hackers have breached email accounts at two dozen organizations, including some United States government agencies, in an apparent spying campaign aimed at acquiring sensitive information, according to statements from Microsoft and the White House late Tuesday.

    The full scope of the hack is being investigated, but US officials and Microsoft have been quietly scrambling in recent weeks to assess the impact of the hack, which targeted unclassified email systems, and contain the fallout.

    The federal agency where the Chinese hackers were first detected was the State Department, a person familiar with the matter told CNN. The State Department then reported the suspicious activity to Microsoft, the person said.

    The Department of Commerce, which has sanctioned Chinese telecom firms, was also breached. The hackers accessed Commerce Secretary Gina Raimondo’s email account, one source familiar with the investigation told CNN. The Washington Post first reported on the access of the secretary’s account.

    The Chinese hackers were detected targeting a small number of federal agencies and just a handful of officials’ email accounts at each agency in a hack aimed at specific officials, multiple sources familiar with the investigation told CNN.

    “Microsoft notified the (Commerce) Department of a compromise to Microsoft’s Office 365 system, and the Department took immediate action to respond,” a department spokesperson said in a statement on Wednesday.

    The spokesperson did not immediately reply to a request for comment on the targeting of Raimondo’s email account.

    The hackers targeted email accounts at the House of Representatives, but it was unclear who was targeted and if the breach attempts were successful, two sources familiar with the matter told CNN.

    The breaches add to what is already one of the steepest cybersecurity challenges facing the Biden administration: limiting the ability of Beijing’s formidable hacking teams to access US government and corporate secrets.

    “Last month, US government safeguards identified an intrusion in Microsoft’s cloud security, which affected unclassified systems,” National Security Council spokesperson Adam Hodge said in a statement to CNN.

    “Officials immediately contacted Microsoft to find the source and vulnerability in their cloud service,” Hodge said. “We continue to hold the procurement providers of the US Government to a high security threshold.”

    The State Department “detected anomalous activity, took immediate steps to secure our systems, and will continue to closely monitor and quickly respond to any further activity,” a department spokesperson said on Wednesday.

    US Capitol Police declined to comment, referring CNN to the FBI.

    Hodge did not identify who was behind the hack, but Microsoft executives said in a blog post that the hackers were based in China and focused on espionage.

    In response to the Microsoft and White House statements, the Chinese foreign ministry on Wednesday accused Washington of conducting its own hacking operations.

    US officials have consistently labeled China as the most advanced of US adversaries in cyberspace, a domain that has repeatedly been a source of bilateral tension in recent years. The FBI has said Beijing has a larger hacking program than all other governments combined.

    China has routinely denied the allegations.

    The hacking began in mid-May, when the China-based hackers used a stolen sign-in key to burrow their way into email accounts, according to Microsoft. The tech giant has since blocked the hackers from accessing customer emails using that technique, Microsoft said late Tuesday.

    Secretary of State Antony Blinken visited China in mid-June, but it was not immediately clear if the cyber-espionage campaign was connected to that high-stakes visit.

    Some US officials credited the State Department with investing in more cyber-defense capabilities, allowing the agency to detect the suspicious activity earlier than in past advanced hacks.

    The number of US organizations, public or private, impacted by the hacking campaign is in the “single digits,” a senior US Cybersecurity and Infrastructure Security Agency official told reporters on Wednesday.

    “This appears to have been a very targeted, surgical campaign,” the official said.

    This story has been updated with additional information.

  • RFK Jr. hearing encapsulates a political era when truth is upside down | CNN Politics

    CNN — 

    In a Donald Trump-influenced era of through-the-looking-glass politics, everything seems upside down, traditional loyalties are scrambled, history can be rewritten and truth is just what anyone wants it to be.

    A Republican-run House hearing Thursday encapsulated the current political circus ahead of another tense election. In a head-spinning spectacle, a Kennedy family scion and candidate for the Democratic presidential nomination was greeted as a hero by Republicans. But he was slammed by Democrats, including House Minority Leader Hakeem Jeffries, who called him “a living, breathing, false flag operation.”

    Robert F. Kennedy Jr. was given a platform by pro-Trump Republicans because his conspiracy theories about vaccines and Covid-19, and his claims that the government has tried to censor him, gel with their efforts to shield Trump by claiming that the political weaponization of government is a Democratic, not a GOP, transgression.

    The marriage of convenience in a fiery hearing underscored how populism and the bending of truth pioneered on the right by Trump also has significant currency on the left. It illustrated how the character of mainstream American politics is under siege from fringe voices and extremist positions that once struggled to be heard but in recent years found a footing on social media, the campaign trail and even in Congress and the White House.

    As an example of his creation of alternative realities – a tactic frequently used by Trump – Kennedy forcefully denied that he had ever been anti-vaccine, racist or antisemitic. Yet CNN fact checks show he has repeatedly shared unfounded conspiracy theories, including a false link between autism and childhood vaccines. He has also claimed that man-made chemicals could be making children gay or transgender. And just last week, he was hit by new claims of conspiracy mongering, racism and antisemitism over remarks at a dinner in New York City in which he claimed that “Covid-19 is targeted to attack Caucasians and Black people. The people who are most immune are Ashkenazi Jews and Chinese.”

    Despite this controversy, Kennedy brazenly appeared to be inventing new truths even during the hearing. He said, for instance, “In my entire life, and while I’m under oath I have never uttered a phrase that was either racist or antisemitic.” At another moment he said: “I’ve never been anti-vaccine,” then added: “But everybody in this room probably believes that I have been because that’s the prevailing narrative.”

    Jack Schlossberg, the grandson of President John F. Kennedy, criticized his relative in a social media video Friday, calling his candidacy an “embarrassment.”

    “I’ve listened to him. I know him. I have no idea why anyone thinks he should be president. What I do know is, his candidacy is an embarrassment. Let’s not be distracted, again, by somebody’s vanity project,” Schlossberg said.

    In an odd flipping of the normal political order, Democrats in the hearing effectively sought to undermine the candidacy of the son and nephew of assassinated party heroes, former Attorney General Robert Kennedy and President John F. Kennedy. The top Democrat on the House Select Committee on the Weaponization of the Federal Government, Virgin Islands Delegate Stacey Plaskett, for instance, condemned committee chair Ohio Rep. Jim Jordan for letting Kennedy air what Democrats regard as extreme views. “It’s a free country. You absolutely have a right to say what you believe,” she said, adding: “But you don’t have the right to a platform, public or private.”

    Plaskett’s comments did raise serious questions about whether there are limits – if any – on a prominent personality’s right to free speech even if they are saying things that are not true, as well as the extent to which misinformation has swamped politics and elections. But most of the hearing stayed away from such topics and was dominated by Republican attempts to score points and shield Trump and Democratic attacks on Kennedy.

    One of the ex-President’s top allies, Rep. Elise Stefanik of New York, the fourth-ranking House Republican, revived conservative claims that Democratic-leaning officials in the federal government suppressed a story about a laptop belonging to Hunter Biden before the last election, a move she argued had been instrumental in his father beating Trump for the presidency. She cited this theory when asking Kennedy whether he believed there was censorship amounting to government interference in the 2020 election.

    Former Twitter executives admitted under oath this year that the social media network temporarily suppressed a story about the laptop but said there was no government interference in the decision. CNN has previously reported that allegations the FBI told Twitter to suppress the story are unsupported, and a half-dozen tech executives and senior staff, along with multiple federal officials familiar with the matter, denied any such directive was given.

    But the specific truth in this case isn’t necessarily important to Republicans who were using Kennedy to further create the impression of government interference to prevent Trump retaining the White House. The more public confusion there is the better it is for the ex-president politically. Of course, claims that Democrats are the ones really guilty of election interference are a direct attempt to whitewash Trump’s own behavior – since he used the tools of his office to try to subvert the 2020 election and to stay in power.

    Thursday’s hearing is not the first time political reality has seemed mixed up or traditional loyalties subverted. Just last week for instance, Republicans subjected FBI Director Christopher Wray to a fearsome grilling in a hearing while Democrats unusually defended the bureau – long regarded as one of the most conservative organs of the US government. The GOP storm was whipped up by allies of Trump who want to discredit investigations into his effort to overturn the 2020 election and his hoarding of classified documents in his Florida resort. Trump has already been indicted in the latter case and there are growing signs he will be charged in the former. He denies any wrongdoing and claims the investigations are politically motivated.

    It’s not that Republicans don’t have genuine grounds for oversight. Independent government watchdog reports and internal investigations, for instance, have found deficiencies and mistakes in some investigations involving Trump. In the Russia probe, there were mistakes in the use of a dossier compiled by a former British spy and in applications for surveillance warrants. More recently, an agreement with the Justice Department under which Hunter Biden pleaded guilty to two tax misdemeanors and struck a deal to resolve a felony gun charge is within the right of Congress to investigate. But neither case so far supports the wild claims, often made by Trump and his fellow Republicans, that a corrupt liberal deep state is conducting schemes designed to suppress conservatives.

    There is plentiful evidence that the ex-president is the one who weaponized government to go after his political enemies and to evade accountability. For instance he sacked former FBI chief James Comey and told NBC News it was because of the Russia investigation. He used his position as president and the prospect of military aid to seek to coerce Ukrainian President Volodymyr Zelensky into opening an investigation into Joe Biden and his son in a phone call that later led to his first impeachment. And Trump, by pressuring multiple officials in key swing states and by lambasting poll workers and making claims of widespread voter fraud, apparently used executive power to try to defy the will of voters in 2020.

    Voters also risked being misled by Washington’s hall of mirrors on another occasion this week. In a more frivolous, but still misleading, example of the way it’s often hard to work out what is true, the Biden campaign debuted a campaign video that appeared to show one of Trump’s most fervent allies, Georgia Rep. Marjorie Taylor Greene, praising Biden as fulfilling the historic mission of great Democratic presidents Franklin Roosevelt and Lyndon Johnson. The words were those of Greene, but they were selectively edited from a speech in a video that disguised her true intent, which was to condemn historic government spending by Democrats on education, health care and social safety net programs that Republicans claim are akin to socialism.

    This example of things being not quite what they seem was more of a cheeky case of campaign trolling than the wholesale refashioning of truth evident Thursday. The hearing at one point degenerated into both Republicans and Democrats accusing each other of trying to censor their questions and witnesses.

    One veteran Democrat, Rep. Gerry Connolly of Virginia, summed up how the session had in itself warped reality. “I never thought we’d descend to this level of Orwellian dystopia. Suddenly, the tools of the trade are not to get at the truth but to distract, distort, to deflect and dissemble,” Connolly said.

    Oddly, several members on the Republican side of the committee nodded their heads in agreement – apparently convinced the Orwellian behavior in question was on the part of what they see as a tyrannical, censoring government rather than in the obvious truths turned upside down.

    This story has been updated with additional information.


  • Taiwan’s TSMC to invest $2.9 billion in new plant as demand for AI chips soars | CNN Business




    Hong Kong (CNN) —

    TSMC, the world’s largest chipmaker, says it plans to invest nearly 90 billion New Taiwan dollars ($2.9 billion) to build an advanced chip plant in Taiwan, as it expands production to meet booming demand for artificial intelligence (AI) products.

    Last week, CEO C.C. Wei told analysts the company plans to roughly double its capacity for advanced packaging in 2024 compared to 2023, in order to meet “strong demand” for AI chips from its customers, which include Nvidia (NVDA) and AMD.

    Advanced packaging in the semiconductor industry involves using high-tech methods to aggregate components from various wafers in order to create a more powerful computer chip.

    TSMC (TSM) said the new plant is expected to create 1,500 jobs.

    “To meet market needs, TSMC is planning to establish an advanced packaging fab in the Tongluo Science Park,” the company told CNN in a statement, referring to fabrication plants — the technical term for semiconductor factories.

    The science park is located in Miaoli County, south of the firm’s main facilities in Hsinchu, near Taipei.

    TSMC on Thursday reported a 23% fall in net profit for the second quarter, compared to the same period last year, as a global economic downturn took a toll on overall demand — even as customers clamored for more of its AI chips.

    Chips manufactured by TSMC for customers like Nvidia are the muscle behind generative AI, a type of artificial intelligence that can create new content, such as text and images, in response to user prompts.

    That’s the kind of AI underlying ChatGPT, Google’s (GOOGL) Bard, Dall-E and many of the other new AI technologies.

    TSMC is considered a national treasure in Taiwan, supplying semiconductors to global tech giants including Apple (AAPL) and Qualcomm (QCOM).


  • Twitter sues hate-speech watchdog, following through on its litigation threat | CNN Business




    Washington, DC (CNN) —

    Twitter has sued the Center for Countering Digital Hate, a nonprofit group that has criticized the company’s handling of hate speech, following through on a litigation threat that had been publicly revealed just hours before.

    The lawsuit filed Monday in San Francisco federal court accuses CCDH of deliberately trying to drive advertisers away from Twitter — recently rebranded as “X” — by publishing reports critical of the platform’s response to hateful content.

    It specifically claims CCDH violated Twitter’s terms of service and federal hacking laws by scraping data from the company’s platform and by encouraging an unnamed individual to improperly collect information about Twitter that the company had provided to a third-party brand monitoring provider.

    The complaint accuses CCDH of engaging in a wide-ranging campaign to silence users of Twitter’s platform by calling attention to the views they post on social media.

    Responding to the complaint’s allegations on Tuesday, CCDH’s CEO Imran Ahmed told CNN that much of the lawsuit, particularly its claim about the unnamed individual, “sounds a bit like a conspiracy theory to me.”

    “The truth is that he’s [Elon Musk] been casting around for a reason to blame us for his own failings as a CEO,” Ahmed said, “because we all know that when he took over, he put up the bat signal to racists and misogynists, to homophobes, to antisemites, saying ‘Twitter is now a free-speech platform.’ … And now he’s surprised when people are able to quantify that there has been a resulting increase in hate and disinformation.”

    “All we do is hold up a mirror to the platform and ask them to consider whether or not they like the reflection they see in it,” Ahmed added. “What Mr. Musk has done is said, ‘I’m going to sue the mirror because I don’t like what I see.’”

    In the past 24 hours, Ahmed said, thousands of people have visited CCDH’s website and many have made donations to the group.

    “That’s what we’re going to need if we’re going to survive this,” he said, adding: “The reason that organizations like CCDH have to rely on methodologies like we do is because there is no transparency on these platforms.”

    The lawsuit comes after CCDH on Monday disclosed Twitter’s original July 20 threat to sue, along with its response to Twitter’s threat calling the company’s claims “ridiculous.”

    “X’s legal threat is a brazen attempt to silence honest criticism and independent research, perhaps in a desperate hope that it can stem the tide of negative stories and rebuild the company’s relationship with advertisers,” Ahmed wrote in an op-ed Monday coinciding with the group’s publication of Twitter’s threat.

    In its own blog post Monday, Twitter said its lawsuit was intended to promote free expression and that it “rejects all claims made by the CCDH.”

    “X is a free public service funded largely by advertisers,” the company said. “Through the CCDH’s scare campaign and its ongoing pressure on brands to prevent the public’s access to free expression, the CCDH is actively working to prevent public dialogue.”

    The July 20 threat indicated Twitter was investigating whether CCDH could be sued for violations of federal laws against false advertising. But Monday’s complaint does not appear to include such an allegation.


  • Opinion: Utah’s startling new rules for kids and social media | CNN



    Editor’s Note: Kara Alaimo, an associate professor of communication at Fairleigh Dickinson University, writes about issues affecting women and social media. Her book, “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Reclaim It,” will be published by Alcove Press in 2024. The opinions expressed in this commentary are her own. Read more opinion on CNN.



    (CNN) —

    Utah’s Republican governor, Spencer Cox, recently signed two bills into law that sharply restrict children’s use of social media platforms. Under the legislation, which takes effect next year, social media companies have to verify the ages of all users in the state, and children under age 18 have to get permission from their parents to have accounts.

    Parents will also be able to access their kids’ accounts, apps won’t be allowed to show children ads, and accounts for kids won’t be able to be used between 10:30 p.m. and 6:30 a.m. without parental permission.

    It’s about time. Social networks in the United States have become potentially incredibly dangerous for children, and parents can no longer protect our kids without the tools and safeguards this law provides. While Cox is correct that these measures won’t be “foolproof,” and what implementing them actually looks like remains an open question, one thing is clear: Congress should follow Utah’s lead and enact a similar law to protect every child in this country.

    One of the most important parts of Utah’s law is the requirement for social networks to verify the ages of users. Right now, most apps ask users their ages without requiring proof. Children can lie and say they’re older to avoid some of the features social media companies have created to protect kids — like TikTok’s new setting that asks 13- to 17-year-olds to enter their passwords after they’ve been online for an hour, as a prompt for them to consider whether they want to spend so much time on the app.

    While critics argue that age verification allows tech companies to collect even more data about users, let’s be real: These companies already have a terrifying amount of intimate information about us. To solve this problem, we need a separate (and comprehensive) data privacy law. But until that happens, this concern shouldn’t stop us from protecting kids.

    One of the key components of this legislation is allowing parents access to their kids’ accounts. By doing this, the law begins to help address one of the biggest dangers kids face online: toxic content. I’m talking about things like the 2,100 pieces of content about suicide, self-harm and depression that 14-year-old Molly Russell in the UK saved, shared or liked in the six months before she killed herself last year.

    I’m also talking about things like the blackout challenge — also called the pass-out or choking challenge — that has gone around social networks. In 2021, four children 12 or younger in four different states all died after trying it.

    “Check out their phones,” urged the father of one of these young victims. “It’s not about privacy — this is their lives.”

    Of course, there are legitimate privacy concerns to worry about here, and just as kids’ use of social media can be deadly, social apps can also be used in healthy ways. LGBTQ children who aren’t accepted in their families or communities, for example, can turn online for support that is good for their mental health. Now, their parents will potentially be able to see this content on their accounts.

    I hope groups that serve children who are questioning their gender and sexual identities and those that work with other vulnerable youth will adapt their online presences to try to serve as resources for educating parents about inclusivity and tolerance, too. This is also a reminder that vulnerable children need better access to mental health services like therapy — they’re way too young to be left to their own devices to seek out the support they need online.

    But, despite these very real privacy concerns, it’s simply too dangerous for parents not to know what our kids are seeing on social media. Just as parents and caregivers supervise our children offline and don’t allow them to go to bars or strip clubs, we have to ensure they don’t end up in unsafe spaces on social media.

    The other huge challenge the Utah law helps parents overcome is the amount of time kids are spending on social media. A 2022 survey by Common Sense Media found that the average 8- to 12-year-old is on social media for 5 hours and 33 minutes per day, while the average 13- to 18-year-old spends 8 hours and 39 minutes every day. That’s more time than a full-time job.

    The American Academy of Pediatrics warns that lack of sleep is associated with serious harms in children — everything from injuries to depression, obesity and diabetes. So parents in the US need to have a way to make sure their kids aren’t up on TikTok all night (parents in China don’t have to worry about this because the Chinese version of TikTok doesn’t allow kids to stay on for more than 40 minutes and isn’t useable overnight).

    Of course, Utah isn’t an authoritarian state like China, so it can’t just turn off kids’ phones. That’s where the new law comes in, requiring social networks to implement these settings. The tougher part of Utah’s law for tech companies will be a provision requiring social apps to ensure they’re not designed to addict kids.

    Social networks are arguably addictive by nature, since they feed on our desires for connection and validation. But hopefully the threat of being sued by children who say they’ve been addicted or otherwise harmed by social networks — an outcome for which this law provides an avenue — will force tech companies to think carefully about how they build their algorithms and features like bottomless feeds that seem practically designed to keep users glued to their screens.

    TikTok and Snap didn’t respond to requests for comment from CNN about Utah’s law, while a representative for Meta, Facebook’s parent company, said the company shares the goal to keep Facebook safe for kids but also wants it to be accessible.

    Of course, if social networks had been more responsible, it probably wouldn’t have come to this. But in the US, tech companies have taken advantage of a lack of rules to build platforms that can be dangerous for our kids.

    States are finally saying no more. In addition to Utah’s measures, California passed a sweeping online safety law last year. Connecticut, Ohio and Arkansas are also considering laws to protect kids by regulating social media. A bill introduced in Texas wouldn’t allow kids to use social media at all.

    There’s nothing innocent about the experiences many kids are having on social media. This law will help Utah’s parents protect their kids. Parents in other states need the same support. Now, it’s time for the federal government to step up and ensure children throughout the country have the same protections as Utah kids.

    Suicide & Crisis Lifeline: Call or text 988. The Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you and your loved ones, and best practices for professionals in the United States. En Español: Linea de Prevencion del Suidio y Crisis: 1-888-628-9454.


  • Musk’s Twitter promised a purge of blue check marks. Instead he singled out the New York Times | CNN Business




    New York (CNN) —

    Some VIP Twitter users woke up on Saturday expecting to have lost their coveted blue verification check marks in a previously announced purge by Elon Musk. Instead, Twitter appeared to target a single account from a major publication Musk dislikes and changed the language on its site in a way that obscures why users are verified.

    Twitter had said it would “begin winding down” blue checks granted under its old verification system — which emphasized protecting high-profile users at risk of impersonation — on April 1. In order to stay verified, Musk said, users would have to pay $8 per month to join the platform’s Twitter Blue subscription service, which has allowed accounts to pay for verification since December.

    Most legacy blue check holders found this weekend that their verification marks had not disappeared, but rather had been appended with a new label reading: “This account is verified because it’s subscribed to Twitter Blue or is a legacy verified account.” The language, which shows up when users click on the check mark, makes it unclear whether verified accounts are actually notable individuals or simply users who have paid to join Twitter Blue.

    But one high-profile account did lose its blue check over the weekend: the main account for the New York Times, which had previously told CNN it would not pay for verification.

    After an account that often engages with Musk posted a meme this weekend about the Times declining to pay for verification, Musk responded in a tweet saying, “Oh ok, we’ll take it off then.” Musk then lashed out at the Times — just the latest instance of the billionaire slamming journalists or media outlets — in a series of tweets that claimed the outlet’s coverage is boring and “propaganda.”

    The weekend moves are just the latest example of Twitter creating confusion and whiplash for users over feature changes — and in this case, not just any users, but many of the most high-profile accounts that have long been a key selling point for the platform. It also highlights how Musk often appears to guide decisions about the platform more by whims than by policy.

    Although the New York Times’ main account lost its blue check, its other accounts, such as those for its arts, travel and books content, remained verified. After its blue check was removed, a spokesperson for the New York Times reiterated to CNN that it does not plan to pay for verification.

    Twitter, which laid off most of its public relations staff last fall, did not immediately respond to a request for comment.

    Musk has been threatening to take away “legacy” blue check marks from users verified under Twitter’s old system since shortly after he bought Twitter last fall.

    In early November, Twitter launched the option for people paying for its Twitter Blue subscription service to receive blue checks. The program was quickly put on pause after being plagued by a wave of celebrity and corporate impersonators, and was relaunched in December.

    Twitter also rolled out a color-coded verification system with differently colored marks for companies and government entities, but Musk continued to say that individual users would eventually have to pay for blue checks.

    In the days leading up to the blue check purge that wasn’t, prominent users such as actor William Shatner and anti-bullying activist Monica Lewinsky pushed back against the idea that, as power users who draw attention to the site, they should have to pay for a feature that keeps them safe from impersonation.

    By muddying the reason accounts are verified, the new label could risk making it easier for people to scam or impersonate high-profile users. Experts in inauthentic behavior have also said it’s not clear that reserving verification for paid users will reduce the number of bots on the site, an issue Musk has raised on and off over the past year.

    Musk, for his part, has previously presented changes to Twitter’s verification system as a way of “treating everyone equally.”

    “There shouldn’t be a different standard for celebrities,” he said in a tweet last week. The paid feature could also drive revenue, which could help Musk, who is on the hook for significant debt after buying Twitter for $44 billion.

    Musk last week also said that starting on April 15, only verified accounts would be recommended in users’ “For You” feeds alongside the accounts they follow.

    –CNN’s Oliver Darcy contributed to this report.


  • Should parents decide what their kids do online? These states think so | CNN Business




    New York (CNN) —

    In the future, when teenagers want to sign up for an account on Facebook or Instagram, they may first need to ask their parent or guardian to give their consent to the social media companies.

    That, at least, is the vision emerging from a growing number of states introducing — and in some cases passing — legislation intended to protect kids online.

    For years, US lawmakers have called for new safeguards to address concerns about social platforms leading younger users down harmful rabbit holes, enabling new forms of bullying and harassment and adding to what’s been described as a teen mental health crisis.

    Now, in the absence of federal legislation, states are taking action, and raising some alarms in the process. The governors of Arkansas and Utah recently signed controversial bills into law that require social media companies to conduct age verification for all state residents and to obtain consent from guardians for minors before they join a platform. Lawmakers in Connecticut and Ohio are also working to pass similar legislation.

    On the surface, providing more guardrails for teens is a step forward that some parents may welcome after years of worrying about the potential harms kids face on social media. But some users, digital rights advocates and child safety experts say the wave of new state legislation risks undermining privacy for teens and adults, puts too much burden on parents and raises serious questions about enforcement.

    Jason Kelley, associate director of digital strategy for nonprofit digital rights group Electronic Frontier Foundation, told CNN he worries about government interference where “the state is telling families how to raise their children” and said it could “trample on the rights of every resident.”

    “Requiring people to get government approval by sharing their private identification before accessing social media will harm everyone’s ability to speak out and share information, regardless of their age,” he added. “Young people should not be used as pawns to fight big tech, and we are disappointed that first Utah, and now Arkansas, are implementing such overbroad laws.”

    Parents have long worried about privacy risks from their kids using social media, but the state legislation raises a new set of privacy concerns, experts say.

    In Arkansas, for example, the law will rely on third-party companies to verify all users’ personal information, such as a driver’s license or photo ID. (The legislation in Arkansas also appeared to contain vast loopholes and exemptions benefiting companies, such as Google and presumably its subsidiary, YouTube, that lobbied on the bill.)

    The impact on privacy is even more stark for teens in some of these states. In addition to requiring parental consent, Utah’s law, for example, will give parents access to “content and interactions” on their teens’ accounts.

    Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project and a fellow at the NYU School of Law, said the bills are problematic because users in these states will no longer remain anonymous, which could lead to fewer people of all ages expressing themselves and seeking information online.

    He believes teens in the LGBTQ+ community will be most impacted by potentially “outing them to homophobic or transphobic parents and cutting them off from their digital community.”

    Lucy Ivey, an 18-year-old TikTok influencer who attends Utah Valley University, echoed those concerns.

    “With a new law like this, they may now be intimidated and discouraged by the legal hoops required to use social media out of fear of authority or their parents, or fear of losing their privacy at a time when teens are figuring out who they are,” Ivey told CNN when the Utah law passed.

    Devorah Heitner, author of “Screenwise: Helping Kids Thrive (and Survive) in Their Digital World,” argued teens need to learn how to function in online communities because that will be expected of them both in college and in their professional lives.

    “Keeping them off online communities until, in some cases, when they’re finishing their first year of college — but can still have jobs or drive — is backward, if they can’t even have an Instagram or a Discord account where their mom isn’t reading every message.”

    Instead, she believes teens need better digital literacy in schools with a heightened social-emotional component.

    “Literacy should not just be ‘don’t look at pornography’ or ‘stay off bad sites’ or ‘don’t cyberbully;’ that’s so limited,” she said. “It should also be understanding how algorithms work, how teens can respond or what to do when feeling excluded, or if they’re feeling insecure. We need to help kids with all these things.”

    Heitner also said the bills should focus on holding companies more accountable rather than putting the onus on parents to either keep teens off platforms or constantly feel the pressure to police or oversee their activity.

    “Not all parents are passionate, kind and supportive of their kids, and even the ones who are don’t have the capacity or time to deal with the 24/7 nature of social media,” said Heitner. “It’s an unfair burden.”

    Given that the bills are unprecedented, it’s unclear exactly how social media companies will adapt to and enforce them.

    Michael Inouye, an analyst at ABI Research, said minors could “steal” identities — such as from family members who don’t use social media — to create accounts that they can access and use without oversight. VPNs could also complicate matching IP addresses to the states of the users, he said.

    Facebook-parent Meta previously told CNN it has the same goals as parents and policymakers, but the company said it also wants young people to have safe, positive experiences online and keep its platforms accessible. It did not address how it would comply with the legislation.

    In a statement provided to CNN, a TikTok spokesperson said it is “committed to providing a safe and secure platform that supports the well-being of teens, and empowers parents with the tools and controls to safely navigate the digital experience.” Representatives from Snap did not respond to a request for comment.

    But even if legislative steps from Utah, Arkansas and other states prove to be flawed, Inouye says “these early efforts are at minimum bringing attention to these issues.”

    Heitner said she is most encouraged by a small but growing number of school districts and families, and one Pennsylvania county, which have filed lawsuits against social media companies for their alleged impact on teen mental health. “These efforts are more productive than putting this on parents,” she said.

    The Arkansas legislation is expected to take effect in September and Utah’s bill aims to be implemented next year. But bills like these could “face years of litigation and injunctions before they ever take effect,” Cahn said.

    “Hopefully Congress will act before then to implement real protections for all Americans,” he said.


  • Twitter removes transgender protections from hateful conduct policy | CNN Business



    New York
    CNN
     — 

    Twitter appears to have quietly rolled back a portion of its hateful conduct policy that included specific protections for transgender people.

    The policy previously stated that Twitter prohibits “targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.” But the second line was removed earlier this month, according to archived versions of the page from the Wayback Machine.

    Twitter also removed a line from the policy detailing certain groups of people often subject to disproportionate abuse online, including “women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, asexual individuals, and marginalized and historically underrepresented communities.”

    The platform first introduced its policy prohibiting misgendering and deadnaming (referring to a person’s pre-transition name) of transgender people in 2018 as part of a broader overhaul of its hateful conduct policy.

    The change to the hateful conduct policy is one of a number of updates Twitter has made to its safety and content moderation practices since Elon Musk took over the company last fall. Twitter has also restored the accounts of users who had previously been banned for violating its rules, stopped enforcing its Covid-19 misinformation policy, allowed users to purchase blue verification checkmarks and applied controversial new labels to the accounts of several news organizations.

    LGBTQ advocacy group GLAAD called out the hateful conduct policy change in a Tuesday statement.

    “Twitter’s decision to covertly roll back its longtime policy is the latest example of just how unsafe the company is for users and advertisers alike,” GLAAD President and CEO Sarah Kate Ellis said. “This decision to roll back LGBTQ safety pulls Twitter even more out of step with TikTok, Pinterest, and Meta, which all maintain similar policies to protect their transgender users at a time when anti-transgender rhetoric online is leading to real world discrimination and violence.”

    Twitter did not respond to a request for comment about the change, although the platform did announce earlier this week some other updates to how it enforces its hateful conduct policy. The platform said it plans to start applying labels to some tweets that violate its hateful conduct policy and reduce their visibility, a similar practice to the one used under the company’s previous leadership, under which it either reduced the visibility of or removed violative tweets.

    “Restricting the reach of Tweets helps reduce binary ‘leave up versus take down’ content moderation decisions and supports our freedom of speech vs freedom of reach approach,” the company said in a tweet. Twitter also said it will not place ads next to content that has been labeled as violative.

    Musk has been in the process of trying to encourage advertisers to return to the platform, after many paused their spending over concerns about Musk’s policy changes, increased hate speech on the platform and massive cuts to the company’s workforce, threatening the company’s core business.

    The billionaire tried to assuage advertisers about Twitter’s approach to hateful conduct at a marketing conference Tuesday, saying, “If somebody has something hateful to say, it doesn’t mean you should give them a megaphone,” according to a report from the Wall Street Journal.

    Musk has faced a number of criticisms from some in the transgender community, most notably from his transgender daughter Vivian Jenna Wilson. Last year, she petitioned a court in California to change her last name to that of her mother, Justine Wilson, Musk’s ex-wife and mother of five of his seven children, because she no longer wanted to be related to her father “in any way, shape or form.”

    Musk has also posted several tweets mocking the idea of people choosing the pronouns they want applied to them. One tweet from December 2020, which he later deleted, read “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    And this past December, Musk, a vocal critic of many Covid restrictions and protocols, tweeted, “My pronouns are Prosecute/Fauci.”

    But in other tweets, Musk has insisted he has no problem with transgender people, saying that his problem is with “all these pronouns,” which he called an “esthetic nightmare.” He also pointed out that his auto company Tesla (TSLA) has repeatedly scored a 100% rating from the Human Rights Campaign as one of the “Best Places to Work for LGBTQ+ Equality.”

    — CNN’s Chris Isidore contributed to this report


  • AI pioneer quits Google to warn about the technology’s ‘dangers’ | CNN Business



    New York
    CNN
     — 

    Geoffrey Hinton, who has been called the “Godfather of AI,” confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

    Hinton’s pioneering work on neural networks shaped artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since come to have concerns about the technology and his role in advancing it.

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.

    In a tweet Monday, Hinton said he left Google so he could speak freely about the risks of AI, rather than because of a desire to criticize Google specifically.

    “I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. “Google has acted very responsibly.”

    Jeff Dean, chief scientist at Google, said Hinton “has made foundational breakthroughs in AI” and expressed appreciation for Hinton’s “decade of contributions at Google.”

    “We remain committed to a responsible approach to AI,” Dean said in a statement provided to CNN. “We’re continually learning to understand emerging risks while also innovating boldly.”

    Hinton’s decision to step back from the company and speak out on the technology comes as a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies.

    In March, some prominent figures in tech signed a letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” The letter, published by the Future of Life Institute, a nonprofit backed by Elon Musk, came just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that powers ChatGPT. In early tests and a company demo, GPT-4 was used to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.

    In the interview with the Times, Hinton echoed concerns about AI’s potential to eliminate jobs and create a world where many will “not be able to know what is true anymore.” He also pointed to the stunning pace of advancement, far beyond what he and others had anticipated.

    “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

    Even before stepping aside from Google, Hinton had spoken publicly about AI’s potential to do harm as well as good.

    “I believe that the rapid progress of AI is going to transform society in ways we do not fully understand and not all of the effects are going to be good,” Hinton said in a 2021 commencement address at the Indian Institute of Technology Bombay in Mumbai. He noted how AI will boost healthcare while also creating opportunities for lethal autonomous weapons. “I find this prospect much more immediate and much more terrifying than the prospect of robots taking over, which I think is a very long way off.”

    Hinton isn’t the first Google employee to raise a red flag on AI. In July, the company fired an engineer who claimed an unreleased AI system had become sentient, saying he violated employment and data security policies. Many in the AI community pushed back strongly on the engineer’s assertion.


  • A foldable phone, new tablet and lots of AI: What Google unveiled at its big developer event | CNN Business




    CNN
     — 

    Google on Wednesday unveiled its latest lineup of hardware products, including its first foldable phone and a new tablet, as well as plans to roll out new AI features to its search engine and productivity tools.

    The updates, announced at its annual Google I/O developer conference, come as the company is simultaneously trying to push beyond its core advertising business with new devices while also racing to defend its search engine from the threat posed by a wave of new AI-powered tools.

    In a sign of where Google’s focus currently lies, the company spent more than 90 minutes teasing a long list of new AI features before mentioning hardware updates.

    Here’s what Google announced at the event.

    Google became the latest tech company to unveil a foldable smartphone. Like other foldables, the $1,799 Pixel Fold features a vertical hinge that can be opened to reveal a tablet-like display. But Google calls the Fold the thinnest foldable on the market.

    “It took some clever engineering work redesigning components like our speakers, our battery and haptics,” said George Hwang, a product manager at Google, on a call ahead of the announcement. The company packed a Pixel phone into a less than 6 mm body – about two thirds of the thickness of its other Pixel phones.

    The Pixel Fold is very much a phone first: when unfolded, it opens up into a 7.6-inch screen that pivots on Google’s custom-built 180-degree hinge. The hinge mechanism has been moved entirely out from under the display to improve dust resistance and decrease the device’s overall thickness, according to the company.

    The Pixel Fold includes features you’d find on a Pixel, such as long exposure, unblur and magic eraser, which lets users remove unwanted or distracting objects. It also has Pixel Fold-specific tools such as dual-screen live translate, which lets a user communicate in another language with the help of fast audio and text translations on the outer screen.

    Google said it optimized its top apps to take advantage of the larger screen but “there’s still work to be done” because “optimizing for a new foldable form factor takes time,” Hwang said. “It’s a process that we’re committed to and it requires steep investment with our developer partners across Android,” Hwang added.

    Google is far from the first to embrace foldables, but it’s possible it waited to launch its own version until the technology became more advanced. Early versions of the Samsung Galaxy Z Fold, for example, had issues with the screen and most apps were not well optimized for the design.

    But even now, the future for foldables remains uncertain. Most apps are still not optimized for foldable devices; prices remain very high; and Google’s chief rival, Apple, has yet to embrace the option.

    Despite great consumer interest in foldable phones — and a resurgence in 90s-style flip phones among celebrities and TikTok influencers — the foldable market is relatively small, with Samsung dominating the category, followed by others including Motorola, Lenovo, Oppo and Huawei. According to ABI Research, foldable and flexible displays made up about 0.7% of the smartphone market in 2021 and were expected to reach just shy of 2% in 2022.

    The Pixel Fold will be available in the US, UK, Germany and Japan. The company said the device will start shipping next month.

    A look at Google’s Pixel 7a lineup

    On the surface, the 7a looks similar to the Pixel 7 and 7 Pro, with the same Pixel camera bar along the back. It comes with the typical advancements you’d expect to find with any smartphone upgrade – better display, advanced camera and longer-lasting battery. But the 7a now boasts a Tensor G2 processor and a Titan M2 security chip, which bring advanced processing and new artificial intelligence features. It also offers wireless charging for the first time on an A model.

    The Pixel lineup has long been known for its cameras, and the 7a is no exception. It’s packed with upgrades, including a 64-megapixel main camera – the largest sensor on a Pixel A series to date, which will help with improved image quality, low light performance and other features. It also offers a new 13-megapixel ultra-wide camera for capturing even wider shots and a new 13-megapixel front camera. For the first time, each camera enables 4K video.

    The 7a also supports many significant Pixel features, including unblur, magic eraser and an improved Night Sight that’s two times faster and sharper than its predecessor. It also allows users to capture long exposure and enhanced zoom.

    The Pixel 7a comes in several colors, including charcoal, snow, sea and coral, and starts at $499 via the Google Store on May 10.

    The Pixel A series line has long been aimed at cost-conscious buyers who want good features at a reasonable price, but its reach is limited. Google sells between eight and 10 million Pixel devices each year, according to ABI Research.

    “Generally, the smartphones were really meant for Google to showcase how software, and now AI capabilities, could be effectively optimized on hardware and improve the Android user experience,” said David McQueen, an analyst at ABI Research. “Google has purposely kept volume sales limited as it also has to be mindful of its relationship with other smartphone manufacturers that use the Android OS.”

    The Google Pixel tablet

    While phones were a key focus at the event, Google also refreshed other parts of its hardware lineup.

    Google introduced the Pixel Tablet, which is intended for use around the house, from turning off the lights to setting the thermostat without getting off the couch.

    The tablet, which has rounded edges and corners, comes in three colors: porcelain, hazel and rose, and starts at $499. It will be available on June 20.

    Under the hood, the 11-inch tablet is powered by Google’s Tensor G2 chips, which bring long-lasting battery life and AI features to the device. It also offers a front-facing camera, an 8-megapixel rear camera, and a charging dock.

    Google is also moving forward with plans to bring AI chat features to its core search engine amid a renewed arms race over the technology in Silicon Valley.

    The company said it is introducing the next evolution of Google Search, which will use an AI-powered chatbot to answer questions “you never thought Search could answer” and to help get users the information they want quicker than ever.

    With the update, the look and feel of Google Search results will be noticeably different. When users type a query into the main search bar, an AI-generated response will automatically pop up in addition to the traditional results.

    Users can now sign up for the new Google Search, which will first launch in the United States, via the Google app or Chrome’s desktop browser. A limited number of users will have access to it in the weeks ahead, according to the company, before it scales upward.

    Google is expanding access to its existing chatbot Bard, which operates outside the search engine and can help users do tasks such as outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

    The tool, which was previously available to early users via a waitlist only in the US, will soon be available for all users in 120 countries and 40 languages.

    Google is also launching extensions for Bard from its own services, such as Gmail, Sheets and Docs, allowing users to ask questions and collaborate with the chatbot within the apps they’re using.

    Google also announced PaLM 2, its latest large language model to rival ChatGPT-creator OpenAI’s GPT-4.

    The model marks a big step forward for the technology that powers the company’s AI products, and Google says it is better at logic, common sense reasoning and mathematics. It can also generate specialized code in different programming languages.


  • Montana governor bans TikTok | CNN Business



    New York
    CNN
     — 

    Montana Gov. Greg Gianforte signed a bill Wednesday banning TikTok in the state.

    Gianforte tweeted that he has banned TikTok in Montana “to protect Montanans’ personal and private data from the Chinese Communist Party,” officially making it the first state to ban the social media application.

    The controversial law marks the furthest step yet by a state government to restrict TikTok over perceived security concerns and comes as some federal lawmakers have called for a national ban of TikTok. But it is expected to be challenged in court.

    The bill, which will take effect in January, specifically names TikTok as its target, prohibiting the app from operating within state lines. The law also outlines potential fines of $10,000 per day for violators, including app stores found to host the social media application.

    Last month, lawmakers in Montana’s House of Representatives voted 54-43 to pass the bill, known as SB419, sending it to Gianforte’s desk.

    In a statement to CNN, TikTok said it would push to defend the rights of users in Montana.

    “Governor Gianforte has signed a bill that infringes on the First Amendment rights of the people of Montana by unlawfully banning TikTok, a platform that empowers hundreds of thousands of people across the state. We want to reassure Montanans that they can continue using TikTok to express themselves, earn a living, and find community as we continue working to defend the rights of our users inside and outside of Montana.”

    The law comes as TikTok faces growing criticism for its ties to China. TikTok is owned by China-based ByteDance. Many US officials have expressed fears that the Chinese government could potentially access US data via TikTok for spying purposes, though there is so far no evidence that the Chinese government has ever accessed personal information of US-based TikTok users.

    NetChoice, a technology trade group that includes TikTok as a member, called the Montana bill unconstitutional.

    “The government may not block our ability to access constitutionally protected speech – whether it is in a newspaper, on a website or via an app. In implementing this law, Montana ignores the U.S. Constitution, due process and free speech by denying access to a website and apps their citizens want to use,” said Carl Szabo, NetChoice’s general counsel.

    The ACLU also pushed back on the bill, releasing a statement saying that “with this ban, Governor Gianforte and the Montana legislature have trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business in the name of anti-Chinese sentiment.”

    On Wednesday, Gianforte signed an additional bill that prohibits the use of any social media application “tied to foreign adversaries” on government devices, including ByteDance-owned CapCut and Lemon8, and Telegram Messenger, which was founded in Russia.


  • ‘Verified’ Twitter accounts share fake image of ‘explosion’ near Pentagon, causing confusion | CNN Business




    CNN
     — 

    A fake image purporting to show an explosion near the Pentagon was shared by multiple verified Twitter accounts on Monday, causing confusion and leading to a brief dip in the stock market. Local officials later confirmed no such incident had occurred.

    The image, which bears all the hallmarks of being generated by artificial intelligence, was shared by numerous verified accounts with blue check marks, including one that falsely claimed it was associated with Bloomberg News.

    “Large explosion near the Pentagon complex in Washington DC. – initial report,” the account posted, along with an image purporting to show black smoke rising near a large building.

    The account has since been suspended by Twitter. It was unclear who was behind the account or where the image originated. A spokesperson for Bloomberg News said the account is not affiliated with the news organization.

    Under owner Elon Musk, Twitter has allowed anyone to obtain a verified account in exchange for a monthly payment. As a result, Twitter verification is no longer an indicator that an account represents who it claims to represent.

    Twitter did not respond to a request for comment.

    The false reports of the explosion also made their way to air on a major Indian television network. Republic TV reported that an explosion had taken place, showing the fake image on its air and citing reports from the Russian news outlet RT. It later retracted the report when it became clear the incident had not taken place.

    “Republic had aired news of a possible explosion near the Pentagon citing a post & picture tweeted by RT,” the outlet later posted on its Twitter account. “RT has deleted the post and Republic has pulled back the newsbreak.”

    In a statement Tuesday, the RT press office said, “As with fast-paced news verification, we made the public aware of reports circulating and once provenance and veracity were ascertained, we took appropriate steps to correct the reporting.”

    In a post on the Russian social media platform VKontakte Tuesday, RT tried to make light of its apparent error.

    “Is the Pentagon on fire? Look, there’s a picture and everything. It’s not real, it’s just an AI generated image. Still, this picture managed to fool several major news outlets full of clever and attractive people, allegedly,” a post from RT read.

    In the moments after the image began circulating on Twitter, the US stock market took a noticeable dip. The Dow Jones Industrial Average fell about 80 points between 10:06 a.m. and 10:10 a.m., fully recovering by 10:13 a.m. Similarly, the broader S&P 500 went from up 0.02% at 10:06 a.m. to down 0.15% at 10:09 a.m. By 10:11 a.m., the index was positive again.

    The building in the image does not closely resemble the Pentagon and, according to experts, shows signs it may have been created using AI.

    “This image shows typical signs of being AI-synthesized: there are structural mistakes on the building and fence that you would not see if, for example, someone added smoke to an existing photo,” Hany Farid, a professor at the University of California, Berkeley, and a digital forensics expert, told CNN.

    The fire department in Arlington, Virginia, later responded in a tweet, stating that it and the Pentagon Force Protection Agency were “aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”

    CNN’s David Goldman contributed reporting.


  • First on CNN: Pornhub asks users and Big Tech for help as states adopt age verification laws | CNN Business



    Washington
    CNN
     — 

    In the two-minute video, adult performer Cherie Deville stares into the camera and intones soberly to viewers, for the second time in a month, that policymakers are coming for their porn.

    “Click the button below to contact your representatives before it is too late,” Deville pleads.

    The call-to-action video, launching Wednesday in multiple states, comes from Pornhub, which last month withdrew from Utah over a new law that requires adult sites to verify their users’ ages and holds them liable for serving their content to minors. Now, as similar legislation is set to take effect next month in Arkansas, Mississippi and Virginia, Pornhub is making a last-ditch effort to galvanize users there in opposition.

    It’s unclear how much Pornhub expects to achieve, as the laws have already been passed and signed. A company spokesperson told CNN it’s “certainly not our goal” to shut down the site in the three states as it did in Utah but hinted at the possibility, saying that “if necessary, we will share next steps in the coming weeks.”

    But the video campaign is only part of a broader unfolding strategy by one of the internet’s highest-profile distributors of adult material.

    The video’s release coincides with a previously unreported effort by Pornhub — and its private equity owners, Ethical Capital Partners (ECP) — to convince the world’s largest tech companies to intervene in the wider debate over age restrictions for digital porn and social media.

    In recent weeks, ECP has lobbied Apple, Google and Microsoft to jointly develop a technological standard that might turn a user’s electronic device into the proof of age necessary to access restricted online content, according to Solomon Friedman, a partner at ECP.

    One possible version of the idea, Friedman told CNN, would be for the tech companies to securely store a person’s age information on a device and for the operating system to provide websites requesting age verification with a yes-or-no answer on the owner’s behalf — allowing sites to block underage users without ever handling anyone’s personal information.
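    The yes-or-no attestation Friedman describes can be sketched in a few lines. This is an illustrative model only, not an API any operating system actually exposes; the class and method names (`DeviceAgeAttestor`, `is_at_least`) are hypothetical, and a real design would keep the birth date in hardware-backed secure storage.

    ```python
    from datetime import date
    from typing import Optional

    class DeviceAgeAttestor:
        """Hypothetical device-side attestor: the OS holds the owner's
        birth date privately and answers only boolean age queries, so a
        website requesting verification never sees the date itself."""

        def __init__(self, birth_date: date):
            # In a real design this would live in secure OS storage,
            # never exposed to apps or websites.
            self._birth_date = birth_date

        def is_at_least(self, years: int, today: Optional[date] = None) -> bool:
            today = today or date.today()
            # Completed years: subtract one if this year's birthday hasn't passed yet
            age = today.year - self._birth_date.year - (
                (today.month, today.day) < (self._birth_date.month, self._birth_date.day)
            )
            return age >= years

    # A site requesting age verification would receive only the boolean:
    attestor = DeviceAgeAttestor(date(2010, 6, 15))
    print(attestor.is_at_least(18, today=date(2024, 1, 1)))  # prints False
    ```

    The privacy property in the proposal comes from the interface, not the arithmetic: because the answer is a single yes/no on the owner’s behalf, the site never handles an ID document, a birth date or any other personal information.
    
    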

    “We are willing to commit whatever resources are required to work proactively with those companies, with other technical service providers and as well with government,” Friedman said.

    Pornhub’s simultaneous appeals to users and to Big Tech highlight the challenging position the company now finds itself in amid a wave of state legislation. Under many of these laws, adult sites are required to implement “reasonable age verification methods” that could include users submitting pictures of their photo ID, facial scans or other information, either to third-party companies or to the sites themselves.

    Minimum age requirements have emerged as a favored policy tool in statehouses across the country as lawmakers have increasingly become attuned to the potential mental health harms of unregulated social media use. But Pornhub, along with civil liberties and digital rights groups, has broadly warned of the potential pitfalls of age-verification rules.

    Those risks can include the infringement of Americans’ rights to access legal speech under the First Amendment; the leakage of personal information belonging to underage as well as overage internet users; or the loss of online anonymity that safety experts say is crucial for shielding vulnerable individuals.

    Pornhub’s outreach to Big Tech is intended to convince the companies whose operating systems power the world’s smartphones, tablets and computers that their technology is central to the future of online identity management and to draw their political might into a global policy battle that could reshape the internet for millions.

    But it is far from clear the effort is succeeding. Friedman declined to say how, or even if, the companies have responded to Pornhub’s communications. Microsoft declined to comment for this story; Apple and Google didn’t respond to requests for comment.

    Friedman characterized the discussions as being in “early stages,” though his other remarks implied the talks may be largely one-sided.

    “We are willing and ready to work with them proactively to determine best solutions and to lend any technical expertise that we possibly can, whether it be implementation or pilot projects or assistance in any way,” Friedman told CNN. “We are hoping that this dialogue bears fruit and age verification will be addressed once and for all.”

    The adult industry has famously led the charge on technological innovation before. Porn played a decisive role in the battle between the VHS and Betamax videotape platforms, facilitated the rise of online credit card transactions and helped promote streaming video technology writ large.

    Now, Pornhub’s fight could prove to be a bellwether for the growing push to enforce age verification for social media. As with the battle over adult material, debates over how to keep children and teens away from social media have raised substantial questions about user privacy and how effective age restrictions may be when determined kids inevitably try to circumvent the rules.

    The tech industry, for its part, has been making its own strides in digital identity services. In 2021, for example, Apple announced support for adding driver’s licenses from eight states to Apple Wallet. In December, Google announced it was beta testing a similar feature for Android.

    Those services, however, are designed for in-person ID checks such as at travel checkpoints or liquor stores, technology experts said, and are not set up to perform age or identity verification remotely or virtually.

    Josh Golin, executive director of Fairplay, a consumer advocacy group focused on children’s technology use, described calls for device-based age verification as an “intriguing idea” that might ease burdens on websites and internet users. But, he argued, there are less invasive ways of determining a website visitor’s age.

    “It is our position that rather than requiring new, stringent forms of age verification, that we should start by having the platforms use the data they’re already collecting to do age estimation,” Golin said, pointing to how TikTok, for example, reportedly uses behavioral cues and activity algorithms to guess whether a user may be underage.

    Any device-based approach to age verification would immediately run into issues in most households with children, where no device is ever solely used by one person or exclusively by adults, said India McKinney, director of federal affairs for the Electronic Frontier Foundation, a digital rights organization.

    “You would have to assume that children and teens weren’t borrowing their parents’ phones,” McKinney said. “And that’s sharing on purpose. You don’t have to be too sophisticated to think about teens stealing their parent’s device to get around the age-gating.”

    Meanwhile, entrusting large tech companies to be the custodians of even more personal information, and enabling them to be the effective arbiters of what internet users can see online, brings its own challenges, said Udbhav Tiwari, head of global product policy at Mozilla, maker of the popular Firefox web browser.

    Device-based age verification, Tiwari said, could have “very serious privacy connotations, because you now have the largest tech companies in the world having your government ID and all the information present in them linked to individual devices. We’ve seen Twitter use phone numbers they collected for account security for targeting ads in the past, which led to them being subjected to FTC fines.”

    Last year, Twitter agreed to pay $150 million to resolve those Federal Trade Commission allegations.

    But Pornhub argues that the alternative is a world that’s even less safe, where users seeking age-restricted content may simply go to sites without age-gates or other checks.

    “Giving your ID card every time you want to visit an adult platform is not the most effective solution,” Deville says in Wednesday’s video. “In fact, it will put children and your privacy at risk.”


  • Italy ties China’s hands at Pirelli over fears about chip technology | CNN Business



    London
    CNN
     — 

    Italy has imposed several curbs on Pirelli’s biggest shareholder, Sinochem, in a move aimed at blocking the Chinese government’s access to sensitive chip technology.

    The Italian government decided last week to make use of its so-called “Golden Power” regulations, designed to protect assets of strategic importance to the country, Pirelli said in a statement Sunday.

    The government order risks inflaming tensions between Europe and Beijing, and follows similar intervention by Germany and the United Kingdom to protect their semiconductor technology.

    Earlier this year, Europe joined a US-led effort to restrict China’s access to the most advanced chipmaking technology when the Netherlands — home to ASML Holding, a key supplier to the global semiconductor industry — said it would introduce export controls.

    Italy’s move comes as US Secretary of State Antony Blinken wraps up a high-stakes visit to China aimed at repairing strained relations between the world’s two biggest economies.

    Sinochem, owned by the Chinese government, is Pirelli’s biggest single shareholder, with a 37% stake, and has 60% of seats on the board of the Italian tire maker. CNN has contacted Sinochem for comment.

    In a statement Friday, the Italian government said Pirelli’s Cyber Tyre, which uses chip technology to collect vehicle data, is “configured as a critical technology of national strategic importance.”

    “Improper use of this technology can pose significant risks not only to the confidentiality of user data, but also to the possible transfer of information relevant to security,” the statement added.

    The order sets a host of limitations on Sinochem’s involvement in Pirelli, including a bar on it devising the company’s strategy and financial plans, or appointing a CEO.

    The government said these curbs would protect the “autonomy” of Pirelli and its management, as well as “information of strategic importance.”

    Europe is heavily reliant on China for trade and investment, but relations have come under strain from ideological differences, including over Russia’s war in Ukraine, and recent moves by European Union regulators and governments to limit China’s access to sensitive technology.

    The order takes a page out of this playbook. It requires that Pirelli refuse any requests from Sinochem’s owner — China’s State-owned Assets Supervision and Administration Commission of the State Council — for information sharing, including any information connected to the “know-how” of proprietary technologies.

    The government said “some” strategic decisions would require approval from at least 80% of board directors, a further limitation on Sinochem’s influence.

    Separately, Rome is also assessing whether to renew its partnership with Beijing on the Belt and Road Initiative — China’s global infrastructure and investment megaproject. Italy is the only Group of Seven nation to have joined the initiative.

    In a further sign of the steps multinational companies are beginning to consider to protect their operations from growing geopolitical friction, drugmaker AstraZeneca (AZN) has drawn up plans to spin off its China business and list it separately in Hong Kong, according to the Financial Times. AstraZeneca declined to comment.

    Earlier this month, Sequoia Capital, the Silicon Valley venture capital group, said it would separate its China investments into an independent unit.

    On Tuesday, the European Commission will unveil measures — possibly including screening of outbound investments and export controls — to keep prized EU technology from countries such as China, Reuters reported.

    — Laura He in Hong Kong contributed to this article.
