ReportWire

Tag: privacy rights

  • Google update makes it easier for US users to remove some unwanted search results | CNN Business

    CNN —

    Google unveiled new privacy updates this week that let US users have a wee bit more control over the search results that pop up about themselves online.

    The tech giant said that it was rolling out a new dashboard that will let you know if web results with your contact information are showing up on its search engine. “Then, you can quickly request the removal of those results from Google — right in the tool,” Danielle Romain, the vice president of Trust at Google, said in a blog post Thursday.

    Romain added that Google will also notify you when new results from the web containing your contact info appear, for added “peace of mind.”

    Google also said it was enabling people to remove any of their personal, explicit images that they no longer wish to be visible in its search engine. For example, if you uploaded explicit content to a website and then subsequently deleted it, you can request its removal from Google’s Search if it’s being published elsewhere without your approval. The policy doesn’t apply, however, to content you are commercializing.

    “More broadly, whether it’s for websites containing personal information, explicit imagery or any other removal requests, we’ve updated and simplified the forms you use to submit requests,” Romain said Thursday.

    “Of course, removing content from Google Search does not remove it from the web or other search engines, but we hope these changes give you more control over private information appearing in Google Search,” she added.

    The moves by Google are limited, but they are a step toward a US version of Europe’s legally mandated “right to be forgotten.” The US updates do not currently, however, go beyond the scope of personal explicit images or contact information. Digital privacy advocates have long lamented how US policy lags far behind the European Union’s approach. An EU court established the right to be forgotten via a ruling in 2014, though the same court said in 2019 that Google does not have to honor the right outside of the EU.

    The privacy updates unveiled by Google this week, however, notably lack any mention of the latest privacy battleground in Big Tech: generative AI. As companies scramble to create large language models, the technology that underpins generative AI tools, many users and privacy advocates are now imploring tech companies to give users a way to opt-out of having their digital data used to train AI tools.


  • TikTok fined $368 million in Europe for failing to protect children | CNN Business

    CNN —

    A major European tech regulator has ordered TikTok to pay a €345 million ($368 million) fine after ruling that the app failed to do enough to protect children.

    The Irish Data Protection Commission, which oversees TikTok’s activities in the European Union, said Friday that the company had violated the bloc’s signature privacy law.

    An investigation by the DPC found that in the latter half of 2020, TikTok’s default settings didn’t do enough to protect children’s accounts. For example, it said, newly-created children’s profiles were set to public by default, meaning anybody on the internet could view them.

    TikTok didn’t sufficiently disclose these privacy risks to kids and also used so-called “dark patterns” to guide users toward giving up more of their personal information, the regulator noted.

    In another violation of EU privacy law, a TikTok feature designed as a parental control and known as Family Pairing did not require that an adult overseeing a child’s account be verified as the child’s actual parent or guardian, the DPC said. The lapse meant that theoretically any adult could weaken a child’s privacy safeguards, the regulator said.

    TikTok introduced Family Pairing in April 2020, allowing adults to link their accounts with child accounts to manage screen time, restrict unwanted content and limit direct messaging to children.

    The DPC’s decision gives the company three months to rectify its violations and includes a formal reprimand.

    TikTok didn’t immediately respond to CNN’s request for comment.

    But in a blog post Friday, the company said it “respectfully” disagreed with several aspects of the ruling.

    “Most of the decision’s criticisms are no longer relevant as a result of measures we introduced at the start of 2021,” wrote TikTok’s European privacy chief Elaine Fox.

    The changes TikTok made in early 2021 included making existing and new accounts private by default for users aged 13 to 15, Fox said. She added that later this month, “we will begin rolling out a redesigned account registration flow for new 16- and 17-year-old users” that will default to private settings.

    TikTok did not say Family Pairing would now be verifying an adult’s relationship to the child. But the company said the feature had been strengthened over time with new options and tools. It added that none of the regulator’s findings concluded that TikTok’s age verification measures violated EU privacy law.

    In April, TikTok was also fined in the United Kingdom for a number of breaches of data protection law, including misusing children’s personal data.


  • Elon Musk should be forced to testify on X’s ‘chaotic environment,’ US regulator tells court | CNN Business

    Washington CNN —

    Elon Musk should be forced to testify in an expansive US government probe of X, the company formerly known as Twitter, the US government said.

    The government said mass layoffs and other decisions Musk made raised questions about X’s ability to comply with the law and to protect users’ privacy.

    The US government’s attempt to compel Musk’s testimony is the latest turn in an investigation that predates Musk’s acquisition of X and that has intensified due to Musk’s own actions, according to a court filing by the Justice Department on behalf of the Federal Trade Commission.

    The court filing dated Monday cites depositions with multiple former X executives, including its former chief information security officer and former chief privacy officer, who testified that a barrage of layoffs and resignations following Musk’s $44 billion takeover may have hindered X from meeting its security obligations under a 2011 FTC consent agreement.

    Twitter and its outside attorney didn’t immediately respond to a request for comment.

    According to testimony cited in the filing, there were so few employees left after the departures that anywhere from 37% to 50% of the company’s security program lacked effective management and oversight, with no one available to take responsibility for those controls. Other planned upgrades to the company’s security program were “impaired,” the filing said, citing a deposition by the former chief information security officer, Lea Kissner.

    In another example, Musk personally tried to rush the rollout of Twitter Blue, the company’s paid subscription service, the filing said. That forced the company’s security team to bypass the required security and privacy checks that were a part of Twitter’s own policies and that had been mandated in the FTC order, according to the testimony of Damien Kieran, the former chief privacy officer.

    The filing also alleges that Musk’s move to grant several journalists access to internal company records — access that would culminate in the so-called Twitter Files claiming to show evidence of politically motivated censorship — initially involved a plan that could potentially have led to the exposure of private user data in violation of the FTC order.

    According to the filing, Musk’s plan originally called for providing access through a dedicated company laptop with “elevated privileges beyond just what a[n] average employee might have.”

    “Longtime information security employees intervened and implemented safeguards to mitigate the risks,” the filing said, but even then, the former employees testified, the process raised doubts about Musk’s commitment to privacy and security.

    X has moved to block Musk from being forced to testify and has asked a federal court to invalidate the entire FTC order requiring it to safeguard user privacy, accusing the FTC of asking too many questions in its probe.

    But in its filing, the US government said its interest in Musk’s testimony is well-justified based on the appearance of a “chaotic environment” at X driven by “sudden, radical changes at the company” following Musk’s acquisition.

    “The FTC had every reason to seek information about whether these developments signaled a lapse in X Corp.’s compliance” with the 2011 order, the filing said. Confirmed violations of the FTC order could lead to billions of dollars in fines for X, as well as potential legal ramifications for individual executives such as Musk if they are deemed personally responsible for them.

    The FTC investigation traces back to bombshell allegations — raised by Twitter’s former security chief Peiter “Mudge” Zatko and predating Musk’s acquisition — that for years Twitter has failed to live up to its legally binding commitments to the FTC to protect user privacy and security. Those allegations were first reported last year by CNN and The Washington Post.

    The investigation has proven politically charged as Musk — and his allies including Republicans on the House Judiciary Committee — have responded to the probe by publicly accusing the FTC of harassment and overreach.


  • Mark Zuckerberg concealed his kids’ faces on Instagram. Should you? | CNN Business

    CNN —

    When Mark Zuckerberg shared a photo on Instagram of his family on July 4, two things stuck out: the billionaire CEO wore a striped souvenir cowboy hat, and the faces of his children were replaced with happy face emojis.

    Zuckerberg’s post was promptly criticized by some who saw the decision to obscure the faces as a reflection of his privacy concerns about sharing pictures of his children online, even though he created massive platforms that allow millions of other parents to do just that.

    Meta, Instagram’s parent company, has long been scrutinized over how it handles user privacy and for the way its algorithms can lead young users down potentially harmful rabbit holes.

    But the choice also highlights a broader trend among some social media users, and particularly among high-profile individuals, to be more cautious in sharing identifiable pictures of their children online.

    For years, celebrities from Kristen Bell and Gigi Hadid to Chris Pratt and Orlando Bloom have been blurring images or using emojis to help protect their kids’ privacy on social media. Zuckerberg, too, had previously posted pictures of the back of his daughters’ heads and their side profiles rather than showing their entire faces.

    It’s rarer for everyday users to take a similar approach — but perhaps it shouldn’t be.

    “By modeling for us that he was careful not to share his family’s location or children’s identities, he may be communicating that it is the end users’ responsibility to protect themselves online,” said Alexandra Hamlet, a New York City-based psychologist who closely follows the impact of social media on young users.

    Meta did not respond to a request for comment.

    Few things are as central to the parenting experience as showing numerous, possibly embarrassing, pictures of your children to anyone who will stop and look. But over the years, a growing number of parents and experts have raised concerns about the risks of sharing these pictures on social media, including the possibility of exposing kids to identity theft and facial recognition technology, as well as creating an internet history that could follow them into adulthood.

    Some parents choose to either restrict how much they share about their kids or limit sharing to less public platforms. Others adopt more clever hacks like obscuring their children’s faces.

    Leah Plunkett, author of “Sharenthood” and associate dean of learning experience and innovation (LXI) at Harvard Law School, said blocking a child’s face is a symbol that you’re giving them control over their own narrative.

    “Every time you post about your kids, you are chipping away at allowing them to tell their own stories about who they are and who they want to become,” she said. “We grow up making mischief and more than a few mistakes and grow up better having made them. If we lose the privacy of teens and kids to play and explore, and to live and learn through trial and error, we will deprive them of the ability to develop and tell stories [on their own terms].”

    Noticeably, Zuckerberg did not obscure the face of his infant daughter, which might suggest less concern about the risks for a baby’s face than for a young child’s. However, Plunkett said artificial intelligence technology can trace a face’s changes over time and may later be able to connect any child, even a baby, to an image of them when older.

    Plunkett believes social media companies can do more, such as offering a setting that automatically blurs kids’ faces or prevents any picture with a child from being used for marketing or advertising purposes.

    For now, however, the onus remains on parents to limit or abstain from sharing photos of their kids online.

    “It’s not just parents – grandparents, coaches, teachers and other trusted adults should also keep kids out of photos and videos to protect their privacy, safety, future and current opportunities, and their ability to figure out their own story about themselves and for themselves,” she said.


  • When you’re talking to a chatbot, who’s listening? | CNN Business

    New York CNN —

    As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers.

    Some companies, including JPMorgan Chase (JPM), have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software.

    It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history.

    The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

    And just last week, regulators in Italy issued a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.

    “The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”

    With ChatGPT, which launched to the public in late November, users can generate essays, stories and song lyrics simply by typing up prompts.

    Google and Microsoft have since rolled out AI tools as well, which work the same way and are powered by large language models that are trained on vast troves of online data.

    When users input information into these tools, McCreary said, “You don’t know how it’s then going to be used.” That raises particularly high concerns for companies. As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”

    Steve Mills, the chief AI ethics officer at Boston Consulting Group, similarly told CNN that the biggest privacy concern that most companies have around these tools is the “inadvertent disclosure of sensitive information.”

    “You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

    If the data people input is being used to further train these AI tools, as many of the companies behind the tools have stated, then you have “lost control of that data, and somebody else has it,” Mills added.

    OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects all kinds of personal information from the people that use its services. It says it may use this information to improve or analyze its services, to conduct research, to communicate with users, and to develop new programs and services, among other things.

    The privacy policy states it may provide personal information to third parties without further notice to the user, unless required by law. If the more than 2,000-word privacy policy seems a little opaque, that’s likely because this has pretty much become the industry norm in the internet age. OpenAI also has a separate Terms of Use document, which puts most of the onus on the user to take appropriate measures when engaging with its tools.

    OpenAI also published a new blog post Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people,” the blog post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

    Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”

    “These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”

    Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account.” Bard users can also review their prompts or delete Bard conversations via this link. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.

    “We’re still sort of learning exactly how all this works,” Mills told CNN. “You just don’t fully know how information you put in, if it is used to retrain these models, how it manifests as outputs at some point, or if it does.”

    Mills added that sometimes users and developers don’t even realize the privacy risks that lurk with new technologies until it’s too late. An example he cited was early autocomplete features, some of which ended up having some unintended consequences like completing a social security number that a user began typing in — often to the alarm and surprise of the user.

    Ultimately, Mills said, “My view of it right now, is you should not put anything into these tools you don’t want to assume is going to be shared with others.”


  • TikTok collects a lot of data. But that’s not the main reason officials say it’s a security risk | CNN Business

    CNN —

    After TikTok CEO Shou Chew testified for more than five hours on Thursday before a Congressional committee, one thing was clear: US lawmakers remain convinced that TikTok is an urgent threat to national security.

    The hearing, Chew’s first appearance before Congress, kicked off with a lawmaker calling for TikTok to be banned and remained combative throughout. A number of lawmakers expressed deep skepticism about TikTok’s efforts to safeguard US user data and ease concerns about its ties to China. Nothing Chew said appeared to move the needle.

    The rhetoric inside and outside the hearing room highlighted the growing, bipartisan momentum for cracking down on the app in the United States. As the hearing was taking place, House Speaker Kevin McCarthy said he supports legislation that would effectively ban TikTok; Secretary of State Antony Blinken said TikTok should be “ended one way or another”; and the Treasury Department issued a statement vowing to “safeguard national security,” without mentioning TikTok by name.

    Concerns about TikTok’s connections to China have led governments worldwide to ban the app on official devices, and those fears have factored into the increasingly tense US-China relationship. But the remarks across the federal government on Thursday, combined with a prior threat from the Biden administration to impose a nationwide ban unless TikTok’s Chinese owners sell their stakes, show that a complete ban of the hugely popular app very much remains a live possibility.

    However, more than two years after the Trump administration first issued a similar threat to TikTok, it remains unclear whether the app is a national security threat. Security experts say the government’s fears, while serious, currently appear to reflect only the potential for TikTok to be used for foreign intelligence, not evidence that it has been. There is still no public evidence the Chinese government has actually spied on people through TikTok.

    TikTok doesn’t operate in China. But since the Chinese government enjoys significant leverage over businesses under its jurisdiction, the theory goes that TikTok’s parent company, ByteDance, and thus indirectly TikTok, could be forced to cooperate with a broad range of security activities, including possibly the transfer of TikTok data.

    “It’s not that we know TikTok has done something, it’s that distrust of China and awareness of Chinese espionage has increased,” said James Lewis, an information security expert at the Center for Strategic and International Studies. “The context for TikTok is much worse as trust in China vanishes.”

    When Rob Joyce, the National Security Agency’s director of cybersecurity, was asked by reporters in December to articulate his security concerns about TikTok, he offered a general warning rather than a specific allegation.

    “People are always looking for the smoking gun in these technologies,” Joyce said. “I characterize it much more as a loaded gun.”

    Technical experts also draw a distinction between the TikTok app — which appears to operate very similarly to American social media in the amount of user tracking and data collection it performs — and TikTok’s approach to governance and ownership. It’s the latter that’s been the biggest source of concern, not the former.

    The US government has said it’s worried China could use its national security laws to access the significant amount of personal information that TikTok, like most social media applications, collects from its US users.

    The laws in question are extraordinarily broad, according to western legal experts, requiring “any organization or citizen” in China to “support, assist and cooperate with state intelligence work,” without defining what “intelligence work” means.

    Should Beijing gain access to TikTok’s user data, one concern is that the information could be used to identify intelligence opportunities — for example, by helping China uncover the vices, predilections or pressure points of a potential spy recruit or blackmail target, or by building a holistic profile of foreign visitors to the country by cross-referencing that data against other databases it holds. Even if many of TikTok’s users are young teens with seemingly nothing to hide, it’s possible some of those Americans may grow up to be government or industry officials whose social media history could prove useful to a foreign adversary.

    Another concern is that if China has a view into TikTok’s algorithm or business operations, it could try to exert pressure on the company to shape what users see on the platform — either by removing content through censorship or by pushing preferred content and propaganda to users. This could have enormous repercussions for US elections, policymaking and other democratic discourse.

    Security experts say these scenarios are a possibility based on what’s publicly known about China’s laws and TikTok’s ownership structure, but stress that they are hypothetical at best. To date, there is no public evidence that Beijing has actually harvested TikTok’s commercial data for intelligence or other purposes.

    Chew, the TikTok CEO, has publicly said that the Chinese government has never asked TikTok for its data, and that the company would refuse any such request. In Thursday’s hearing, Chew said that what US officials fear is a hypothetical scenario that has not been proven.

    “I think a lot of risks that are pointed out are hypothetical and theoretical risks,” Chew said. “I have not seen any evidence. I am eagerly awaiting discussions where we can talk about evidence and then we can address the concerns that are being raised.”

    If there’s a risk, it’s primarily concentrated in the relationship between TikTok’s Chinese parent, ByteDance, and Beijing. The main issue is that the public has few ways of verifying whether or how that relationship, if it exists, might have been exploited.

    TikTok has been erecting technical and organizational barriers that it says will keep US user data safe from unauthorized access. Under the plan, known as Project Texas, the US government and third-party companies such as Oracle would also have some degree of oversight of TikTok’s data practices. TikTok is working on a similar plan for the European Union known as Project Clover.

    But that hasn’t assuaged the doubts of US officials. Multiple lawmakers at the hearing specifically said they were not persuaded by Project Texas. That’s likely because no matter what TikTok does internally, China would still theoretically have leverage over TikTok’s Chinese owners. Exactly what that implies is ambiguous, and because it is ambiguous, it is unsettling.

    In congressional testimony, TikTok has sought to assure US lawmakers it is free from Chinese government influence, but it has not spoken to the degree that ByteDance may be susceptible. TikTok has also acknowledged that some China-based employees have accessed US user data, though it’s unclear for what purpose, and it has disclosed to European users that China-based employees may access their data as part of doing their jobs.

    Multiple privacy and security researchers who’ve examined TikTok’s app say there aren’t any glaring flaws suggesting the app itself is currently spying on people or leaking their information.

    In 2020, The Washington Post worked with a privacy researcher to look under the hood at TikTok, concluding that the app does not appear to collect any more data than your typical mainstream social network. The following year, Pellaeon Lin, a Taiwan-based researcher at the University of Toronto’s Citizen Lab, performed another technical analysis that reached similar conclusions.

    But even if TikTok collects about the same amount of information as Facebook or Twitter, that’s still quite a lot of data, including information about the videos you watch, comments you write, private messages you send, and — if you agree to grant this level of access — your exact geolocation and contact lists. TikTok’s privacy policy also says the company collects your email address, phone number, age, search and browsing history, information about what’s in the photos and videos you upload, and if you consent, the contents of your device’s clipboard so that you can copy and paste information into the app.

    TikTok’s source code closely resembles that of its China-based analogue, Douyin, said Lin in an interview. That implies both apps are developed on the same code base and customized for their respective markets, he said. Theoretically, TikTok could have “privacy-violating hidden features” that can be turned on and off with a tweak to its server code and that the public might not know about, but the limitations of trying to reverse-engineer an app made it impossible for Lin to find out whether those configurations or features exist.

    If TikTok used unencrypted communications protocols, or if it tried to access contact lists or precise geolocation data without permission, or if it moved to circumvent system-level privacy safeguards built into iOS or Android, then that would be evidence of a problem, Lin said. But he found none of those things.

    “We did not find any overt vulnerabilities regarding their communication protocols, nor did we find any overt security problems within the app,” Lin said. “Regarding privacy, we also did not see the TikTok app exhibiting any behaviors similar to malware.”

    TikTok has cited Lin’s research as part of its defense. But Citizen Lab came out swinging this week at the company’s characterizations of the paper, saying in a statement that TikTok has presented the research as “somehow exculpatory” when a key finding was that Lin couldn’t see what happens to user data after it is collected.

    Chew, in a rare moment of apparent frustration, told lawmakers at the hearing that TikTok and Citizen Lab were really saying a version of the same thing. “Citizen Lab is saying they cannot prove a negative, which is what I’ve been trying to do for the last four hours,” he said.

    TikTok has faced claims that its in-app browser tracks its users’ keyboard entries, and that this type of conduct, known as keylogging, could be a security risk. The privacy researcher who performed the analysis last year, Felix Krause, said that keylogging is not an inherently malicious activity, but it theoretically means TikTok could collect passwords, credit card information or other sensitive data that users may submit to websites when they visit them through TikTok’s in-app browser.

    There is no public evidence TikTok has actually done that, however. TikTok has said the keylogging function is used for “debugging, troubleshooting, and performance monitoring,” as well as to detect bots and spam. Other research has shown that the use of keyloggers is extremely widespread in the technology industry. That does not necessarily excuse TikTok or its peers for using a keylogger in the first place, but neither is it proof positive that TikTok’s product, by itself, is any more of a national security threat than other websites.

    There have also been a number of studies that report TikTok is tracking users around the internet even when they are not using the app. By embedding tracking pixels on third-party websites, TikTok can collect information about a website’s visitors, the studies have found. TikTok has said it uses the data to bolster its advertising business. And in this respect, TikTok is not unique: the same tool is used by US tech giants including Facebook-parent Meta and Google on a far larger scale, according to Malwarebytes, a leading cybersecurity firm.

    At the hearing, Chew said the company does keystroke logging to “identify bots,” not to track what users say. He also repeatedly noted that TikTok does not collect more user data than most of its peers in the industry.

    As with the keylogging tech, the fact TikTok uses tracking pixels does not on its own transform the company into a national security threat; the risk is that the Chinese government could compel or influence TikTok, through ByteDance, to abuse its data collection capabilities.

    Separately, a report last year found TikTok was spying on journalists, snooping on their user data and IP addresses to find out when or if certain reporters were sharing the same location as company employees. TikTok later confirmed the incident and ByteDance fired several employees who had improperly accessed the TikTok data of two journalists.

    The circumstances surrounding the incident suggest it was not the type of wide-scale, government-directed intelligence effort that US national security officials primarily fear. Instead, it appeared to be part of a specific internal effort by some ByteDance employees to hunt down leaks to the press, which may be deplorable but is hardly uncommon for an organization under public scrutiny. (Nevertheless, the US government is reportedly investigating the incident.)

    Joyce, the NSA’s top cyber official, told reporters in December that what he really worries about is “large-scale influence” campaigns leveraging TikTok’s data, not “individualized targeting through [TikTok] to do malicious things.”

    To date, however, there’s no public evidence of that taking place.

    TikTok may collect an extensive amount of data, much of it quietly, but as far as researchers can tell, it isn’t any more invasive or unlawful than what other US tech companies do.

    According to security experts, that’s more a reflection of the broad leeway we’ve given to tech companies in general to handle our data, not an issue that’s unique or specific to TikTok.

    “We have to trust that those companies are doing the right thing with the information and access we’ve provided them,” said Peiter “Mudge” Zatko, a longtime ethical hacker and Twitter’s former head of security who turned whistleblower. “We probably shouldn’t. And this comes down to a concern about the ultimate governance of these companies.”

    Lin told CNN that TikTok and other social media companies’ appetite for data highlights policymakers’ failure to pass strong privacy laws regulating the tech industry writ large.

    “TikTok is only a product of the entire surveillance capitalism economy,” Lin said. “And governments around the world are ignoring their duty to protect citizens’ private information, allowing big tech companies to exploit user information for gain. Governments should try to better protect user information, instead of focusing on one particular app without good evidence.”

    Asked how he would advise policymakers to look at TikTok instead, Lin said: “What I would call for is more evidence-based policy.”


  • Lawmakers say TikTok is a national security threat, but evidence remains unclear | CNN Business



    CNN — 

    As TikTok CEO Shou Zi Chew prepares for his first congressional grilling on Thursday, much of the focus will undoubtedly be on the short-form video app’s potential national security risks.

    Concerns about TikTok’s connections to China have led governments worldwide to ban the app on official devices, and those fears have factored into the increasingly tense US-China relationship. The Biden administration has threatened TikTok with a nationwide ban unless its Chinese owners sell their stakes in the company.

    But more than two years after the Trump administration first issued a similar threat to TikTok, security experts say the government’s fears, while serious, currently appear to reflect only the potential for TikTok to be used for foreign intelligence, not that it has been. There is still no public evidence the Chinese government has actually spied on people through TikTok.

    TikTok doesn’t operate in China. But since the Chinese government enjoys significant leverage over businesses under its jurisdiction, the theory goes that ByteDance, and thus indirectly TikTok, could be forced to cooperate with a broad range of security activities, including possibly the transfer of TikTok data.

    “It’s not that we know TikTok has done something, it’s that distrust of China and awareness of Chinese espionage has increased,” said James Lewis, an information security expert at the Center for Strategic and International Studies. “The context for TikTok is much worse as trust in China vanishes.”

    When Rob Joyce, the National Security Agency’s director of cybersecurity, was asked by reporters in December to articulate his security concerns about TikTok, he offered a general warning rather than a specific allegation.

    “People are always looking for the smoking gun in these technologies,” Joyce said. “I characterize it much more as a loaded gun.”

    Technical experts also draw a distinction between the TikTok app — which appears to operate very similarly to American social media in the amount of user tracking and data collection it performs — and TikTok’s approach to governance and ownership. It’s the latter that’s been the biggest source of concern, not the former.

    The US government has said it’s worried China could use its national security laws to access the significant amount of personal information that TikTok, like most social media applications, collects from its US users.

    The laws in question are extraordinarily broad, according to western legal experts, requiring “any organization or citizen” in China to “support, assist and cooperate with state intelligence work,” without defining what “intelligence work” means.

    Should Beijing gain access to TikTok’s user data, one concern is that the information could be used to identify intelligence opportunities — for example, by helping China uncover the vices, predilections or pressure points of a potential spy recruit or blackmail target, or by building a holistic profile of foreign visitors to the country by cross-referencing that data against other databases it holds. Even if many of TikTok’s users are young teens with seemingly nothing to hide, it’s possible some of those Americans may grow up to be government or industry officials whose social media history could prove useful to a foreign adversary.

    Another concern is that if China has a view into TikTok’s algorithm or business operations, it could try to exert pressure on the company to shape what users see on the platform — either by removing content through censorship or by pushing preferred content and propaganda to users. This could have enormous repercussions for US elections, policymaking and other democratic discourse.

    Security experts say these scenarios are a possibility based on what’s publicly known about China’s laws and TikTok’s ownership structure, but stress that they are hypothetical at best. To date, there is no public evidence that Beijing has actually harvested TikTok’s commercial data for intelligence or other purposes.

    Chew, the TikTok CEO, has publicly said that the Chinese government has never asked TikTok for its data, and that the company would refuse any such request.

    If there’s a risk, it’s primarily concentrated in the relationship between TikTok’s Chinese parent, ByteDance, and Beijing. The main issue is that the public has few ways of verifying whether or how that relationship, if it exists, might have been exploited.

    TikTok has been erecting technical and organizational barriers that it says will keep US user data safe from unauthorized access. Under the plan, known as Project Texas, the US government and third-party companies such as Oracle would also have some degree of oversight of TikTok’s data practices. TikTok is working on a similar plan for the European Union known as Project Clover.

    But that hasn’t assuaged the doubts of US officials, likely because no matter what TikTok does internally, China would still theoretically have leverage over TikTok’s Chinese owners. Exactly what that implies is ambiguous, and because it is ambiguous, it is unsettling.

    In congressional testimony, TikTok has sought to assure US lawmakers it is free from Chinese government influence, but it has not addressed the degree to which ByteDance itself may be susceptible. TikTok has also acknowledged that some China-based employees have accessed US user data, though it’s unclear for what purpose, and it has disclosed to European users that China-based employees may access their data as part of doing their jobs.

    Multiple privacy and security researchers who’ve examined TikTok’s app say there aren’t any glaring flaws suggesting the app itself is currently spying on people or leaking their information.

    In 2020, The Washington Post worked with a privacy researcher to look under the hood at TikTok, concluding that the app does not appear to collect any more data than your typical mainstream social network. The following year, Pellaeon Lin, a Taiwan-based researcher at the University of Toronto’s Citizen Lab, performed another technical analysis that reached similar conclusions.

    But even if TikTok collects about the same amount of information as Facebook or Twitter, that’s still quite a lot of data, including information about the videos you watch, comments you write, private messages you send, and — if you agree to grant this level of access — your exact geolocation and contact lists. TikTok’s privacy policy also says the company collects your email address, phone number, age, search and browsing history, information about what’s in the photos and videos you upload, and if you consent, the contents of your device’s clipboard so that you can copy and paste information into the app.

    TikTok’s source code closely resembles that of its China-based analogue, Douyin, said Lin in an interview. That implies both apps are developed on the same code base and customized for their respective markets, he said. Theoretically, TikTok could have “privacy-violating hidden features” that can be turned on and off with a tweak to its server code and that the public might not know about, but the limitations of trying to reverse-engineer an app made it impossible for Lin to find out whether those configurations or features exist.
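
    The "hidden features" concern Lin raises is about server-driven configuration: an app can ship dormant code paths whose activation depends on flags fetched at runtime, so reverse-engineering the client cannot rule them out. A hypothetical sketch of the pattern (not TikTok's actual architecture):

```python
# Illustrative sketch of a server-controlled feature flag: the client ships
# both code paths, and behavior depends on configuration fetched at runtime.
# (Hypothetical pattern; not TikTok's actual architecture.)

def fetch_remote_config() -> dict:
    # In a real app this would be an HTTPS call; here it is a stub.
    return {"collect_extra_telemetry": False}

def handle_event(event: dict, config: dict) -> dict:
    """Build the report the client would send for a user interaction."""
    report = {"type": event["type"]}
    if config.get("collect_extra_telemetry"):
        report["details"] = event  # dormant path a server-side flag could enable
    return report

config = fetch_remote_config()
print(handle_event({"type": "tap", "x": 10, "y": 20}, config))
```

    An auditor inspecting the client while the flag is off would see only minimal collection, which is why Lin stresses that static analysis cannot prove such configurations do not exist.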

    If TikTok used unencrypted communications protocols, or if it tried to access contact lists or precise geolocation data without permission, or if it moved to circumvent system-level privacy safeguards built into iOS or Android, then that would be evidence of a problem, Lin said. But he found none of those things.

    “We did not find any overt vulnerabilities regarding their communication protocols, nor did we find any overt security problems within the app,” Lin said. “Regarding privacy, we also did not see the TikTok app exhibiting any behaviors similar to malware.”


  • Mental health startup exposes the personal data of more than 3 million people | CNN Politics


    Washington CNN — 

    A mental health startup exposed the personal data of as many as 3.1 million people online. In some cases, possibly sensitive information on mental health treatment was leaked, according to a company statement and a Department of Health and Human Services filing.

    Cerebral, a California-based firm that connects people suffering from anxiety and depression with mental health professionals via video calls, said it discovered the “inadvertent” data exposure more than three years after it started using “pixels” – a common method that companies and advertisers use to track user behavior for marketing purposes.

    The company determined in January that tracking pixels had been sharing client and user data to “third-party platforms” and “subcontractors” that it didn’t name, according to a privacy notice near the bottom of its website.

    Cerebral said it was unaware of any misuse of the protected health information that was disclosed. But privacy advocates have for years warned that such data troves can be used to aggressively market products to consumers and infringe on their privacy.

    Some of the data potentially exposed in the Cerebral breach includes answers to online “self-assessments” about mental health that Cerebral asks prospective clients to fill out. That can include questions on whether someone is experiencing panic attacks, abusing alcohol or has a personality disorder, CNN’s review of the online assessments found.
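
    This kind of leak typically happens when a page wires form responses into the same analytics events a marketing pixel transmits. The sketch below is hypothetical (field names are invented; this is not Cerebral's actual integration); it shows how assessment answers can ride along with an ad-tracking event unless they are explicitly stripped.

```python
# Hypothetical sketch of how health-assessment answers can leak through an
# analytics integration (invented field names; not Cerebral's actual code).

SENSITIVE_FIELDS = {"panic_attacks", "alcohol_use", "diagnosis"}

def build_analytics_event(page, form_data, strip_sensitive=False):
    """Event payload a tracking pixel would send to a third party."""
    payload = dict(form_data)
    if strip_sensitive:
        payload = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
    return {"page": page, "fields": payload}

answers = {"email": "user@example.com", "panic_attacks": "yes"}
leaky = build_analytics_event("/self-assessment", answers)
safe = build_analytics_event("/self-assessment", answers, strip_sensitive=True)
print(leaky["fields"])  # sensitive answer included in the outbound event
print(safe["fields"])   # only non-sensitive contact data remains
```

    The default in many off-the-shelf pixel integrations resembles the "leaky" path: whatever the page knows gets forwarded unless someone deliberately filters it.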

    Cerebral said in a statement to CNN on Friday that it was “committed to correcting historical errors and leading the industry in privacy standards moving forward.”

    Cerebral notified the Department of Health and Human Services (HHS), which said in a filing this month that the breach affects over 3.1 million users. The department investigates potential violations of the Health Insurance Portability and Accountability Act (HIPAA), a law that requires medical providers to safeguard patient data.

    Rachel Seeger, a spokesperson for the HHS Office for Civil Rights, said the office typically “does not comment on open or potential investigations.”

    Cerebral said in its public statement that it had disabled the tracking pixels on its platforms and stopped sharing data with subcontractors “not able to meet all HIPAA [Health Insurance Portability and Accountability Act] requirements.”

    “It is important to note that Cerebral never impermissibly transmitted clinician generated notes or clinician communications,” the company told CNN.

    Cerebral spokesperson Chris Savarese did not respond to emailed questions about which platforms and contractors the company disclosed the client health information to, or how many.

    Some analysts argue that the broader market for data tracking tools is out of control. A group of conservative Catholics has spent millions of dollars to buy mobile data that identified priests who used gay dating and hookup apps, the Washington Post reported this week.

    Andrea Downing, who has done extensive research on pixel tracking and privacy, said patients are often unaware of how much personal data health care startups collect and potentially transmit to other parties.

    “What is in the fine print or the details of how data is being shared for advertising is not apparent to us when we’re going through the trauma of a diagnosis and seeking knowledge,” said Downing, who is co-founder of Light Collective, a digital rights nonprofit.

    “The only thing that is incentivizing change right now is the threat of liability,” Downing told CNN.


  • What is doxxing? | CNN

    Editor’s Note: This story is part of ‘Systems Error’, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.



    CNN — 

    In 2017, Kyle Quinn enjoyed the anonymity any engineering professor typically would until he became a target of doxxing. Angry social media users mistakenly identified him as having attended a White nationalist rally in Charlottesville, Virginia. His pictures, home address and employer’s name quickly made the rounds across social networks, frightening Quinn and his wife and sending them to a colleague’s home for refuge, the New York Times reported.

    Quinn is one of many victims of doxxing, a form of online invasion of personal privacy that can lead to devastating consequences.

    According to the International Encyclopedia of Gender, Media, and Communication, doxxing is the intentional revelation of a person’s private information online without their consent, often with malicious intent. This includes the sharing of phone numbers, home addresses, identification numbers and essentially any sensitive and previously private information such as personal photos that could make the victim identifiable and potentially exposed to further harassment, humiliation and real-life threats including stalking and unwanted encounters in person.

    There are multiple etymologies for the term, but the cybersecurity firm Kaspersky reports that one explanation is that doxxing came from the phrase “dropping documents”: over time, “documents” became “dox,” which came to be used as a verb for the practice. Originally a form of online attack used by hackers, the firm wrote, doxxing has been around since the 1990s.

    Doxxing can happen in many ways and on many platforms.

    According to the International Encyclopedia of Gender, Media, and Communication, in 2014 the gaming industry experienced a watershed moment known as Gamergate, a year-long culture war led by far-right trolls online. After Eron Gjoni, ex-boyfriend of game developer Zoe Quinn, uploaded a blog post about their breakup, accused her of cheating on him and shared screenshots of their private communications on an online forum, Quinn became one of many gamers to be a high-profile target of doxxing and rape threats, followed by many other female game developers who raised their voices, according to The Guardian.

    One of the victims, the American game developer Brianna Wu, wrote in the magazine Index on Censorship: “The truth is there is no free speech when speaking about your experiences leads to death threats, doxxing and having armed police sent to your house.”

    In 2014, Wu tweeted about escaping her home out of fear for her safety along with screenshots of death threats sent to her account.

    In 2019, the South African journalist and broadcaster Karima Brown missent a message meant for her producer to a WhatsApp group run by the Economic Freedom Fighters (EFF) political party, in which journalists are able to get media statements from the EFF, according to the Committee to Protect Journalists (CPJ). Julius Malema, the party leader, accused her of spying on the party and reacted by tweeting her phone number to his 2.3 million followers. Brown reportedly received rape and murder threats, including graphic messages. The high court in Johannesburg later ruled the doxxing was a violation of the country’s Electoral Act, according to the CPJ, with Brown telling the non-profit that the court’s ruling was “a victory for democracy and media freedom, and a blow against misogyny and toxic masculinity.”

    Facebook’s parent company Meta does not explicitly use the term “doxxing” in its privacy violations policy, but said in a statement to CNN that it considers users sharing “personally identifiable information” about others a violation of its community standards. The company says it reviews any piece of content against its community standards and may remove private information such as home addresses that could result in tangible harm unless this information is publicly available through news coverage, press releases or other sources. Facebook users can use a specific reporting channel when they are concerned about their image privacy on the platform.

    TikTok explicitly defines doxxing in its community guidelines, which ban both the collection and publication of individuals’ personal information with malicious intent. Users can report a specific item on the platform and follow the instructions.

    Twitter’s app and desktop versions allow you to report other users who tweet private information and media about themselves or somebody else without permission by clicking on the three dots in the corner of an offending tweet, then Report Tweet, and following the instructions. Users found in violation of the policy are required to remove the content in question and are temporarily locked out of their accounts. Twitter says permanent suspension may result from a second violation. Users can also file a separate form to report such violations.

    Whether doxxing is illegal depends on the jurisdiction. In Asia, Singapore outlawed most forms of intentional harassment or distress, including doxxing, in 2014; violators can be fined up to SGD $5,000 (nearly $3,800 US) and/or jailed for up to six months.

    In Indonesia, activists told CNN that doxxing cases have been on the rise, especially those targeting women human rights defenders and journalists. Damar Juniarto, the executive director of Southeast Asia Freedom of Expression Network, a network of digital rights activists, said the term doxxing “is not known in the Indonesia legal system,” which has caused some doxxing cases not to be taken seriously by police. But he explained that the Personal Data Protection law, passed in September, punishes people who use and share personal information without a person’s consent, which can include doxxing.

    In the UK, there are clear guidelines for prosecutors to handle cases, particularly cases of violence against women and girls, which involve threats to post personal information on social media and the disclosure of private sexual images without consent, and the punishments vary.

    In the US, measures to combat doxxing vary across states. Last year, Nevada passed a bill that bans doxxing and allows victims to bring a civil action against the perpetrators. In California, cyber harassment, including doxxing with the intent to put others and their immediate family in danger, is punishable by up to one year in county jail, a fine of up to $1,000, or both.

    In 2021, Hong Kong authorities amended the data privacy law to include doxxing, with people facing jail sentences of up to five years and fines of up to HK$1 million ($129,000 US). This followed the doxxing of many officials and police officers during the 2019 protests against the Hong Kong government’s proposed bill to allow extraditions to mainland China. Critics argued that doxxing should be legally defensible when information about government officials is shared in the public interest.

    Lauren Krapf, the technology policy and advocacy counsel for the Anti-Defamation League in the US, said whether doxxing is criminal depends on the intent.

    “I think in certain circumstances, it is probably appropriate that [doxxers] have some level of criminal liability or civil liability,” Krapf told CNN, but emphasized that doxxing is not a black-and-white issue. The activity itself can be an empowerment tool for people engaging in protests to share information about extremists with others, she explained.

    Across the US, “state laws vary greatly and there is no federal statute outlawing doxxing,” Krapf told CNN, meaning “there isn’t currently one specific standard codified.”

    While anyone can be doxxed, experts believe women are more likely to be targets of mass online attacks; leaks of sensitive media, such as sexually explicit imagery stolen or shared without consent; and unsolicited, sexualized messages.

    A 2020 report by UN Women focusing on India, Malaysia, Pakistan, the Philippines, and South Korea found that women experience many forms of online violence simultaneously such as trolling, doxxing and social media hacks.

    A 2020 global report by The Economist Intelligence Unit (EIU) found that online violence against women is startlingly prevalent in the 51 countries surveyed, with 45% of Generation Z and Millennial women reporting being affected, compared to 31% of Generation X women and Baby Boomers, while 85% of women surveyed overall reported witnessing online violence against women. While online violence is alarmingly common globally, the study shows significant regional differences: in Africa, Latin America and the Caribbean, and the Middle East, at least 90% of women surveyed had been affected.

    While the responsibility to prevent doxxing rests with those who would violate another’s privacy, and not with the victim, it is useful to take some preventative steps to protect yourself online.

    It can help to be familiar with doxxing-related policies on the online platforms you use as well as how to report abuse more generally. Consider making it harder for people to track you online by restricting the accessibility of any information that can identify you online and offline. For example, check who can see your personal email, phone number, home addresses and other physical locations on your social media accounts.

    The University of California, Berkeley, PEN America and the Artists at Risk Connection provide thorough online privacy guides.


  • Meta agrees to pay $725 million to settle lawsuit over Cambridge Analytica data leak | CNN Business


    New York CNN — 

    Facebook parent company Meta has agreed to pay $725 million to settle a longstanding class action lawsuit accusing it of allowing Cambridge Analytica and other third parties to access private user information and misleading users about its privacy practices.

    The proposed settlement would end the legal battle that began four years ago, shortly after the company disclosed that the private information of as many as 87 million Facebook users was obtained by Cambridge Analytica, a data analytics firm that worked with the Trump campaign. The data leak sparked an intense international scandal for Facebook, drawing the scrutiny of regulators on both sides of the Atlantic.

    The lawsuit involved obtaining millions of pages of documents from Facebook and other related parties and hundreds of hours of depositions, including dozens of current and former Facebook employees.

    The users settling with Facebook called the agreement the “largest recovery ever achieved in a data privacy class action and the most Facebook has ever paid to resolve a private class action” in a motion to approve the settlement filed Thursday. They estimated that between 250 and 280 million people may be eligible for payments as part of the class action settlement.

    The settlement is pending approval from a judge, who will hear the motion in March.

    “We pursued a settlement as it’s in the best interest of our community and shareholders,” Meta spokesperson Dina Luce said in a statement. “Over the last three years we revamped our approach to privacy and implemented a comprehensive privacy program. We look forward to continuing to build services people love and trust with privacy at the forefront.”

    Meta did not admit wrongdoing as part of the settlement. In the motion to approve the settlement, the users who brought the suit pointed to changes Facebook has made in the wake of the Cambridge Analytica breach, including restricting third-party access to user data and improving communications to users about how their data is collected and shared.

    The Cambridge Analytica leak began with a psychology professor who harvested data on millions of Facebook users through an app offering a personality test, then gave it to a service promising to use vague and sophisticated techniques to influence voters during a high-stakes presidential election that was decided narrowly in several key states.

    A 2020 report by the UK Information Commissioner’s Office later cast significant doubt on Cambridge Analytica’s capabilities, suggesting many of them had been exaggerated. But the improper sharing of Facebook data triggered a cascade of events that has culminated in investigations and lawsuits.

    The scandal prompted a global outcry that led to hearings, an apology tour from Zuckerberg and various changes to the platform. Facebook agreed in 2019 to a $5 billion privacy settlement with the US Federal Trade Commission over the privacy breach, and to a $100 million settlement with the US Securities and Exchange Commission over claims that it misled investors about the risks of misuse of user data.


  • ‘Fortnite’ maker Epic Games to pay $520 million in record-breaking FTC settlement | CNN Business




    CNN
     — 

    Epic Games, maker of the hit video game “Fortnite,” has agreed to pay a total of $520 million to settle US government allegations that it misled millions of players, including children and teens, into making unintended purchases and that it violated a landmark federal children’s privacy law.

    As part of the agreement, Epic will pay $275 million to the US government to resolve claims it violated the Children’s Online Privacy Protection Act (COPPA) by gathering the personal information of kids under the age of 13 without first receiving their parents’ verifiable consent. It is the largest penalty the FTC has ever imposed for violating a rule it enforces, the agency said Monday.

    In a second and separate settlement, Epic will pay $245 million as refunds to consumers who were allegedly harmed by user-interface design choices the FTC claimed were deceptive. That agreement is the largest administrative order in FTC history, the FTC added.

    In a blog post addressing the twin settlements, Epic said the agreement reflects an evolution in how US laws are applied to the video gaming industry.

    “No developer creates a game with the intention of ending up here,” Epic said in the blog post. “We accepted this agreement because we want Epic to be at the forefront of consumer protection and provide the best experience for our players.”

    FTC Chair Lina Khan said the settlement reflects the agency’s heightened focus on privacy and so-called “dark patterns,” a term used to describe design elements intended to nudge users toward a company’s preferred result.

    “Protecting the public, and especially children, from online privacy invasions and dark patterns is a top priority for the Commission, and these enforcement actions make clear to businesses that the FTC is cracking down on these unlawful practices,” Khan said in a statement.

    The FTC’s complaint and proposed settlement dealing with children’s privacy was filed in the US District Court for the Eastern District of North Carolina. In addition to the alleged illegal collection of children’s data, the FTC also claimed that Epic’s default settings for matchmaking and in-game communications exposed children to bullying and harassment.

    The allegations of Epic’s deceptive design choices were filed as an FTC administrative complaint. The complaint claims Epic made it extremely easy for children to purchase in-game items with a single click or button press without parental approval, resulting in more than one million parental complaints to Epic about unwanted charges.

    The FTC further alleged that Epic made it more difficult to cancel purchases of in-game items by burying the option at the bottom of the screen and by requiring consumers to push and hold a button on their controllers to complete the cancellation. Those design choices were allegedly implemented after surveys showed that, when the cancel button was more prominently displayed, accidental charges were the “number one ‘reason’” users clicked on the button, the FTC said.

    Epic’s agreement with the FTC, which is not yet final, prohibits the company from using dark patterns or charging consumers without their consent, and also forbids Epic from locking players out of their accounts in response to users’ chargeback requests with credit card companies disputing unwanted charges. The agreement will last for 20 years from the time it is adopted.

    In its blog post, Epic said it has agreed with the FTC to implement a feature that explicitly asks Fortnite users whether to save their payment information for future use. The feature is currently live, it added. The company also recently rolled out a more limited version of “Fortnite” for younger players that allows them to access some features while awaiting parental consent but that restricts chat and purchases.

    The FTC said that as part of its children’s privacy settlement, Epic may no longer enable text and voice chat by default for teenage Fortnite players or those under the age of 13. The company must also establish a comprehensive privacy program and delete the data it allegedly gathered in violation of COPPA.

    “We share the underlying principles of fairness, transparency and privacy that the FTC enforces, and the practices referenced in the FTC’s complaints are not how Fortnite operates,” Epic wrote. “We will continue to be upfront about what players can expect when making purchases, ensure cancellations and refunds are simple, and build safeguards that help keep our ecosystem safe and fun for audiences of all ages.”


  • Indiana Attorney General files lawsuits against TikTok | CNN Business



    New York
    CNN Business
     — 

    Indiana Attorney General Todd Rokita on Wednesday announced he has filed two separate lawsuits against TikTok, accusing the company of making false claims about the safety of user data and the age-appropriateness of its content.

    “The TikTok app is a malicious and menacing threat unleashed on unsuspecting Indiana consumers by a Chinese company that knows full well the harms it inflicts on users,” Rokita said in a statement. “With this pair of lawsuits, we hope to force TikTok to stop its false, deceptive, and misleading practices, which violate Indiana law.”

    The lawsuits mark the most serious action taken yet by a state against TikTok, amid increasing attention to and concern about TikTok from state and federal officials in recent months. Also on Tuesday, Texas Governor Greg Abbott ordered state agencies to ban TikTok use on government-issued devices, citing the threat of “gaining access to critical U.S. information and infrastructure,” following the lead of several other states, including South Dakota and Maryland.

    TikTok does not comment on pending litigation, but said, “the safety, privacy and security of our community is our top priority,” according to a statement from a company spokesperson.

    “We build youth well-being into our policies, limit features by age, empower parents with tools and resources, and continue to invest in new ways to enjoy content based on age-appropriateness or family comfort,” the spokesperson said. “We are also confident that we’re on a path in our negotiations with the U.S. Government to fully satisfy all reasonable U.S. national security concerns, and we have already made significant strides toward implementing those solutions.”

    The first lawsuit, which was filed on Tuesday, alleges that TikTok lures children onto the platform by falsely claiming it is friendly for users between 13 and 17 years old.

    American teens spend an average of 99 minutes per day on the app, the lawsuit claims, and in that time they’re exposed to content that can contain drug and alcohol use, nudity and intense profanity.

    The suit claims this exposure can negatively influence the behaviors of minors.

    A second lawsuit, filed on Wednesday, alleges that TikTok has “reams” of highly sensitive data and personal information about consumers in Indiana and that the company “has deceived those consumers to believe that this information is protected from the Chinese government and Communist Party,” the media release said.

    The lawsuit claims that TikTok’s European privacy policy has been updated “to clearly state that it permits individuals outside of Europe, including China, to access European user data,” while the company has “made no such update to its U.S privacy policy, which applies to Indiana consumers, explicitly informing them that their data is accessed by individuals and entities in China.”

    Both suits seek monetary civil penalties against TikTok, along with injunctive relief.

    TikTok has for years grappled with bipartisan concerns in Washington about the possibility that US user data could find its way to the Chinese government and be used to undermine US interests, thanks to a national security law in that country that compels companies located there to cooperate with data requests. And there has been renewed criticism of TikTok this year, stemming from a BuzzFeed News report in June that said some US user data has been repeatedly accessed from China. The reporting cited leaked audio recordings of dozens of internal TikTok meetings, including one where a TikTok employee allegedly said, “Everything is seen in China.”

    In a response to the report, TikTok previously said it “has consistently maintained that our engineers in locations outside of the US, including China, can be granted access to US user data on an as-needed basis under those strict controls.” The Committee on Foreign Investment in the United States, a multi-agency government body charged with reviewing business deals involving foreign ownership, has spent months negotiating with TikTok on a proposal to resolve concerns that Chinese government authorities could seek to gain access to the data TikTok holds on US citizens.

    A TikTok executive testified before a Senate panel earlier this year that it doesn’t share information with the Chinese government and that a US-based security team decides who can access US user data from China, but stopped short of committing to cut off flows of US user data to China.

    The popular video-based app has also faced questions about the safety of young users after rocketing to popularity during the Covid-19 pandemic. Last year, TikTok’s Head of Public Policy Michael Beckerman joined executives from Snap and YouTube in a Senate hearing about children’s safety, during which he said TikTok is working to “keep its platform safe and create age appropriate experiences” but added “we do know trust must be earned.”

    And earlier this year, a group of state attorneys general announced an investigation into TikTok’s impact on young Americans focused on the app’s user engagement techniques and alleged risks that the platform may pose to the mental health of children. (At the time, TikTok said that it limits its features by age, provides tools and resources to parents, and designs its policies with the well-being of young people in mind.)


  • Irish regulator fines Meta $275 million for violations of Europe’s data privacy law | CNN Business



    Washington
    CNN Business
     — 

    Meta has been fined roughly $275 million by Ireland’s data privacy regulator for failing to prevent hackers from siphoning off personal information from more than 500 million Facebook users in a 2019 data leak.

    Monday’s announcement marked the fourth time in about a year that Facebook’s parent company has been penalized by the Irish Data Protection Commission, the chief privacy regulator overseeing Meta’s operations in Europe. The decision to impose the fine was made last Friday, the commission said.

    Since the fall of 2021, Ireland’s DPC has slapped Meta with 912 million euros in fines, going after the social media titan and its other subsidiaries, Instagram and WhatsApp, for alleged violations of Europe’s signature data privacy law, known as the General Data Protection Regulation (GDPR).

    Earlier this fall, Meta was hit with a 405 million euro fine over Instagram’s handling of children’s data, the second-largest GDPR fine in history. Other enforcement actions, in March 2022 and September 2021, led to fines of 17 million euros and 225 million euros, respectively.

    In a statement Monday, a Meta spokesperson said it was reviewing the DPC’s decision “carefully” and that it had cooperated fully with the agency’s investigation.

    The probe began last April after Business Insider reported that more than half a billion Facebook users’ details had been posted on an underground hacker website. At the time, Facebook said malicious actors had abused its contact importer tool to match known phone numbers against the profiles of Facebook users before harvesting additional information from their profiles.

    “Protecting the privacy and security of people’s data is fundamental to how our business works,” Meta said in Monday’s statement. “We made changes to our systems during the time in question, including removing the ability to scrape our features in this way using phone numbers. Unauthorised data scraping is unacceptable and against our rules and we will continue working with our peers on this industry challenge.”

    The Irish DPC’s decision comes amid broad criticism by privacy advocates that regulators have moved slowly and hesitantly to enforce GDPR, which went into effect in 2018.

    The largest GDPR fine to date was imposed last year on Amazon for 746 million euros by privacy regulators in Luxembourg, who said the way the e-commerce company processes personal data does not comply with the law. Amazon is fighting the penalty.


  • What a Republican-controlled House could mean for Silicon Valley | CNN Business



    Washington
    CNN Business
     — 

    With Republicans projected to take control of the House as a result of the midterm elections, tech giants such as Amazon, Google and Meta, which have been in the crosshairs of Democrats in recent years, are soon set to face a very different — but no less hostile — political climate in Washington.

    Under the current Democratic-led Congress, top tech executives have been hauled before lawmakers to testify on everything from their companies’ market dominance to social media’s impact on teen mental health. Democrats have hammered away at online platforms’ handling of hate speech and white nationalism, while promoting legislation that could drastically affect the business models of big tech companies.

    In the lame duck session, Democratic lawmakers could renew their attempts at passing tech-focused antitrust legislation that the industry’s biggest players have spent millions lobbying against.

    Republicans aren’t likely to let up the pressure, policy analysts say. But a change in power in the House would likely mean renewed focus on some political priorities — primarily allegations of anti-conservative social media bias — and perhaps an increased emphasis on China and related national security risks, too.

    Here’s what the results of the midterm elections could mean for Big Tech and the push to regulate it.

    In general, tech companies may face more political noise with a Republican House but potentially less policy risk.

    “Republican gains would be good for megacap tech like Google and Apple,” said Paul Gallant, an industry analyst at Cowen Inc. “Republicans will hold hearings about content bias, but they’re not likely to pass antitrust legislation, which is the biggest threat the companies have faced in years.”

    Expect more of the uncomfortable ritual grillings that have made tech CEOs and their lieutenants a frequent sight in Washington, said one industry official who requested anonymity in order to speak more freely.

    “I think the content moderation debate will not just look at how companies make decisions on their platforms, but also how they’ve interacted with the Biden administration,” the official predicted. “The focus will be, ‘Are you too cozy with, and is your content moderation policy led by, feedback you get from the Biden administration?’”

    One company that may see a reprieve is Twitter, whose new owner, Elon Musk, has won plaudits from conservatives for suggesting he could restore former President Donald Trump’s banned Twitter account, among others, and has used his account to endorse voting for Republicans in the 2022 midterm elections.

    The hearings could culminate in more sweeping proposals to roll back Section 230 of the Communications Decency Act, the federal law that grants tech platforms broad latitude to moderate online content as they see fit.

    In the past, Democrats have called for narrowing Section 230, thus exposing tech platforms to more lawsuits, for not removing hate speech and extremist content more aggressively. Republicans have called for expanding platform liability over allegations that social media companies unfairly remove conservative speech.

    Previous legislative proposals to scale back Section 230 have tended to run into constitutionality questions or failed to attract bipartisan support, and those hurdles still remain. But some digital rights advocates who have defended Section 230 aren’t taking anything for granted, saying that if they squint, they can still see a path forward for legislation that might curtail the law.

    “The thing I’m most worried about in the next Congress is a bad Section 230 bill that’s framed as being about ‘protecting kids’ or ‘stopping opioid sales’ or something that sounds non-controversial, but could have far-reaching negative effects” that may unintentionally result in more conservative speech being removed, not less, said Evan Greer, deputy director of Fight for the Future, a digital privacy group.

    Given President Joe Biden’s criticism of Section 230 — a position the White House reiterated as recently as September — he might even be willing to sign such a hypothetical bill. But that scenario is far too premature to consider right now, according to other analysts who point to the Supreme Court, not Congress, as the center of gravity on Section 230.

    There are two high-profile cases pending before the Court that could powerfully affect the law’s scope. The cases have to do with whether tech platforms can be sued in connection with federal anti-terrorism laws; if the Court finds that they can, it would effectively mean a significant narrowing of Section 230’s protections. And it could create openings for others to continue chipping away at the law.

    “Republicans in Congress certainly have their views on content moderation, but the big thing to look for is what the Supreme Court does,” said Andy Halataei, executive vice president of government affairs for the Information Technology Industry Council, a tech-backed advocacy group. “That will drive either the opportunity or the consensus for Congress to move forward.”

    Both parties have been hawkish on China, but expect Republicans to make it a pillar of their agenda. Within the first few days of the new Congress, Republicans could seek to establish a new select committee devoted to China and its impact on US supply chains, according to the industry official.

    The new committee would likely look at the economic leverage China may have over the United States and the national security risks that could pose, ranging from China’s dominance in the rare earth minerals market to agricultural products, the official said.

    And while Republicans would likely bring even greater scrutiny to businesses with links to China, including TikTok, they also would have a substantial impact on the semiconductor industry by exploring further ways to restrict Chinese access to US technology.

    “Republican gains wouldn’t be great for the chips and tools companies because the China hawks will gain power,” said Gallant.

    In a subsequent research note for investors, Gallant added: “For some China hawks — including likely House Foreign Affairs Chair Mike McCaul — Biden can’t go far enough,” suggesting Republicans could try to introduce even more restrictions on China exports through legislation.

    Multiple Congress-watchers told CNN that support for federal privacy legislation is still bipartisan and the area remains one of a handful where lawmakers could make progress in the next Congress.

    One proposal, known as the American Data Privacy and Protection Act, would enshrine the nation’s first-ever consumer data privacy right into US law. It was approved by a key House committee this year and policy analysts say it could see more opportunities to advance next year.

    The privacy issue is becoming more salient to consumers by the day, said Greer, as the Supreme Court’s decision to overturn Roe v. Wade has made the security of location data, search histories and other personal information a critical safety matter.

    “Hot button tech policy fights like data privacy, antitrust, and content moderation have massive implications for core issues like abortion access, voting rights, racial justice, and LGBTQ+ protections,” Greer said.


  • TikTok makes clear European data can be accessed by China-based employees | CNN Business



    Washington
    CNN Business
     — 

    TikTok updated its privacy policies for European users on Tuesday, adding explicit disclosures that personal data from the app may be viewed by employees in China.

    The update aligns with what TikTok executives have said publicly. But the addition reflects the intense scrutiny TikTok has faced over its international data flows.

    The announcement, which TikTok said was aimed at providing greater transparency, applies to users in the European Economic Area, the UK and Switzerland — not the United States, though TikTok said it does store European users’ data in the US and in Singapore.

    In addition to China, TikTok data may be handled by employees in countries including Brazil, Canada, Israel, Japan, Malaysia, the Philippines, Singapore, South Korea and the US, the company said. Access to European user data, TikTok added, is allowed for “certain employees within our corporate group” and is “based on a demonstrated need to do their job.”

    TikTok also said those employees’ access is governed by “robust security controls” and occurs “by way of methods that are recognized under the GDPR,” the European Union’s signature privacy law.

    “In order to operate a global platform designed for sharing joyful content, we rely on a global workforce to ensure that our community’s TikTok experience is consistent, enjoyable and safe,” Elaine Fox, TikTok’s head of privacy in Europe, wrote in the company’s announcement.

    US policymakers have grown increasingly vocal about concerns the Chinese government could pressure TikTok or its parent company, ByteDance, to hand over users’ personal data under the country’s national security laws.

    Amid those fears, TikTok has spent months negotiating with the federal government on a possible national security agreement that would allow it to continue operating in the United States. TikTok has also migrated US user data from proprietary servers in the US and Singapore to cloud-based servers hosted by Oracle.

    But that has not dampened criticism that user data could still be accessed by China-based individuals subject to that country’s security laws, a practice TikTok has not committed to stopping and that Tuesday’s European policy update confirms will continue.


  • The White House released an ‘AI Bill of Rights’ | CNN Business




    CNN
     — 

    The White House on Tuesday released a set of guidelines it hopes will spur companies to make and deploy artificial intelligence more responsibly and limit AI-based surveillance, despite the fact that there are few US laws compelling them to do so.

    The guidelines, which have been in the works for a year, are not binding in any way. But the White House hopes they will convince tech companies to take additional steps to protect consumers, including clearly explaining how and why an automated system is in use and designing AI systems to be equitable. The blueprint joins a number of other voluntary efforts to adopt rules regarding transparency and ethics in AI, which have come from government agencies, companies and non-government groups.

    Though the use of AI has proliferated in recent years — being used for everything from confirming people’s identities for unemployment benefits to generating a highly realistic picture in response to a written prompt — the US legislative landscape has not kept pace. There are no federal laws specifically regulating AI or applications of AI, such as facial-recognition software, which privacy and digital rights groups have criticized for years over privacy concerns and for leading to the wrongful arrests of at least several Black men, among other issues.

    A handful of individual states have their own rules. Illinois, for instance, has a law known as the Biometric Information Privacy Act (BIPA), which forces companies to get permission from people before collecting biometric data like fingerprints or scans of facial geometry. It also allows Illinois residents to sue companies for alleged violations of the law. Since 2019, a number of communities and some states have also banned the use of facial-recognition software in various ways, though a few have since pulled back on such rules.

    The Blueprint for an AI Bill of Rights includes five principles: That people should be protected from systems deemed “unsafe or ineffective;” that people shouldn’t be discriminated against via algorithms and that AI-driven systems should be made and used “in an equitable way;” that people should be kept safe “from abusive data practices” by safeguards built in to AI systems and have control over how data about them is used; that people should be aware when an automated system is in use and be aware of how it could affect them; and that people should be able to opt out of such systems “where appropriate” and get help from a person instead of a computer.

    “Much more than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies,” said Alondra Nelson, the deputy director of the White House Office of Science and Technology Policy, during a press briefing.

    While some privacy and technology advocates responded positively to the guidelines, they also pointed out that they are just that, guidelines — and not legally binding.

    In a statement, Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, a Washington, DC-based nonprofit, said, “Today’s agency actions are valuable, but they would be even more effective if they were built on a foundation set up by a comprehensive federal privacy law.”

    In a separate statement, ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, called the principles “an important step in addressing the harms of AI” and added that “there should be no loopholes or carve-outs for these protections.”

    “It’s critical that the Biden administration use all levers available to make the promises of the Bill of Rights blueprint a reality,” Moore said.


  • Biden signs order to implement EU-US data privacy framework | CNN Business




    Reuters
     — 

    President Joe Biden on Friday signed an executive order to implement a European Union-United States data transfer framework announced in March that adopts new American intelligence gathering privacy safeguards.

    The deal seeks to end the limbo in which thousands of companies found themselves after Europe’s top court threw out two previous pacts due to concerns about U.S. surveillance.

    U.S. Commerce Secretary Gina Raimondo told reporters the executive order “is the culmination of our joint effort to restore trust and stability to transatlantic data flows” and “will ensure the privacy of EU personal data.”

    The framework addresses the concerns of the Court of Justice of the European Union, which ruled that the prior EU-U.S. Privacy Shield framework was not a valid data transfer mechanism under EU law.

    The White House said “transatlantic data flows are critical to enabling the $7.1 trillion EU-U.S. economic relationship” and the framework “will restore an important legal basis for transatlantic data flows.”

    The White House said Biden’s order bolstered current “privacy and civil liberties safeguards” for U.S. intelligence gathering and created an independent, binding multi-layer redress mechanism for individuals who believe their personal data was illegally collected by U.S. intelligence agencies.

    EU officials said it would take about six months for the framework to complete a complex approval process, noting the previous system offered redress only through an ombudsperson inside the U.S. administration, an arrangement the EU court rejected.

    Biden’s order adopts new safeguards on the activities of U.S. intelligence gathering, requiring they do only what is necessary and proportionate, and creates a two-step system of redress – first to an intelligence agency watchdog then to a court with independent judges, whose decisions would bind intelligence agencies.

    Biden and European Commission President Ursula von der Leyen in March said the provisional agreement offered stronger legal protections and addressed the EU court’s concerns.

    Raimondo on Friday will transmit a series of letters to the EU from U.S. agencies “outlining the operation and enforcement of the EU-U.S. data privacy framework” that “will form the basis for the European Commission’s assessment in a new adequacy decision,” she said.

    Under the order, the Civil Liberties Protection Officer (CLPO) in the U.S. Office of the Director of National Intelligence will investigate complaints and make decisions.

    The U.S. Justice Department is establishing a Data Protection Review Court to independently review CLPO’s decisions. Judges with experience in data privacy and national security will be appointed from outside the U.S. government.

    European privacy activists have threatened to challenge the framework if they do not think it adequately protects privacy.


  • Why ‘Ring Nation’ may be the most dystopian show on TV | CNN Business




    CNN Business
     — 

    Anyone watching the first episode of “Ring Nation” this week would have seen short clips of a man finding out his wife was pregnant with triplets, an uninvited iguana showing up at someone’s front door and an unsuspecting teenage boy being chased down by a crane in his driveway.

    “Ring Nation,” marketed as a modern take on the classic “America’s Funniest Home Videos” franchise, quietly premiered on Monday on dozens of cable channels in over 70 US cities. But despite the light subject matter, it may be among the most controversial productions currently on television.

    The show repurposes clips captured by Amazon-owned Ring doorbell cameras, as well as other home videos, and is produced by Amazon-owned MGM Studios. Advocacy groups have criticized “Ring Nation” both as an example of the e-commerce giant’s vast reach into consumers’ lives and for effectively making light of surveillance technology.

    Ring devices, which are intended to provide additional security at home, have long faced scrutiny from lawmakers over how their footage can be accessed and used by law enforcement. As of July, Ring had provided surveillance footage to law enforcement without a warrant or the consent of doorbell owners 11 times in 2022, according to a letter Amazon sent to Congress that month.

    Ahead of the show’s premiere, tens of thousands of people signed an online petition calling for “Ring Nation” to be canceled.

    “The show is making a mockery of the very real harms caused by Ring devices by essentially rebranding surveillance as entertainment,” said Myaisha Hayes, an organizer with MediaJustice, one of the creators of the petition. “With ‘Ring Nation,’ they’re trying to make viral videos trendy and entertaining this way, so more people buy these devices.”

    Beyond that, Hayes also said the show highlights “Amazon’s monopoly power.” As she put it: “This is an Amazon-owned studio producing a show about an Amazon surveillance product.”

    A Ring spokesperson told CNN in a statement that the program “showcases a wide variety of videos like the silly ways a dad picks up his daughter from school recorded on a smart phone and a man telling jokes to his family via video doorbell.” The spokesperson added: “We think that viewers will be delighted by these memorable moments shared by others.”

    The company said privacy is foundational to the show, and “Ring Nation” secures permission to use the video from both the owner and anyone identifiable in the clip.

    Still, privacy advocates say these cameras can potentially be used to capture far more sensitive footage than cute animal interactions and dad jokes. The show’s debut also comes at a time when the stakes for digital privacy have arguably never been higher. With the overturning of Roe v. Wade, privacy experts have warned that digital data could be used to punish abortion seekers.

    Evan Greer, the director of digital privacy group Fight for the Future, which is also sponsoring the petition calling for the show’s cancellation, said much of Amazon’s business depends on collecting data and engaging in forms of surveillance, whether it be through its website, smart speakers or doorbell cameras.

    “Surveillance as a kind of ethos really runs throughout every single thing that Amazon does,” Greer said. With that in mind, Greer argues the light-hearted format of “Ring Nation” is an “incredibly insidious attempt” to make this mass surveillance “feel not just normal, but fun.”

    Ultimately, Greer views the growing surveillance network of Ring cameras as “a threat not just to our civil rights, but to our understanding of what type of future we want to live in.”

    In other words: It may make for entertaining TV, but it doesn’t make for a better society.


  • Microsoft to pay $20 million to settle Xbox Live privacy allegations | CNN Business



    Washington
    CNN
     — 

    Microsoft will pay $20 million to settle US government allegations that the tech giant violated children’s privacy by illegally collecting their personal information through its Xbox Live gaming service.

    According to the Federal Trade Commission, Microsoft broke the law by failing to tell parents about the full breadth of information it gathered from kids under the age of 13.

    That information, the FTC said in a lawsuit filed Monday, included the fact that children may share images of themselves in their account profiles, as well as video and audio recordings of themselves, their real names and logs of their activity on the platform.

    Microsoft also allegedly kept for years the personal information of millions of people, including children, who started creating accounts with Xbox Live but who never completed the sign-up process.

    “Even when a user indicated that they were under 13, they were also asked, until late 2021, to provide additional personal information including a phone number and to agree to Microsoft’s service agreement and advertising policy, which until 2019 included a pre-checked box allowing Microsoft to send promotional messages and to share user data with advertisers,” the FTC said in a release.

    In a statement, Microsoft said: “We recently entered into a settlement with the U.S. Federal Trade Commission (FTC) to update our account creation process and resolve a data retention glitch found in our system. We are committed to complying with the order.”

    Parental settings give adults some control over what their children’s accounts show to other users. For example, Xbox Live’s default settings restrict who children can interact with on the service, the FTC said. But other default settings, the agency alleged, allow kids to access third-party games and apps with minimal friction.

    Microsoft failed to sufficiently disclose to parents what information the company was collecting from kids and how it was being used, the FTC said, alleging violations of the Children’s Online Privacy Protection Act (COPPA).

    In agreeing to settle the claims, Microsoft committed to several additional measures beyond the financial penalty.

    Microsoft agreed to delete any personal information it collects from kids if they don’t complete the account registration process. It also agreed to tell third-party game publishers when a user may be a child, effectively putting the third-party publishers on notice to comply with COPPA in handling the user’s information.

    The settlement comes as the FTC has challenged Microsoft’s $69 billion acquisition of video game giant Activision Blizzard, a proposed deal that would turn Microsoft into the world’s third-largest game publisher and give it control over popular franchises such as “Call of Duty” and “World of Warcraft.”

    US and UK officials have alleged that the acquisition could give Microsoft anti-competitive control over the games industry by allowing it to withhold titles from rival platforms, particularly in the nascent cloud gaming sector. To address those concerns, Microsoft has struck licensing deals with other companies to ensure their customers continue to have access to Activision games after the deal closes.

    Those concessions have convinced the European Union to approve the deal, but litigation to block the deal involving US and UK regulators remains ongoing.


  • What the chaos at Twitter means for the future of social movements | CNN Business


    Editor’s Note: The CNN Original Series “The 2010s” looks back at a turbulent era marked by extraordinary political and social upheaval. New episodes air at 9 p.m. ET/PT Sundays.



    CNN
     — 

    When thousands of Egyptians marched through the streets during the Arab Spring of 2011, they had a tool at their disposal that earlier social movements didn’t: Twitter.

    A key group of activists used the platform to form networks and organize protests against the authoritarian regime, while many more demonstrators used it to disseminate information and images from the ground for the rest of the world to see. Months later, organizers from the Occupy Wall Street movement took to Twitter to coordinate protests in New York and beyond.

    Twitter fostered public conversation around the Black Lives Matter movement after the 2014 police killing of Michael Brown in Ferguson, Missouri, and again after the 2020 police killing of George Floyd. It amplified #MeToo in the aftermath of the sexual assault allegations against Hollywood producer Harvey Weinstein, and catapulted other revolutionary movements around the world to global attention.

    “You can’t underestimate the impact of Twitter to social movements,” Amara Enyia, manager of policy and research for the Movement for Black Lives, told CNN.

    Twitter has often been heralded as a democratizing force, bringing previously marginalized voices to the forefront and giving the public a platform to demand accountability from leaders. (It has also enabled the spread of misinformation, extremist ideas and abusive content.)

    But since Elon Musk acquired Twitter last year and the platform plunged into chaos, some organizers and digital media experts have been bracing for the impact that his controversial policy changes and mass layoffs may have on social movements going forward.

    Though Twitter has often been referred to as a public square, some of Musk’s recent moves challenge that description.

    Through Twitter, organizers and political groups have had a level of direct access to policymakers and leaders that wouldn’t have been possible in person, said Rachel Kuo, an assistant professor of media and cinema studies at the University of Illinois, Urbana-Champaign. Verified activists were able to promote certain messages that the algorithm then pushed to the top of users’ feeds, organizers could launch campaigns that caught the attention of high-profile figures and the public could follow along for real-time updates.

    “There are now issues in how people see Twitter as a source of information and a source of political community,” said Kuo, whose research focuses on race, social movements and digital technologies. “It isn’t seen in the same way anymore.”

    Elon Musk's controversial policy changes at Twitter could have implications for social movements, some activists say.

    Musk upended traditional Twitter verification and turned it into a pay-for-play system, leading to the impersonation of government accounts and the spread of fake images. For organizers who opt not to pay the monthly subscription fee for a blue check, that also means a loss of credibility and visibility, Kuo added.

    Twitter, which has cut much of its public relations team under Musk, did not respond to a request for comment.

    Twitter’s role in information-sharing has been disrupted in other ways, too.

    The platform has been plagued by technical glitches after mass layoffs and departures at the company, frustrating many users. People have also reported that the “for you” timeline is showing them content they aren’t interested in.

    As a result of these issues and others, some are leaving Twitter altogether – more than 32 million users are projected to exit the platform in the two years following Musk’s takeover, according to a December 2022 forecast from the market research agency Insider Intelligence. (Twitter reported having 238 million monetizable daily active users last year before Musk acquired it.)

    With fewer people on Twitter, the platform becomes less centralized and the information landscape more fractured, said Sarah Aoun, a privacy and security researcher who works on cybersecurity for the Movement for Black Lives. That makes it harder for activists to connect, exchange tactics and build solidarity in the way they once did.

    Protesters in Cairo gather in Tahrir Square in November 2011.

    Musk’s approach to content moderation has also made Twitter a more hostile environment, Aoun said. Twitter has never been a completely safe space for marginalized voices – women, people of color, LGBTQ people and other vulnerable groups have long been targets of online harassment and abuse – but reports from the Center for Countering Digital Hate and Anti-Defamation League indicate an increase in hate speech on the platform under Musk’s leadership. (Musk has previously pushed back at that characterization by focusing on a different metric.)

    Some are also disillusioned over Musk’s decision to reinstate users who were previously suspended for violating the platform’s rules, including former President Donald Trump and GOP Rep. Marjorie Taylor Greene.

    “The lack of verification, the mass exodus, the inability to coordinate the way that we used to be able to coordinate and the content moderation (gutting) makes it a very difficult platform to be on at the moment,” Aoun said.

    Musk has stepped back as Twitter’s CEO, a role now held by former NBCUniversal marketing executive Linda Yaccarino. But he will maintain significant control over the platform as the company’s owner, executive chairman and chief technology officer.

    The changes at Twitter have prompted some activists and organizers to reassess their relationships with the platform.

    Rich Wallace, executive director of the Chicago-based organization Equity and Transformation (EAT), said he used to see robust engagement on tweets about social injustice or racial inequity, whether from people who agreed with him or not. Now, he finds that substantive posts barely get traction compared with tweets he considers more mundane.

    Wallace said his organization, which seeks to build social and economic equity for Black workers in the informal economy, still shares information about community events on Twitter, but the potential to find new allies or engage in meaningful conversation on the platform is largely a thing of the past.

    Twitter is no longer the space for education and community building that it once was, Wallace said. It’s a shift from how he once viewed the platform, but he isn’t especially concerned. For his organization, it simply means a renewed emphasis on the grassroots, in-person work it was already doing.

    People raise their fists in June 2020 as they protest the police killing of George Floyd.

    “As organizers, we’ve been creative in how we organize around barriers,” he said. “This is just one of the newer barriers that we have to assess and organize through.”

    As Kuo sees it, the ways that the changes at Twitter will affect organizing and activism will vary widely. Hyperlocal community organizers or those who work with populations that don’t speak English aren’t typically using Twitter in their day-to-day work, and so the recent shifts likely won’t affect them drastically. But she predicts that mid-to-large nonprofit organizations with communications staff might be rethinking their strategy on the platform.

    “It’s very dependent on organizational structure, form, strategies for change and political vision,” Kuo said.

    Enyia said that on a personal level, she engages with people on Twitter less often and uses the platform more to keep up with news. But in her advocacy work with the Movement for Black Lives, it remains an important tool.

    “For us, its utility is in the fact that it creates more access points to our policy platform, to the issues that we’re advocating on,” she said. “And in that regard, it’s still very, very useful.”

    When Musk first took over Twitter, some organizers and activists flocked to other alternatives, such as Mastodon or Bluesky (an app backed by Twitter co-founder and former CEO Jack Dorsey).

    Neither appears to fulfill the same purpose that Twitter once did, Aoun and others said. Mastodon and Bluesky are decentralized, and fewer people use them, making it harder to build community. And while their numbers are growing, they remain far smaller than Twitter’s.

    The Bluesky app is seen on a phone and laptop in June 2023.

    In the case of Mastodon, there are privacy and security issues that concern some activists. Because the social network allows users to join different servers run by various groups and individuals, Aoun said “the privacy, security and content moderation is basically as good as the person behind the server.” Twitter – at least before Musk took over – had dedicated privacy and security teams, offering more transparency about how their systems worked.

    Some activists are using popular social networks such as Instagram and TikTok, but the visual nature of those platforms versus the text-based medium of Twitter changes how people are able to interact and engage with each other, Kuo said.

    Twitter has been an incredibly powerful tool for social movements, Enyia said. But ultimately, the platform is just that – a tool.

    “There is no panacea for just the nuts and bolts work that it takes to meet people, to engage people, to organize and talk to people,” Enyia said. “So even if we recognize that social media is a tool, we don’t put all of our eggs in that basket.”

    Social media platforms come and go, and the same could happen to Twitter. So while Enyia’s organization continues to use the platform for its own ends, it’s prepared for a reality in which Twitter is less relevant.

    “We have to stay on top of it to make sure that the tools are serving their purpose as it relates to our work,” Enyia said. “But then we have to be ready to evolve or to move on or to adapt to different tools when it becomes clear that that’s the direction we have to go.”
