Paris, France — French authorities have asked Elon Musk to appear to answer questions as part of a probe into his social media platform X, the Paris prosecutor’s office said Monday, as authorities searched X’s office in the French capital.
“Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the Paris prosecutor’s office said in a statement.
French cybercrime authorities were carrying out a search, meanwhile, at X’s offices in Paris, the prosecutor’s office said.
The summonses for Musk and Yaccarino and the search at the X office were related to an investigation launched in January 2025 over complaints about how X’s algorithm recommends content to users and gathers data, the prosecutor’s office said. Officials have previously raised concern that the way X works could amount to political interference.
The investigation is meant to ensure that X complies with French law. The prosecutor’s office added that the probe was broadened last year after reports that X was allowing users to share nonconsensual, AI-generated sexually explicit imagery and Holocaust denial content.
X and Musk have dismissed the French investigation, and similar probes by European Union and British authorities, as baseless, politically motivated attacks on free speech.
Yaccarino resigned as CEO of X in July last year after two years at the helm of the company.
The investigation is being led by the cybercrime unit of the prosecutor’s office, in conjunction with French police and the joint European policing agency Europol.
A CBS News investigation found late last month that the Grok AI tool on Musk’s X platform still allowed users in the U.S., U.K. and EU to digitally undress people without their consent, despite public pledges from the company to stop the function.
The Grok chatbot, both via its standalone app and for premium X account holders using the platform, allowed people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis.
A request for comment on the findings of CBS News’ investigation was met with an apparent auto-reply from Musk’s company xAI, saying only: “Legacy media lies.”
Scrutiny of the Grok feature has mounted rapidly in recent months, with the British government warning X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and EU regulators announcing their own investigation into the Grok AI editing function in late January.
CBS News found Grok was still enabling users to digitally undress people in photos weeks after X said earlier in January that it had “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”
The police department in the Northern California city of Mountain View is suspending the use of automated license plate reader cameras after the discovery of unauthorized access to data by federal and state agencies, the police chief said on Monday.
In a letter to the community, Mountain View Police Chief Mike Canfield said he decided to turn off all Flock Safety ALPR cameras in the Silicon Valley hub because he no longer has confidence in the Flock system. Last week, it was disclosed that hundreds of federal and state law enforcement agencies had accessed the city’s ALPR data without the department’s knowledge.
“Like many of you, I was deeply disappointed to learn that Flock Safety did not meet the City’s requirements regarding our data access control and transparency,” Canfield stated in the letter. “The existence of access by out of state agencies, without the City’s awareness, that circumvented the protections we purposefully built and believed were in place is frankly unacceptable to me and to the dedicated people of the MVPD.”
On Jan. 30, the City of Mountain View said an audit of its ALPR system showed that the first ALPR camera deployed had been set to a “nationwide” setting by Flock Safety without MVPD’s permission or knowledge. As a result, between August and November 2024, data from the camera was accessed by the U.S. Bureau of Alcohol, Tobacco, Firearms and Explosives offices in Kentucky and Nashville, Tennessee; Langley Air Force Base in Virginia; the U.S. GSA Office of Inspector General; Lake Mead National Recreation Area in Nevada; and an Ohio Air Force base, according to the city, which added that it was unclear whether the searches resulted in license plate information being shared.
The audit also showed that a “statewide” search function was enabled on 29 of 30 cameras that were deployed, which was against protocols established for the pilot program, the city said. This function allowed Flock to enable access to state law enforcement agencies not approved by MVPD.
The statewide setting was immediately disabled on January 5 once the MVPD identified the issue, the city said.
“This is a system failure on Flock Safety’s part,” the city said in a statement last week. “MVPD has a policy and controls in place for the ALPR pilot. MVPD worked closely with Flock Safety during the outset of the program to design a model that strictly prohibited out-of-state data sharing and ensured that any agency receiving access to Mountain View’s data was approved by the Police Chief or his designee.”
Mountain View’s first ALPR camera went online in August 2024, and the final camera was installed last month.
“The council voted to put this in unanimously in 2024, and we were given a lot of assurances that we would have control over our data and who gets access to it, and it definitely would not be used by anyone in the federal government, and that clearly wasn’t the case,” said Mayor Emily Ann Ramos.
Canfield said the suspension of the Flock camera system was effective immediately and would remain inactive until the City Council provides further direction about the future of the pilot program.
“I share your anger and frustration regarding how Flock Safety’s system enabled out-of-state agencies to search our license plate data, and I am sorry that such searches occurred. I know how essential transparency is for maintaining trust and for community policing. This is why MVPD has been so open about what we learned and why we are pausing this program until our City Council can weigh in,” Canfield wrote in the letter.
The City Council is expected to discuss the ALPR system at its Feb. 24 meeting.
“We’ll have to make sure that our police department can maintain a high level of service for our residents,” said Ramos. “I’m just not sure we’re willing to make that trade-off with the LPRs again.”
Canfield said that despite the unauthorized ALPR access, the cameras enhance community safety and have helped officers investigate burglaries, home break-ins, and a reported kidnapping. He added that his department was looking into alternative vendors with a stronger track record of data protection, oversight, and transparency.
In an emailed statement to CBS News Bay Area, Flock Safety spokesperson Holly Beilin said, “We are working through Mountain View’s specific questions and concerns directly with the city, and will continue to engage with our partners in the Police Department and city government to resolve these issues. We look forward to resuming our successful partnership following the upcoming Council meeting.”
The City of Mountain View said last week that Flock had assured the city that its systems had been improved and were no longer enabling access outside of the State of California.
California law prohibits any ALPR information from being sold, shared, or transferred to out-of-state or federal agencies without a court order or warrant issued by a California court. The American Civil Liberties Union has warned that ALPR cameras can infringe on civil rights and potentially violate the U.S. Constitution’s Fourth Amendment by facilitating unreasonable searches and pervasive surveillance.
A six-month investigation by CBS News showed more than a dozen cases of ALPR errors leading to incidents of wrongful stops or instances of the technology being abused.
TikTok — now in U.S. hands after the social media service split from China-based ByteDance earlier this year — is raising concerns among some users about its new privacy policy, prompting questions about the scope of its data collection.
TikTok on Jan. 22 confirmed that a new U.S.-based entity was in control of the app, with the venture formed to sidestep a federal law that forced ByteDance to either sell its stake in the platform or be cut off from the U.S. market. That same day, the company posted its new user terms and conditions and privacy policy.
Backlash to the new policies quickly spread on social media, with some users saying they deleted the app over privacy fears, while others flagged the changes for their followers. One complaint: a new provision stating that TikTok may collect “precise location information” from users’ devices if they enable location services in their device settings.
Some social media users attacked the new privacy policy as “beyond invasive and predatory,” while others decried the app’s “surveillance.”
A shift under U.S. owners
TikTok’s new geolocation practices are a change from its previous policy under ByteDance, experts said.
“The change in location data is the most stark because the previous privacy policy had explicitly said that the current versions of the app do not collect precise GPS information,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center (EPIC), a public interest research center focused on data privacy.
She added, “Folks should be concerned about that. Your precise location data can be down to your address or even what floor you’re on in an apartment building.”
TikTok updated its privacy policy to include clearer language about location information, and plans to soon allow U.S. users to share their precise location with TikTok or opt out of that feature, according to a TikTok official. The company plans to use the precise location data to provide new services and features to users, the official noted.
TikTok’s new ownership includes software maker Oracle, private equity firm Silver Lake and Abu Dhabi-based investment firm MGX, which will own a combined 45% of the company.
Another 35% stake in TikTok will be owned by eight other investors, including Dell CEO Michael Dell’s personal investment office. ByteDance will retain 19.9% of the business, just below the 20% ownership cap allowed under federal law.
What TikTok collects under its new privacy policy
Some TikTok users are also expressing concern about the types of personal information the app says it may collect, although its previous privacy policy disclosed that it might collect the same types of data. Under both the new and previous policies, TikTok said it may collect the following types of information about users:
• Racial or ethnic origin
• National origin
• Religious beliefs
• Mental health diagnosis
• Physical health diagnosis
• Sexual life
• Sexual orientation
• Status as transgender
• Status as nonbinary
• Citizenship status
• Immigration status
• Financial information
• Government-issued identification numbers, such as a driver’s license number
But the new policy also changes how TikTok describes its handling of sensitive data. The company now states that it “processes such sensitive personal information in accordance with applicable law.”
The earlier policy framed this more narrowly, saying it used sensitive information only when needed to run the service or to comply with legal requirements — for example, using payment details to process a purchase or a driver’s license to verify a user’s identity.
The new language mirrors that of the California Consumer Privacy Act, a law that requires businesses to disclose the types of sensitive information they collect, including racial and ethnic origin, religious or philosophical beliefs, and information about a consumer’s sex life.
What about geolocation tracking?
The biggest change between TikTok’s current and previous privacy policies lies in their treatment of location tracking, which is now explicitly classified as sensitive data in the 2026 version.
“We may also collect precise location data, depending on your settings and as explained below,” the latest privacy policy states.
Both the older and newer versions note that TikTok may determine a user’s “approximate location” based on signals such as their SIM card region or IP address. But the new policy adds that TikTok is also allowed to collect a user’s “precise location” if the person has enabled location services for TikTok.
The new policy notes that users can “turn off location services from your device settings at any time.”
By contrast, the pre-2026 privacy policy explicitly stated that current versions of the app do “not collect precise or approximate GPS information from U.S. users.”
TikTok doesn’t yet have a toggle to allow people to switch off their precise location data because the company hasn’t yet added that tracking functionality to the app, the TikTok official said.
When the app rolls out the feature, users will see a prompt that asks whether they want to share their location, the TikTok official said.
Consumer advocates generally recommend that people turn off precise location tracking within the apps they use. For instance, X users can go to their “privacy and safety settings” and then click “location services” to see if they have enabled the app to track their exact location. It can be disabled by toggling the switch off.
However, even if precise location tracking is disabled, apps can still narrow down your general location through your IP address, according to Consumer Reports.
Can users opt out of TikTok’s new policies?
Since Jan. 22, when TikTok officially came under U.S. ownership, the app has presented users with a pop-up screen alerting them to the new terms of service and privacy policies. To continue to use the service, users must click “agree,” or else they are blocked from using TikTok.
“If the only choice is to accept the unnecessary collection and use of your location data, your citizenship data and other sensitive data, or not use the app at all, that’s not a real choice,” EPIC’s Fitzgerald told CBS News.
Do other social media apps track personal data?
Other social media apps also track personal data, including Meta and X. The latter’s privacy policy specifies that users can “choose to share your current precise location or places where you’ve previously used X by enabling these settings in your account.”
Americans are notoriously lax about providing apps with access to their personal data, although about 8 in 10 say they’re concerned about how corporations use the data they collect about them, Pew Research Center found in a 2023 survey.
Still, more than half of consumers agree to companies’ privacy policies without reading them, the study found.
New Delhi — India’s government revoked an order on Wednesday that had directed smartphone makers such as Apple and Samsung to install a state-developed and owned security app on all new devices. The move came after two days of criticism from opposition politicians and privacy organizations that the “Sanchar Saathi” app was an effort to snoop on citizens through their phones.
“Government has decided not to make the pre-installation mandatory for mobile manufacturers,” India’s Ministry of Communications said in a statement Wednesday afternoon.
The initial order, issued privately to phone makers by the ministry late last month, was leaked to Indian media outlets on Monday. It directed all phone makers to preinstall the Sanchar Saathi (which means Communication Partner in Hindi) app on new phones within 90 days, and also on older phones through software updates.
The order, first reported on Monday by numerous Indian media outlets and later acknowledged by the government, had asked manufacturers to ensure that the functions of the app could not be “disabled or restricted.”
There was an immediate backlash on Monday, with opposition political parties quickly labelling the government software a “snooping app” and drawing parallels to Pegasus, the hacking spyware developed, marketed and licensed to governments around the world by the Israeli company NSO Group.
On Tuesday, India’s national Minister of Communications Jyotiraditya Scindia insisted to journalists outside the parliament that the Sanchar Saathi app was non-compulsory and in line with democratic principles. He said smartphone owners could activate the app at their convenience to access its benefits, and could also delete it from their devices at any time.
He did not, however, say anything on Tuesday to deny or change the order to phone makers to ensure the app was pre-installed.
On Wednesday, Scindia insisted that “neither is snooping possible, nor it will be done” with the app.
While the order for it to be installed universally was revoked, the government continued defending the app on Wednesday, saying the intent had been to “provide access to cybersecurity to all citizens,” and insisting that it was “secure and purely meant to help citizens.”
Opposition politicians say “it is a snooping app”
The government’s U-turn came after sharp criticism from opposition political parties and digital rights advocates.
“It is a snooping app. It’s ridiculous. Citizens have the right to privacy. Everyone must have the right to privacy to send messages to family, friends, without the government looking at everything,” Priyanka Gandhi, leader of the opposition Congress party, told reporters outside India’s parliament on Tuesday.
“They brought in Pegasus and have been unable to keep it under control. MPs and MLAs all say that their phones are being tapped. For the last 11 years, basic rights of the Indians have been taken away… This is the real violation of National Security,” said Renuka Chowdhury, another Congress member.
Digital privacy advocates also raised concerns about the government order, saying it would breach citizens’ right to privacy in a country with more than 1.2 billion cell phone users.
“No government will ever be expected to acknowledge that a government app is a snooping tool, even in China and Russia, where such apps have been mandated,” Indian technology analyst Prasanto K. Roy told CBS News on Wednesday. “A government statement alone is not adequate to inspire confidence in this.”
Roy said the government should restrict the default permissions settings that enable the app to access data on smartphones to the absolute minimum, and explain why those permissions were deemed necessary. He added that the code for the app should be open-source and published online, to enable independent security professionals to scrutinise it.
“In plain terms, this converts every smartphone sold in India into a vessel for state-mandated software that the user cannot meaningfully refuse, control, or remove,” the Internet for Freedom organization said in a statement Tuesday, before the government revoked its order. “For this to work in practice, the app will almost certainly need system level or root level access … so that it cannot be disabled. That design choice erodes the protections that normally prevent one app from peering into the data of others, and turns Sanchar Saathi into a permanent, non-consensual point of access sitting inside the operating system of every Indian smartphone user.”
Technology analyst Roy told CBS News the real issue was “not about faith in the government’s benevolence,” but rather “concerns about potential access to a wide range of data by many junior or mid-level officials in government or law enforcement,” as there was no clarity about what data could be accessed via the app, or who would have access to it.
Major phone makers did not publicly react to the government order, but the Reuters news agency reported that Apple had planned to refuse to comply.
Indian government says it’s just trying to help
The government argues that the app allows users to track, block and recover lost or stolen smartphones using the device’s International Mobile Equipment Identity (IMEI), a unique code assigned to all handsets sold around the world.
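As an aside on how an IMEI is structured: the 15th digit is a check digit computed over the first 14 using the standard Luhn algorithm, which is how registries and apps can reject mistyped numbers. The snippet below is a minimal, illustrative sketch of that generic check; it is not code from, or a description of, the Sanchar Saathi app, and the sample number is a widely used documentation example rather than a real device.

```python
def is_valid_imei(imei: str) -> bool:
    """Check a 15-digit IMEI against its Luhn check digit.

    Illustrative only: this is the generic Luhn rule used for IMEIs,
    not code from the Sanchar Saathi app.
    """
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:      # double every second digit, counting from the left
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of the product
        total += d
    return total % 10 == 0


# Hypothetical example (a standard documentation IMEI, not a real handset):
print(is_valid_imei("490154203237518"))  # True
print(is_valid_imei("490154203237519"))  # False: wrong check digit
```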
It also enables users to check how many unique mobile data connections are registered under their name, which it says will help people identify and disable fraudulent numbers and accounts opened by scammers.
Other features include tools to report suspected fraudulent calls and to verify the authenticity of devices being used to make purchases, according to officials.
The government said in its multiple statements that the app had already been downloaded 14 million times and used to help trace 2.6 million lost or stolen phones. It said Sanchar Saathi had helped in the disconnection of over 4 million fraudulent connections, based on citizen reports.
The sensitive personal details of more than 450 people holding “top secret” US government security clearances were left exposed online, new research seen by WIRED shows. The people’s details were included in a database of more than 7,000 individuals who have applied for jobs over the last two years with Democrats in the United States House of Representatives.
While scanning for unsecured databases at the end of September, an ethical security researcher stumbled upon the exposed cache of data and discovered that it was part of a site called DomeWatch. The service is run by the House Democrats and includes videostreams of House floor sessions, calendars of congressional events, and updates on House votes. It also includes a job board and résumé bank.
After the researcher attempted to notify the House of Representatives’ Office of the Chief Administrator on September 30, the database was secured within hours, and the researcher received a response that simply said, “Thanks for flagging.” It is unclear how long the data was exposed or if anyone else accessed the information while it was unsecured.
The independent researcher, who asked to remain anonymous due to the sensitive nature of the findings, likened the exposed database to an internal “index” of people who may have applied for open roles. Résumés were not included, they say, but the database contained details typical of a job application process. The researcher found data including applicants’ short written biographies and fields indicating military service, security clearances, and languages spoken, along with details like names, phone numbers, and email addresses. Each individual was also assigned an internal ID.
“Some people described in the data have spent 20 years on Capitol Hill,” the researcher tells WIRED, noting that the information went beyond a list of interns or junior staffers. This is what made the finding so concerning, the researcher says, because they fear that if the data had fallen into the wrong hands—perhaps those of a hostile state or malicious hackers—it could have been used to compromise government or military staffers who have access to potentially sensitive information. “From the perspective of a foreign adversary, that is a gold mine of who you want to target,” the security researcher says.
WIRED reached out to the Office of the Chief Administrator and House Democrats for comment. Some staff members WIRED contacted were unavailable because they have been furloughed as a result of the ongoing US government shutdown.
“Today, our office was informed that an outside vendor potentially exposed information stored in an internal site,” Joy Lee, spokesperson for House Democratic whip Katherine Clark, told WIRED in a statement on October 22. DomeWatch is under the purview of Clark’s office. “We immediately alerted the Office of the Chief Administration Officer, and a full investigation has been launched to identify and rectify any security vulnerabilities.” Lee added that the outside vendor is “an independent consultant who helps with the backend” of DomeWatch.
Most AI assistants save a complete record of your conversations, making them easily visible to anyone with access to your devices. Those conversations are also stored online, often indefinitely, so they could be exposed due to bugs or security breaches. In some cases, AI providers can even send your chats along to human reviewers.
All of this should give you pause, especially if you plan to share your innermost thoughts with AI tools or use them to process personal information. To better protect your privacy, consider making some tweaks to your settings, using private conversation modes, or even turning to AI assistants that protect your privacy by default.
To help make sense of the options, I looked through all the privacy settings and policies of every major AI assistant. Here’s what you need to know about what they do with your data, and what you can do about it:
ChatGPT
By default: ChatGPT uses your data to train AI, and warns that its “training data may incidentally include personal information.” Can humans review your chats? OpenAI’s ChatGPT FAQ says it may “review conversations” to improve its systems. The company also says it now scans conversations for threats of imminent physical harm, submitting them to human reviewers and possibly reporting them to law enforcement. Can you disable AI training? Yes. Go to Settings > Data controls > Improve the model for everyone. Is there a private chat mode? Yes. Click “Turn on temporary chat” in the top-right corner to keep a chat out of your history and avoid having it used to train AI. Can you share chats with others? Yes, by generating a shareable link. (OpenAI launched, then removed, a feature that let search engines index shared chats.) Are your chats used for targeted ads? OpenAI’s privacy policy says it does not sell or share personal data for contextual behavioral advertising, doesn’t process data for targeted ads, and doesn’t process sensitive personal data to infer characteristics about consumers. How long does it keep your data? Up to 30 days for temporary and deleted chats, though even some of those may be kept longer for “security and legal obligations.” All other data is stored indefinitely.
Google Gemini
By default: Gemini uses your data to train AI. Can humans review your chats? Yes. Google says not to enter “any data you wouldn’t want a reviewer to see.” Once a reviewer sees your data, Google keeps it for up to three years—even if you delete your chat history. Can you disable AI training? Yes. Go to myactivity.google.com/product/gemini, click the “Turn off” drop-down menu, then select either “Turn off” or “Turn off and delete activity.” Is there a private chat mode? Yes. In the left sidebar, hit the chat bubble with dashed lines next to the “New chat” button. (Alternatively, disabling Gemini Apps Activity will hide your chat history from the sidebar, but re-enabling it without deleting past data will bring your history back.) Can you share chats with others? Yes, by generating a shareable link. Are your chats used for targeted ads? Google says it doesn’t use Gemini chats to show you ads, but the company’s privacy policy allows for it. Google says it will communicate any changes it makes to this policy. How long does it keep your data? Indefinitely, unless you turn on auto-deletion in Gemini Apps Activity.
Anthropic Claude
By default: From September 28 onward, Anthropic will use conversations to train AI unless you opt out. Can humans review your chats? No, though Anthropic reviews conversations flagged as violating its usage policies. Can you disable AI training? Yes. Head to Settings > Privacy and disable “Help improve Claude.” Is there a private chat mode? No. You must delete past conversations manually to hide them from your history. Can you share chats with others? Yes, by generating a shareable link. Are your chats used for targeted ads? Anthropic doesn’t use conversations for targeted ads. How long does it keep your data? Up to two years, or seven years for prompts flagged for trust and safety violations.
Microsoft Copilot
By default: Microsoft uses your data to train AI. Can humans review your chats? Yes. Microsoft’s privacy policy says it uses “both automated and manual (human) methods of processing” personal data. Can you disable AI training? Yes, though the option is buried. Click your profile image > your name > Privacy and disable “Model training on text.” Is there a private chat mode? No. You must delete chats one by one or clear your history from Microsoft’s account page. Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat. Are your chats used for targeted ads? Microsoft uses your data for targeted ads and has discussed integrating ads with AI. You can disable this by clicking your profile image > your name > Privacy and disabling “Personalization and memory.” A separate link disables all personalized ads for your Microsoft account. How long does it keep your data? Data is stored for 18 months, unless you delete it manually.
xAI Grok
By default: Uses your data to train AI. Can humans review your chats? Yes. Grok’s FAQ says a “limited number” of “authorized personnel” may review conversations for quality or safety. Can you disable AI training? Yes. Click your profile image and go to Settings > Data Controls, then disable “Improve the Model.” Is there a private chat mode? Yes. Click the “Private” button at the top right to keep a chat out of your history and avoid having it used to train AI. Can you share chats with others? Yes, by generating a shareable link. Note that shared links can’t be unshared without deleting the chat. Are your chats used for targeted ads? Grok’s privacy policy says it does not sell or share information for targeted ad purposes. How long does it keep your data? Private Chats and even deleted conversations are stored for 30 days. All other data is stored indefinitely.
Meta AI
By default: Uses your data to train AI. Can humans review your chats? Yes. Meta’s privacy policy says it uses manual review to “understand and enable creation” of AI content. Can you disable AI training? Not directly. U.S. users can fill out this form. Users in the EU and U.K. can exercise their right to object. Is there a private chat mode? No. Can you share chats with others? Yes. Shared links automatically appear in a public feed and can show up in other Meta apps as well. Are your chats used for targeted ads? Meta’s privacy policy says it targets ads based on the information it collects, including interactions with AI. How long does it keep your data? Indefinitely.
Perplexity
By default: Uses your data to train AI. Can humans review your chats? Perplexity’s privacy policy does not mention human review. Can you disable AI training? Yes. Go to Account > Preferences and disable “AI data retention.” Is there a private chat mode? Yes. Click your profile icon, then select “Incognito” under your account name. Can you share chats with others? Yes, by generating a shareable link. Are your chats used for targeted ads? Yes. Perplexity says it may share your information with third-party advertising partners and may collect from other sources (for instance, data brokers) to improve its ad targeting. How long does it keep your data? Until you delete your account.
Duck.AI
By default: Duck.AI doesn’t use your data to train AI, thanks to deals with major providers. Can humans review your chats? No. Can you disable AI training? Not applicable. Is there a private chat mode? No. You must delete previous chats individually or all at once through the sidebar. Can you share chats with others? No. Are your chats used for targeted ads? No. How long does it keep your data? Model providers keep anonymized data for up to 30 days, unless needed for legal or safety reasons.
Proton Lumo
By default: Proton Lumo doesn’t use your data to train AI. Can humans review your chats? No. Can you disable AI training? Not applicable. Is there a private chat mode? Yes. Click the glasses icon at the top right. Can you share chats with others? No. Are your chats used for targeted ads? No. How long does it keep your data? Proton does not store logs of your chats.
By Jared Newman
This article originally appeared in Inc.’s sister publication, Fast Company.
If you’ve ever been rejected for a job by an algorithm, denied an apartment by a software program, or had your health coverage questioned by an automated system, California just voted to change the rules of the game. On July 24, 2025, the California Privacy Protection Agency (CPPA) voted to finalize one of the most consequential privacy rulemakings in U.S. history. The new regulations—covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT)—are the product of nearly a year of public comment, political pressure, and industry lobbying.
They represent the most ambitious expansion of U.S. privacy regulation since voters approved the California Privacy Rights Act (CPRA) in 2020 and its provisions took effect in 2023, adding for the first time binding obligations around automated decision-making, cybersecurity audits, and ongoing risk assessments.
How We Got Here: A Contentious Rulemaking
The CPPA formally launched the rulemaking process in November 2024. At stake was how California would regulate technologies often grouped under the umbrella term “AI.” The CPPA opted to focus narrowly on automated decision-making technology (ADMT) rather than attempting to define AI in general. This move generated both relief and frustration among stakeholders. The groups weighing in ranged from Silicon Valley giants to labor unions and gig workers, reflecting the many corners of the economy that automated decision-making touches.
Early drafts had explicitly mentioned “artificial intelligence” and “behavioral advertising.” By the time the final rules were adopted, those references were stripped out. Regulators stated that they sought to avoid ambiguity and not encompass too many technologies. Critics said the changes weakened the rules.
The comment period drew over 575 pages of submissions from more than 70 organizations and individuals, including tech companies, civil society groups, labor advocates, and government officials. Gig workers described being arbitrarily deactivated by opaque algorithms. Labor unions argued the rules should have gone further to protect employees from automated monitoring. On the other side, banks, insurers, and tech firms warned that the regulations created duplicative obligations and legal uncertainty.
The CPPA staff defended the final draft as one that “strikes an appropriate balance,” while acknowledging the need to revisit these rules as technology and business practices evolve. After the July 24 vote, the agency formally submitted the package to the Office of Administrative Law, which has 30 business days to review it for procedural compliance before the rules take effect.
At today’s meeting, the CPPA Board unanimously voted to adopt a proposed rulemaking package on ADMT, cybersecurity audits, risk assessments, insurance, and CCPA updates. Now, the proposed regulations will be filed with the Office of Administrative Law. pic.twitter.com/A8IB38E66l
— California Privacy Protection Agency (@CalPrivacy) July 24, 2025
Automated Decision-Making Technology (ADMT): Redefining AI Oversight
The centerpiece of the regulations is the framework for ADMT. The rules define ADMT as “any technology that processes personal information and uses computation to replace human decisionmaking, or substantially replace human decisionmaking.”
The CPPA applies these standards to what it calls “significant decisions”: choices that determine whether someone gets a job or contract, qualifies for a loan, secures housing, is admitted to a school, or receives healthcare. In practice, that means résumé-screening algorithms, tenant-screening apps, loan approval software, and healthcare eligibility tools all fall within the law’s scope.
Companies deploying ADMT for significant decisions will face several new obligations. They must provide plain-language pre-use notices so consumers understand when and how automated systems are being applied. Individuals must also be given the right to opt out or, at minimum, appeal outcomes to a qualified human reviewer with real authority to reverse the decision. Businesses are further required to conduct detailed risk assessments, documenting the data inputs, system logic, safeguards, and potential impacts. In short, if an algorithm decides whether you get hired, approved for a loan, or accepted into housing, the company has to tell you up front, offer a meaningful appeal, and prove that the system isn’t doing more harm than good. Liability also cannot be outsourced: responsibility stays with the business itself, and firms remain accountable even when they rely on third-party vendors.
Some tools are excluded—like firewalls, anti-malware, calculators, and spreadsheets—unless they are actually used to make the decision. Additionally, the CPPA tightened what counts as “meaningful human review.” Reviewers must be able to interpret the system’s output, weigh other relevant information, and have genuine authority to overturn the result.
Compliance begins on January 1, 2027.
Cybersecurity Audits: Scaling Expectations
Another pillar of the new rules is the requirement for annual cybersecurity audits. For the first time under state law, companies must undergo independent assessments of their security controls.
The audit requirement applies broadly to larger data-driven businesses. It covers companies with annual gross revenue exceeding $26.6 million that process the personal information of more than 250,000 Californians, as well as firms that derive half or more of their revenue from selling or sharing personal data.
Audits must be conducted by independent professionals who cannot report to a Chief Information Security Officer (CISO) or other executives directly responsible for cybersecurity to ensure objectivity.
The audits cover a comprehensive list of controls, from encryption and multifactor authentication to patch management and employee training, and must be certified annually to the CPPA or Attorney General if requested.
Deadlines are staggered:
April 1, 2028: $100M+ businesses
April 1, 2029: $50–100M businesses
April 1, 2030: <$50M businesses
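For a concrete sense of who is covered and when, the sketch below encodes the thresholds and staggered deadlines described above as simple checks. It is an illustrative simplification of the rules as summarized in this article, not an official compliance tool; the function and constant names are invented for the example.

```python
from datetime import date

# Illustrative sketch of the audit triggers and staggered deadlines
# described above; not an official CPPA compliance tool.

REVENUE_TRIGGER = 26_600_000     # annual gross revenue threshold
RECORDS_TRIGGER = 250_000        # Californians whose personal info is processed

def audit_required(annual_revenue: float,
                   californians_processed: int,
                   share_of_revenue_from_data_sales: float) -> bool:
    """True if either trigger from the article's summary applies."""
    large_data_processor = (annual_revenue > REVENUE_TRIGGER
                            and californians_processed > RECORDS_TRIGGER)
    data_sales_heavy = share_of_revenue_from_data_sales >= 0.5
    return large_data_processor or data_sales_heavy

def first_audit_due(annual_revenue: float) -> date:
    """Staggered first-certification deadlines by revenue band."""
    if annual_revenue >= 100_000_000:
        return date(2028, 4, 1)
    if annual_revenue >= 50_000_000:
        return date(2029, 4, 1)
    return date(2030, 4, 1)

# Hypothetical example: a $120M company processing data on 400,000 Californians
print(audit_required(120e6, 400_000, 0.10))  # True
print(first_audit_due(120e6))                # 2028-04-01
```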
By codifying this framework and embedding these requirements into law, California is effectively setting a de facto national cybersecurity baseline: one that may exceed federal NIST standards and ripple into vendor contracts nationwide. For businesses, these audits won’t just be about checking boxes: they could become the new cost of entry for doing business in California. Because companies can’t wall off California users from the rest of their customer base, these standards are likely to spread nationally through vendor contracts and compliance frameworks.
Privacy Risk Assessments: Accountability in High-Risk Processing
The regulations also introduce mandatory privacy risk assessments, required annually for companies engaged in high-risk processing.
Triggering activities include:
Selling or sharing personal information
Processing sensitive personal data (including neural data, newly classified as sensitive)
Deploying ADMT for significant decisions
Profiling workers or students
Training ADMT on personal data
Each assessment must document categories of personal information processed, explain the purpose and benefits, identify potential harms and safeguards, and be submitted annually to the CPPA starting April 21, 2028, with attestations under penalty of perjury (a high-stakes accountability mechanism). This clause is designed to prevent “paper compliance.” By requiring executives to sign off under penalty of perjury, California is telling companies this isn’t paperwork. Leaders will be personally accountable if their systems mishandle sensitive data. Unlike voluntary risk assessments, California’s system ties accountability directly to the personal liability of signatories.
Other Notable Provisions
Beyond these headline rules, the CPPA also addressed sector-specific issues and tied in earlier reforms. For the insurance industry, the regulations clarify how the CCPA applies to companies that routinely handle sensitive personal and health data—an area where compliance expectations were often unclear. The rules also fold in California’s Delete Act, which takes effect on August 1, 2026. That law will give consumers a single, one-step mechanism to request deletion of their personal information across all registered data brokers, closing a major loophole in the data marketplace and complementing the broader CCPA framework. Together, these measures reinforce California’s role as a privacy trendsetter, creating tools that other states are likely to copy as consumers demand similar rights.
Implications for California
California has long served as the nation’s privacy laboratory, pioneering protections that often ripple across the country. This framework places California among the first U.S. jurisdictions to regulate algorithmic governance. With these rules, the state positions itself alongside the EU AI Act and the Colorado AI Act, creating one of the world’s most demanding compliance regimes.
However, the rules also set up potential conflict with the federal government. America’s AI Action Plan, issued earlier this year, emphasizes innovation over regulation and warns that restrictive state-level rules could jeopardize federal AI funding decisions. This tension may play out in future policy disputes.
For California businesses, the impact is immediate. Companies must begin preparing governance frameworks, reviewing vendor contracts, and updating consumer-facing disclosures now. These compliance efforts build on earlier developments in California privacy law, including the creation of a dedicated Privacy Law Specialization for attorneys. This specialization will certify legal experts equipped to navigate the state’s intricate web of statutes and regulations, from ADMT disclosures to phased cybersecurity audits. Compliance will be expensive, but it will also drive demand for new privacy officers, auditors, and legal specialists. Mid-sized firms may struggle, while larger companies may gain an edge by showing early compliance. For businesses outside California, the ripple effects may be unavoidable because national companies will have to standardize around the state’s higher bar.
The CPPA’s finalized regulations mark a structural turning point in U.S. privacy and AI governance. Obligations begin as early as 2026 and accelerate through 2027–2030, giving businesses a narrow window to adapt. For consumers, the rules promise greater transparency and the right to challenge opaque algorithms. For businesses, they establish California as the toughest compliance environment in the country, forcing firms to rethink how they handle sensitive data, automate decisions, and manage cybersecurity. California is once again setting the tone for global debates on privacy, cybersecurity, and AI. Companies that fail to keep pace will not only face regulatory risk but could also lose consumer trust in the world’s fifth-largest economy. Just as California’s auto emissions standards reshaped national car design, its privacy rules are likely to shape national policy on data and AI. Other states will borrow from California, and Washington will eventually have to decide whether to match it or rein it in.
What starts in Sacramento rarely stays there. From Los Angeles to Silicon Valley, California just set the blueprint for America’s data and AI future.
Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.
No matter who you are and what your field is, data is everything these days. From critical business documents to precious family memories, your files deserve the best protection and accessibility. That’s why this offer for lifetime access to 1TB of Koofr Cloud Storage is something every business owner (or avid family photographer) should consider.
For a limited time, you can get an extra $40 off with code Koofr at checkout and pay the one-time price of just $119.97 (reg. $810). Not sure what 1TB means in reality? It translates to around 200,000 pictures or a million documents. And unlike other cloud storage services that require ongoing payments, Koofr offers a lifetime subscription for this one price.
In an era where data privacy is a growing concern, Koofr offers a refreshing approach. It says it is the only cloud storage provider that does not track user activities, giving you peace of mind that your data and actions remain private. For privacy-conscious individuals, this is a significant advantage that sets Koofr apart from other cloud storage providers.
This cloud storage solution also goes beyond traditional service by allowing users to connect and access files from existing cloud accounts like Dropbox, Google Drive, Amazon, and OneDrive. This integration provides centralized access to all your files across multiple platforms, making it easier to manage your data from one convenient location.
Koofr has earned a stellar reputation for reliability and performance. With 4.3/5 stars on Trustpilot, you can trust Koofr to protect your valuable data.
London — The European Union said Friday that blue checkmarks from Elon Musk’s X are deceptive and that the online platform falls short on transparency and accountability requirements, in the first charges against a tech company since the bloc’s new social media regulations took effect.
The European Commission outlined the preliminary findings from its investigation into X, formerly known as Twitter, under the 27-nation bloc’s Digital Services Act.
The rulebook, also known as the DSA, is a sweeping set of regulations that requires platforms to take more responsibility for protecting their European users and cleaning up harmful or illegal content and products on their sites, under threat of hefty fines.
Regulators took aim at X’s blue checks, saying they constitute “dark patterns” that are not in line with industry best practice and can be used by malicious actors to deceive users.
Before Musk’s acquisition, the checkmarks mirrored verification badges common on social media and were largely reserved for celebrities, politicians and other influential accounts. After Musk bought the site in 2022, it started issuing them to anyone who paid $8 per month for one.
“Since anyone can subscribe to obtain such a ‘verified’ status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with,” the commission said.
An email request for comment to X resulted in an automated response that said “Busy now, please check back later.” Its main spokesman reportedly left the company in June.
“Back in the day, BlueChecks used to mean trustworthy sources of information,” European Commissioner Thierry Breton said in a statement. “Now with X, our preliminary view is that they deceive users and infringe the DSA.”
The commission also charged X with failing to comply with ad transparency rules. Under the DSA, platforms must publish a database of all digital advertisements that they’ve carried, with details such as who paid for them and the intended audience.
But X’s ad database isn’t “searchable and reliable” and has “design features and access barriers” that make it “unfit for its transparency purpose,” the commission said. The database’s design in particular hinders researchers from looking into “emerging risks” from online ads, it said.
The company also falls short when it comes to giving researchers access to public data, the commission said. The DSA imposes the provisions so that researchers can scrutinize how platforms work and how online risks evolve.
But researchers can’t independently access data by scraping it from the site, while the process to request access from the company through an interface “appears to dissuade researchers” from carrying out their projects or gives them no choice but to pay high fees, it said.
X now has a chance to respond to the accusations and make changes to comply, which would be legally binding. If the commission isn’t satisfied, it can levy penalties worth up to 6% of the company’s annual global revenue and order it to fix the problem.
The findings are only a part of the investigation. Regulators are still looking into whether X is failing to do enough to curb the spread of illegal content — such as hate speech or incitement of terrorism — and the effectiveness of measures to combat “information manipulation,” especially through its crowd-sourced Community Notes fact-checking feature.
TikTok, e-commerce site AliExpress and Facebook and Instagram owner Meta Platforms are also facing ongoing DSA investigations.
While everyone’s favorite acronym—AI—seems to be the hottest topic as #CoSN2024 kicks off here in Miami, Ashley May, M.S., M.Ed., CETL, Director of Educational Technology at Spring Branch ISD (TX), reminds us what is truly the most urgent and present concern for all edtech leaders. Security—whether online or in person—is always issue number one.
eSchool News was able to interview Ashley about various aspects of ensuring student safety online, where she emphasizes the importance of collaboration between technology services and academic teams when it comes to data privacy, culture building, parental involvement, and the evolving landscape of educational technology. Have a listen:
Hello, and welcome back to Equity, a podcast about the business of startups, where we unpack the numbers and nuance behind the headlines. This is our Monday show, where we dig into the weekend and take a peek at the week that is to come.
Now that we are finally past Y Combinator’s demo day — though our Friday show is worth listening to if you haven’t had a chance yet — we can dive into the latest news. So, this morning on Equity Monday we got into the chance that the United States might pass a real data privacy law. There’s movement to report, but we’re still very, very far from anything becoming law.
Oh, and on the crypto front, I forgot to mention that trading volume of digital tokens seems to have partially arrested its free fall, which should help some exchanges breathe a bit more easily.
Equity is TechCrunch’s flagship podcast and posts every Monday, Wednesday and Friday, and you can subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.
You also can follow Equity on X and Threads, at @EquityPod.
For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes over at Simplecast.
Congress may be closer than ever to passing a comprehensive data privacy framework after key House and Senate committee leaders released a new proposal on Sunday.
The bipartisan proposal, titled the American Privacy Rights Act, or APRA, would limit the types of consumer data companies can collect, retain, and use to what they need to operate their services. Users would also be allowed to opt-out of targeted advertising and have the ability to view, correct, delete, and download their data from online services. The proposal would also create a national registry of data brokers, and force those companies to allow users to opt out of having their data sold.
“This landmark legislation gives Americans the right to control where their information goes and who can sell it,” Cathy McMorris Rodgers, House Energy and Commerce Committee chair, said in a statement on Sunday. “It reins in Big Tech by prohibiting them from tracking, predicting, and manipulating people’s behaviors for profit without their knowledge and consent. Americans overwhelmingly want these rights, and they are looking to us, their elected representatives, to act.”
Congress has tried to put together a comprehensive federal law protecting user data for decades. Lawmakers have remained divided, though, on whether that legislation should prevent states from issuing tougher rules, and whether to allow a “private right of action” that would enable people to sue companies in response to privacy violations.
In an interview with the Spokesman Review on Sunday, McMorris Rodgers claimed that the draft’s language is stronger than any active laws, seemingly as an attempt to assuage the concerns of Democrats who have long fought attempts to preempt preexisting state-level protections. APRA does allow states to pass their own privacy laws related to civil rights and consumer protections, among other exceptions.
In the previous session of Congress, the leaders of the House Energy and Commerce Committee brokered a deal with Roger Wicker, the top Republican on the Senate Commerce Committee, on a bill that would preempt state laws with the exception of the California Consumer Privacy Act and the Biometric Information Privacy Act of Illinois. That measure, titled the American Data Privacy and Protection Act, also created a weaker private right of action than most Democrats were willing to support. Sen. Maria Cantwell, who chairs the Senate Commerce Committee, refused to support the measure, instead circulating her own draft legislation. The ADPPA hasn’t been reintroduced, but APRA was designed as a compromise.
“I think we have threaded a very important needle here,” Cantwell told the Spokesman Review. “We are preserving those standards that California and Illinois and Washington have.”
APRA includes language from California’s landmark privacy law allowing people to sue companies when they are harmed by a data breach. It also provides the Federal Trade Commission, state attorneys general, and private citizens the authority to sue companies when they violate the law.
The categories of data that would be impacted by the APRA include certain categories of “information that identifies or is linked or reasonably linkable to an individual or device,” according to a Senate Commerce Committee summary of the legislation. Small businesses—those with $40 million or less in annual revenue and limited data collection—would be exempt under APRA, with enforcement focused on businesses with $250 million or more in yearly revenue. Governments and “entities working on behalf of governments” are excluded under the bill, as are the National Center for Missing and Exploited Children and, apart from certain cybersecurity provisions, “fraud-fighting” nonprofits.
US representative Frank Pallone, the top Democrat on the House Energy and Commerce Committee, called the draft “very strong” in a Sunday statement, but said he wanted to “strengthen” it with tighter child safety provisions.
Still, it remains unclear whether APRA will receive the necessary support for approval. On Sunday, committee aides said that conversations about other lawmakers signing onto the legislation are ongoing. The current proposal is a “discussion draft”; while there’s no official date for introducing a bill, Cantwell and McMorris Rodgers will likely shop the text around to colleagues for feedback over the coming weeks, and plan to send it to committees this month.
Google has agreed to delete “billions of data records” the company collected while users browsed the web using Incognito mode, according to documents filed in federal court in San Francisco on Monday. The agreement, part of a settlement in a class action lawsuit filed in 2020, caps off years of disclosures about Google’s practices that shed light on how much data the tech giant siphons from its users—even when they’re in private-browsing mode.
Under the terms of the settlement, Google must further update the Incognito mode “splash page” that appears anytime you open an Incognito mode Chrome window, which it already updated once in January. The Incognito splash page will explicitly state that Google collects data from third-party websites “regardless of which browsing or browser mode you use,” and stipulate that “third-party sites and apps that integrate our services may still share information with Google,” among other changes. Details about Google’s private-browsing data collection must also appear in the company’s privacy policy.
Additionally, some of the data that Google previously collected on Incognito users will be deleted. This includes “private-browsing data” that is “older than nine months” from the date that Google signed the term sheet of the settlement last December, as well as private-browsing data collected throughout December 2023. Certain documents in the case referring to Google’s data collection methods remain sealed, however, making it difficult to assess how thorough the deletion process will be.
Google spokesperson Jose Castaneda says in a statement that the company “is happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.” Castaneda also notes that the company will now pay “zero” dollars as part of the settlement after earlier facing a $5 billion penalty.
Other steps Google must take will include continuing to “block third-party cookies within Incognito mode for five years,” partially redacting IP addresses to prevent re-identification of anonymized user data, and removing certain header information that can currently be used to identify users with Incognito mode active.
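To make the IP-redaction step more concrete, here is a minimal sketch, in Python, of what partial redaction can look like in practice: keeping only the network portion of an address so it no longer pinpoints an individual device. The prefix lengths and the redact_ip helper below are illustrative assumptions, not a description of Google’s actual implementation, which has not been made public.

```python
# A minimal sketch of partial IP redaction (hypothetical; not Google's
# actual method). The idea: drop the low-order bits of an address so it
# identifies a network neighborhood rather than a single device.
import ipaddress

def redact_ip(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        # Keep the /24 network and zero out the host octet.
        network = ipaddress.ip_network(f"{addr}/24", strict=False)
    else:
        # Keep the /48 prefix and zero out the interface identifier.
        network = ipaddress.ip_network(f"{addr}/48", strict=False)
    return str(network.network_address)

print(redact_ip("203.0.113.77"))         # 203.0.113.0
print(redact_ip("2001:db8:abcd:12::1"))  # 2001:db8:abcd::
```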
The data-deletion portion of the settlement agreement follows preemptive changes to Google’s Incognito mode data collection and the ways it describes what Incognito mode does. For nearly four years, Google has been phasing out third-party cookies, which the company says it plans to completely block by the end of 2024. Google also updated Chrome’s Incognito mode “splash page” in January with weaker language to signify that using Incognito is not “private,” but merely “more private” than not using it.
The settlement’s relief is strictly “injunctive,” meaning its central purpose is to put an end to Google activities that the plaintiffs claim are unlawful. The settlement does not rule out any future claims—The Wall Street Journal reported that the plaintiffs’ attorneys had filed at least 50 such lawsuits in California on Monday—though the plaintiffs note that monetary relief in privacy cases is far more difficult to obtain. The important thing, the plaintiffs’ lawyers argue, is effecting changes at Google now that will provide the greatest immediate benefit to the largest number of users.
Critics of Incognito, a staple of the Chrome browser since 2008, say that, at best, the protections it offers fall flat in the face of the sophisticated commercial surveillance bearing down on most users today; at worst, they say, the feature fills people with a false sense of security, helping companies like Google passively monitor millions of users who’ve been duped into thinking they’re browsing alone.
The argument is one that some Apple critics have made for years, as spelled out in an essay in January by Cory Doctorow, the science fiction writer, tech critic, and co-author of Chokepoint Capitalism. “The instant an Android user is added to a chat or group chat, the entire conversation flips to SMS, an insecure, trivially hacked privacy nightmare that debuted 38 years ago—the year Wayne’s World had its first cinematic run,” Doctorow writes. “Apple’s answer to this is grimly hilarious. The company’s position is that if you want to have real security in your communications, you should buy your friends iPhones.”
In a statement to WIRED, Apple says it designs its products to “work seamlessly together, protect people’s privacy and security, and create a magical experience for our users,” and adds that the DOJ lawsuit “threatens who we are and the principles that set Apple products apart” in the marketplace. The company also says it hasn’t released an Android version of iMessage because it couldn’t ensure that third parties would implement it in ways that met the company’s standards.
“If successful, [the lawsuit] would hinder our ability to create the kind of technology people expect from Apple—where hardware, software, and services intersect,” the statement continues. “It would also set a dangerous precedent, empowering government to take a heavy hand in designing people’s technology. We believe this lawsuit is wrong on the facts and the law, and we will vigorously defend against it.”
Apple has, in fact, not only declined to build iMessage clients for Android or other non-Apple devices, but actively fought against those who have. Last year, a service called Beeper launched with the promise of bringing iMessage to Android users. Apple responded by tweaking its iMessage service to break Beeper’s functionality, and the startup called it quits in December.
Apple argued in that case that Beeper had harmed users’ security—in fact, it did compromise iMessage’s end-to-end encryption by decrypting and then re-encrypting messages on a Beeper server, though Beeper had vowed to change that in future updates. Beeper cofounder Eric Migicovsky argued that Apple’s heavy-handed move to reduce Apple-to-Android texts to traditional text messaging was hardly a more secure alternative.
“It’s kind of crazy that we’re now in 2024 and there still isn’t an easy, encrypted, high-quality way for something as simple as a text between an iPhone and an Android,” Migicovsky told WIRED in January. “I think Apple reacted in a really awkward, weird way—arguing that Beeper Mini threatened the security and privacy of iMessage users, when in reality, the truth is the exact opposite.”
Even as Apple has faced accusations of hoarding iMessage’s security properties to the detriment of smartphone owners worldwide, it’s only continued to improve those features: In February it upgraded iMessage to use new cryptographic algorithms designed to be immune to quantum codebreaking, and last October it added Contact Key Verification, a feature designed to prevent man-in-the-middle attacks that spoof intended contacts to intercept messages. Perhaps more importantly, it’s vowed to adopt the RCS standard to allow for improvements in messaging with Android users—although the company did not say whether those improvements would include end-to-end encryption.
Everywhere you go online, you’re being tracked. Almost every time you visit a website, trackers gather data about your browsing and funnel it back into targeted advertising systems, which build up detailed profiles about your interests and make big profits in the process. In some places, you’re tracked more than others.
In a little-noticed change at the end of last year, thousands of websites started being more transparent about how many companies your data is being shared with. In November, those infuriating cookie pop-ups—which ask your permission to collect and share data—began sharing how many advertising “partners” each website is working with, giving a further glimpse of the sprawling advertising ecosystem. For many sites, it’s not pretty.
A WIRED analysis of the top 10,000 most popular websites shows dozens of sites say they are sharing data with more than 1,000 companies, while thousands of other websites are sharing data with hundreds of firms. Quiz and puzzle website JetPunk tops the pile, listing 1,809 “partners” that may collect personal information, including “browsing behavior or unique IDs.”
More than 20 websites from publisher Dotdash Meredith—including investopedia.com, people.com, and allrecipes.com—all say they can share data with 1,609 partners. The Daily Mail newspaper lists 1,207 partners, while internet speed monitoring firm Speedtest.net, online medical publisher WebMD, and media outlets Reuters, ESPN, and BuzzFeed all state they can share data with 809 companies. (WIRED, for context, lists 164 partners.) These hundreds of advertising partners include dozens of firms most people have likely never heard of.
“You can always assume all of them are first going to try and disambiguate who you are,” says Midas Nouwens, an associate professor at Aarhus University in Denmark, who has previously built tools to automatically opt out of tracking by cookie pop-ups and helped with the website analysis. The data collected can vary by website, and the cookie pop-ups allow some control over what can be gathered; however, the information can include IP addresses, fingerprinting of devices, and various identifiers. “Once they know that, they might add you to different data sets, or use it for enrichment later when you go to a different site,” Nouwens says.
The online advertising world is a messy, murky space, which can involve networks of companies building profiles of people with the aim of showing you tailored ads the second you open a webpage. For years, strong privacy laws in Europe, such as GDPR, have resulted in websites showing cookie consent pop-ups that ask for permission to store cookies that collect data on your device. In recent years, studies have shown that cookie pop-ups include dark patterns, disregard people’s choices, and are ignored by people. “Every single person we’ve ever observed in user testing doesn’t read any of this. They find the fastest way they can to close it out,” says Peter Dolanjski, a product director at privacy-focused search engine and browser DuckDuckGo. “So they end up in a worse privacy state.”
For the website analysis, Nouwens scraped the 10,000 most popular websites and analyzed whether the collected pop-ups mentioned partners and, if so, the number they disclosed. WIRED manually verified all the websites mentioned in this story, visiting each to confirm the number of partners they displayed. We looked at the highest total number of partners within the whole dataset, and the highest number of partners for the top 1,000 most popular websites. The process, which is only a snapshot of how websites share data, provides one view of the complex ecosystem. The results can vary depending on where in the world someone visits a website from.
It also only includes websites using just one system to display cookie pop-ups. Many of the world’s biggest websites—think Google, Facebook, and TikTok—use their own cookie pop-ups. However, thousands of websites, including publishers and retailers, use third-party technology, made by consent management platforms (CMPs), to show the pop-ups. These pop-ups largely follow standards from the marketing and advertising group IAB Europe, which details the information that should be included in the cookie pop-ups.
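As a rough illustration of the kind of check such an analysis depends on, the sketch below scans the text of a consent pop-up for a disclosed partner count. The regular expression, sample strings, and partner_count helper are hypothetical stand-ins; they are not the scraper Nouwens actually built.

```python
# Hypothetical sketch: extract the number of advertising "partners" a
# consent pop-up discloses. The sample strings and regex are illustrative
# assumptions, not the real analysis pipeline.
import re

PARTNER_PATTERN = re.compile(
    r"\b(\d{1,3}(?:,\d{3})*|\d+)\s+partners?\b", re.IGNORECASE
)

def partner_count(popup_text):
    """Return the largest partner count mentioned in the pop-up text, if any."""
    matches = PARTNER_PATTERN.findall(popup_text)
    if not matches:
        return None
    return max(int(m.replace(",", "")) for m in matches)

samples = [
    "We and our 1,809 partners store and/or access information on a device.",
    "Along with our 164 partners, we process personal data such as IP address.",
    "This site uses cookies to improve your experience.",  # no disclosure
]

for text in samples:
    print(partner_count(text))  # 1809, 164, None
```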
Reddit said ahead of its IPO next week that licensing user posts to Google and others for AI projects could bring in $203 million of revenue over the next few years. The community-driven platform was forced to disclose Friday that US regulators already have questions about that new line of business.
In a regulatory filing, Reddit said that it received a letter from the US Federal Trade Commission on Thursday asking about “our sale, licensing, or sharing of user-generated content with third parties to train AI models.”
The FTC, the US government’s primary consumer protection and antitrust regulator, has the power to sanction companies found to engage in unfair or deceptive trade practices. The idea of licensing user-generated content for AI projects has drawn questions from lawmakers and rights groups about privacy risks, fairness, and copyright.
Reddit isn’t alone in trying to make a buck off licensing data, including that generated by users, for AI. Programming Q&A site Stack Overflow has signed a deal with Google, the Associated Press has signed one with OpenAI, and Tumblr owner Automattic has said it is working “with select AI companies” but will allow users to opt out of having their data passed along. None of the licensors immediately responded to requests for comment. Reddit also isn’t the only company receiving an FTC letter about data licensing, Axios reported on Friday, citing an unnamed former agency official.
It’s unclear whether the letter to Reddit is directly related to any inquiry into other companies.
Reddit said in Friday’s disclosure that it does not believe that it engaged in any unfair or deceptive practices but warned that dealing with any government inquiry can be costly and time-consuming. “The letter indicated that the FTC staff was interested in meeting with us to learn more about our plans and that the FTC intended to request information and documents from us as its inquiry continues,” the filing says. Reddit said the FTC letter described the scrutiny as related to “a non-public inquiry.”
Reddit, whose 17 billion posts and comments are seen by AI experts as valuable for training chatbots in the art of conversation, announced a deal last month to license the content to Google. Reddit and Google did not immediately respond to requests for comment. The FTC declined to comment. (Advance Magazine Publishers, parent of WIRED’s publisher Condé Nast, owns a stake in Reddit.)
AI chatbots like OpenAI’s ChatGPT and Google’s Gemini are seen as a competitive threat to Reddit, publishers, and other ad-supported, content-driven businesses. In the past year the prospect of licensing data to AI developers emerged as a potential upside of generative AI for some companies.
But the use of data harvested online to train AI models has raised a number of questions winding through boardrooms, courtrooms, and Congress. For Reddit and others whose data is generated by users, those questions include who truly owns the content and whether it’s fair to license it out without giving the creator a cut. Security researchers have found that AI models can leak personal data included in the material used to create them. And some critics have suggested the deals could make powerful companies even more dominant.
US president Joe Biden will sign an executive order on Wednesday aimed at preventing a handful of countries, including China, North Korea, and Russia, from purchasing sensitive information about Americans through commercial data brokers in the United States.
Administration officials say categories of sensitive data, including personal identifiers, precise location information, and biometrics—vital tools for waging cyberattacks, espionage, and blackmail operations against the US—are being amassed by what the White House is calling “countries of concern.”
Biden administration officials disclosed the order to reporters in advance during a Zoom call on Tuesday and briefly took questions, on the condition that they not be named or referred to by job title.
The order will have few immediate effects, they said. The US Justice Department will instead launch a rulemaking process aimed at mapping out a “data security program” envisioned by the White House. The process affords experts, industry stakeholders, and the public at large an opportunity to chime in prior to the government adopting the proposal.
White House officials said the US Attorney General would consult with the heads of the Department of State and Department of Commerce to finalize a list of countries falling under the eye of the program. A tentative list given to reporters during Tuesday’s call, however, included China, Cuba, Iran, North Korea, Russia, and Venezuela.
The categories of information covered by the program will include health and financial data, precise geolocation information, and “certain sensitive government-related data,” among others, the officials said. The order will contain several carve-outs for certain financial transactions and activities that are “incidental” to ordinary business operations.
It’s unclear to what degree such a program would be effective. Notably, it does not extend to a majority of countries where trafficking in Americans’ private data will ostensibly remain legal. What’s more, it’s unclear whether the government has the authority or wherewithal (outside of an act of Congress) to restrict countries that, while diplomatically and militarily allied with the US, are also known to conduct espionage against it: close US ally Israel, for instance, was accused in 2019 of planting cell-phone-spying devices near the White House, and has served as an international marketplace for illicit spyware; or Saudi Arabia, which availed itself of that market in 2018 to covertly surveil a Washington Post contributor who was later abducted and murdered by a Saudi hit squad.
If China, Russia, or North Korea moves to obtain US data from a third party in one of the more than 170 countries not on the US government’s list, there may be little to prevent it. US data brokers need only take steps to ensure overseas customers follow “certain security requirements” during the transfer, many of which are already required by law.
The restrictions imposed by the executive order are meant to protect against “direct” and “indirect transfers of data,” officials said. But data brokers are on the hook merely until they obtain “some type of commitment” from overseas customers—an “understanding”—when it comes to the possibility of data being sold or transferred to others down the line.
Shou Zi Chew, a 41-year-old entrepreneur, was born on January 1, 1983, in Singapore.
Raised in an affluent family alongside his brothers, he received his education at Hwa Chong Institution before serving as a commissioned officer in the Singapore Army.
Following his military service, Chew relocated to London to complete his undergraduate studies in economics at University College London.
He further advanced his education by earning a Master of Business Administration from Harvard Business School, where he also completed a summer internship at Facebook.
Key Takeaways
Net Worth around USD 1.5 billion as of 2024.
CEO of TikTok, which is owned by parent company ByteDance, since May 2021.
Former Chief Financial Officer for Xiaomi Corporation.
Bachelor’s degree in economics from University College London.
Master of Business Administration from Harvard Business School.
Married to Vivian Kao, a successful businesswoman and CEO at Tamarind Global.
41 years old (Born on January 1, 1983).
Net Worth 2024
In May 2021, Shou Zi Chew was appointed CEO of TikTok, the short-video platform owned by ByteDance, which was founded by Zhang Yiming.
Before his tenure at ByteDance, he served as the Chief Financial Officer for Xiaomi Corporation and had a stint as an investment banker at Goldman Sachs.
As we enter 2024, his estimated net worth stands at approximately $1.5 billion USD according to Hot New HipHop.
While Chew has amassed an impressive fortune, it is important to note that he does not own ByteDance, the parent company of TikTok; the company was founded by Zhang Yiming, who remains its largest individual shareholder. As of 2023, Zhang’s net worth was estimated to be a staggering $43.4 billion, according to Forbes.
Professional Career
Shou began his professional journey after completing his education at University College London, starting his career in banking at the London office of Goldman Sachs, the American investment bank and financial services company, as noted by The Straits Times.
Before this role, he had already acquired a diverse set of experiences across various fields.
After a two-year stint at Goldman Sachs, Shou transitioned to the venture capital firm DST Global, where he led the team that made early investments in ByteDance.
He then joined Xiaomi, a Chinese multinational company specializing in the design and manufacturing of electronics, initially serving as its Chief Financial Officer.
By 2019, Shou had risen to the position of President of Xiaomi’s international division. In 2021, ByteDance appointed him as Chief Financial Officer.
However, after a brief period, he left the CFO position to become the CEO of TikTok, succeeding the American businessman Kevin Mayer.
Before joining Xiaomi, Shou Zi Chew was employed at DST, an investment firm founded by Yuri Milner, an Israeli-Russian billionaire in the tech industry as reported by The New York Times.
During his five-year tenure there, he managed a team that was among the initial investors in ByteDance, the parent company of TikTok.
In March 2021, Shou became the first person to hold the position of Chief Financial Officer at ByteDance. Shortly after, in May 2021, he was appointed Chief Executive Officer (CEO) of TikTok. The move drew considerable praise for Shou Zi Chew.
Zhang Yiming, ByteDance’s founder and former CEO, praised Shou for his extensive understanding of the company and the broader technology sector.
He highlighted Shou’s leadership of a team that was one of ByteDance’s earliest investors and his decade-long experience in tech.
Kevin Mayer, Chew’s predecessor, had joined TikTok from Walt Disney and left the role after just three months. It was reported that Mayer’s departure was influenced by pressure from American lawmakers concerned about the security implications of the app.
Career Milestones
From July 2006 to July 2008, worked at Goldman Sachs International.
Completed an internship at Facebook during its start-up phase.
Held the position of Director at Kingsoft Cloud Holdings Limited, a branch of Kingsoft Corporation Limited.
Became a partner at DST Investment Management Ltd in July 2015.
In 2019, took on the roles of Senior Vice President, Executive Director, and President of Global Business Groups at Xiaomi.
Wife and Family Life
Shou Zi Chew is married to Vivian Kao, a fellow Singaporean and successful businesswoman, with whom he shares two children. The couple maintains a low profile, opting for private settings on their Instagram and other social media accounts, reflecting their preference for privacy.
Their paths first crossed at Harvard University, where both were pursuing their MBAs. It was during this intense period of academic endeavor that they found love, even as they focused on achieving their educational goals.
Their initial connection was made via email in 2008, but it wasn’t until the following summer, when both were interning in California, that they truly got to know each other.
Vivian Kao holds the position of CEO at Tamarind Global, a company that specializes in financial services.
Tamarind Global is dedicated to managing the investment portfolio and philanthropic efforts of a prominent Singaporean family spanning the third and fourth generations.
The firm’s mission centers on ensuring long-term capital preservation and growth.
The family, now including Shou Zi Chew, Vivian Kao, their two children, and the family dog, has settled in Beijing, China. Before making Beijing their home, they traveled extensively, visiting places like London, Singapore, and Hong Kong according to Harvard Business School Alumni.
Both are also recognized for their philanthropy, being active donors and members of the Harvard Business School Fund Investors Society.
Congressional Testimony and Controversy
In March 2023, he faced a significant challenge when he was called to testify before the U.S. Congress regarding TikTok’s ties to China and the potential implications for national security, as reported by CNBC.
The Biden administration had taken a firm stance, proposing to ban TikTok unless its Chinese stakeholders divested their shares in the app.
This situation intensified the scrutiny on Chew, especially as the app was already banned on government devices in the U.S. and other nations, heightening suspicions about its operations.
Repeated Citizenship Queries
During a tense congressional testimony focused on online safety for children, Singaporean TikTok CEO Shou Zi Chew was persistently questioned by Sen. Tom Cotton about his citizenship and potential affiliations with the Chinese Communist Party according to Business Insider.
Despite Chew’s repeated clarifications of his Singaporean nationality, the line of questioning continued, touching on his past, present, and future citizenship, his family’s American citizenship, and his connections to the Chinese Communist Party.
This interrogation took place amidst a broader, combative hearing with the CEOs of four other social media companies—X, Meta, Snap, and Discord—all scrutinized for their platforms’ safety measures for children.
The intense focus on Chew and TikTok’s parent company, ByteDance, reflects ongoing concerns over Chinese government influence and data misuse, amidst a backdrop of anti-Asian rhetoric that conflates Chinese ancestry with the actions of the Chinese Communist Party.
Philanthropy Initiatives
Shou Zi Chew, the successful CEO of TikTok, is not only focused on the growth of his company but is also actively involved in philanthropy. Under his leadership, TikTok has contributed to numerous social causes, and Chew himself has engaged in personal charitable activities.
One of TikTok’s notable initiatives is the Creativity for Good program, which encourages users to demonstrate their creativity in addressing societal issues. This program has led to numerous innovative ideas and campaigns designed to raise awareness and funds for various non-profit organizations.
Shou Zi Chew’s commitment to education is also evident in his support for underprivileged students. He has personally donated to several scholarship programs, providing financial assistance for bright students from low-income families to pursue their higher education. This effort underlines Chew’s dedication to narrowing the education gap and improving access to quality education for all.
FAQ
What is ByteDance’s business model and how does it contribute to TikTok’s success?
ByteDance operates on a content platform model that leverages advanced AI algorithms to personalize and recommend content to users. This model is central to TikTok’s success, driving user engagement and growth by delivering tailored video content that matches individual interests and behaviors.
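As a toy illustration of what interest-based recommendation means at its simplest, the sketch below scores candidate videos by the similarity between a user’s interest vector and each video’s topic vector. All of the data and the recommend helper are invented for illustration; TikTok’s production recommendation system is far more complex and is not public.

```python
# Toy sketch of interest-based content ranking, assuming a simple
# vector-similarity model. All data and the recommend() helper are
# hypothetical; this is not TikTok's actual recommendation system.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(user_interests, videos, k=2):
    """Rank videos by similarity between user interests and video topics."""
    scored = [(cosine(user_interests, topics), title) for title, topics in videos.items()]
    return [title for _, title in sorted(scored, reverse=True)[:k]]

# Interest dimensions: [cooking, sports, music]
user = [0.9, 0.1, 0.4]
videos = {
    "5-minute pasta": [1.0, 0.0, 0.1],
    "Slam dunk reel": [0.0, 1.0, 0.2],
    "Guitar cover":   [0.1, 0.0, 1.0],
}
print(recommend(user, videos))  # ['5-minute pasta', 'Guitar cover']
```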
How does TikTok address data privacy concerns, especially in Western markets?
As of 2024, TikTok has implemented several measures to address data privacy concerns, including establishing a transparency center, undergoing third-party audits, and storing US user data on servers located in the United States to mitigate risks related to data privacy and governmental concerns.
What measures has Shou Zi Chew taken to address regulatory challenges faced by TikTok?
Under Chew’s leadership, TikTok has taken proactive steps to address regulatory challenges, including enhancing data privacy measures, engaging in transparent dialogue with regulators, and restructuring operations to comply with local laws and guidelines in various markets.
How does Shou Zi Chew plan to expand TikTok’s user base and market reach?
Chew focuses on localizing content and features to cater to diverse global audiences, investing in technology to enhance user experience, and exploring new markets with untapped potential. He also seeks strategic partnerships to broaden TikTok’s ecosystem and reach.
How has his leadership style influenced TikTok’s corporate culture and innovation?
Shou Zi Chew’s leadership is characterized by a focus on innovation, inclusivity, and adaptability, fostering a corporate culture that encourages creativity, collaboration, and a forward-thinking approach to challenges and opportunities in the digital space.
What future technologies is TikTok investing in to enhance user experience and content creation?
TikTok is investing in advanced AI, augmented reality (AR), and machine learning technologies to enhance content personalization, improve user experience, and offer new creative tools for content creators, ensuring the platform remains at the forefront of digital innovation.
Final Words
Shou Zi Chew’s remarkable journey from a young student in Singapore to the CEO of TikTok, the short-video platform owned by ByteDance, is a testament to his visionary leadership, strategic acumen, and unwavering commitment to innovation.
His tenure at TikTok has been marked by significant achievements, including navigating regulatory challenges, spearheading philanthropic initiatives, and driving the platform’s global expansion.
Chew’s leadership has not only propelled TikTok to unprecedented success but has also positioned the platform as a pivotal player in the digital landscape, influencing content creation, social interaction, and digital marketing strategies across the globe.
EMI (equated monthly installment) cards are the most preferred medium for taking short-term credit, with 49 per cent of borrowers choosing the mode in 2023 owing to higher trust and faster disbursals, as per a survey by Home Credit India, the local arm of Dutch consumer finance provider Home Credit.
Further, embedded finance has gained traction, with 50 per cent of borrowers open to opting for the same during e-shopping. However, the share of the segment has fallen 10 per cent y-o-y due to stringent RBI regulations on BNPL (Buy Now, Pay Later) and PPI (prepaid payment instruments) products, leading to fewer offers.
Trends in borrowing have shifted from ‘running the household’ in 2021 to ‘consumer durables’ in 2023, with 44 per cent of borrowers purchasing smartphones and home appliances in 2023. However, on the whole, the share of consumer durable loans declined by 9 per cent, whereas business-related borrowing increased by 5 per cent.
The ‘How India Borrows Survey 2023’ was conducted across 17 cities with data from 1,842 borrowers in the age group of 18–55 years with an average monthly income of ₹31,000.
One-fourth of borrowers opted for the online channel for availing loans, even as loans initiated through telecalling increased from 16 per cent in 2022 to 19 per cent in 2023, and those through POS or bank branches declined from 56 per cent to 51 per cent.
“Over half the borrowers (51 per cent) are looking forward to completing their entire future loan application on the mobile app without any physical interaction with POS or banks. The preference for online loan mediums is primarily driven by younger and aspirational small-town borrowers,” the report said, highlighting cities such as Dehradun, Ludhiana, Ahmedabad, and Chandigarh.
Concerning trend
The report also highlighted a concerning trend: only 18 per cent of borrowers understood data privacy rules, while 88 per cent had only a superficial understanding. Further, only 23 per cent understood how loan apps use their personal data.
While 60 per cent of borrowers were worried about how their personal data is collected and used, and 58 per cent believed the apps collect more data than required, nearly 60 per cent — especially borrowers from Tier-I towns — said they don’t have control over the data being shared by them.