Reddit, Meta, and Google voluntarily “complied with some of the requests” for identifying details of users critical of Immigration and Customs Enforcement (ICE), according to an anonymously sourced New York Times report. The requests came as part of a recent wave of administrative subpoenas the Department of Homeland Security has been sending to Big Tech over the past few months.
Those three companies, plus Discord, have received “hundreds” of such requests from DHS recently. Meta, it should be noted, is the parent company of Instagram, Facebook, and WhatsApp.
Administrative subpoenas used for this purpose represent an escalation. This tool, which comes not from a judge but from DHS itself, was formerly reserved for situations like child abductions, according to the Times.
The users were targeted because their posts “criticized ICE or pointed to the locations of ICE agents,” the Times says.
A Google spokesperson responded to the Times with a statement: “When we receive a subpoena, our review process is designed to protect user privacy while meeting our legal obligations,” adding, “We inform users when their accounts have been subpoenaed, unless under legal order not to or in an exceptional circumstance. We review every legal demand and push back against those that are overbroad.”
Gizmodo requested comment from Meta, Discord, and Reddit. We will update if we hear back.
According to the Times, at least some of the companies involved have said that they notify users of these requests from DHS and give them a 14-day window to “fight the subpoena in court” before complying.
Amazon has also been accused of at least some degree of cooperation with ICE’s ongoing mass deportation efforts. In October, Amazon-owned Ring announced a partnership with Flock that would loop the AI-powered network into the content coming from users’ doorbell cameras. According to a 404 Media investigation, that network feeds information to law enforcement agencies at the local and federal levels, raising reasonable concern that ICE has access to all that footage.
Protesters have launched an effort called “Resist and Unsubscribe” targeting ten tech companies they perceive as exceptionally supportive of ICE. That list includes Meta, Google, and Amazon, but not Reddit.
French ice dancer Guillaume Cizeron and his partner, Laurence Fournier Beaudry, took home the gold in their free dance on Wednesday, February 11, at the 2026 Winter Olympics, but Cizeron’s former partner isn’t interested in cheering him on.
“Logging off xxx,” Gabriella Papadakis wrote via Instagram on Wednesday alongside a photo of a pack of cigarettes and a glass of wine.
Papadakis, 30, is now retired, but opened up in her new memoir about her experience working with Cizeron, 31. In it, she accuses the gold medalist of being “controlling, demanding, critical” during their time as partners. She eventually refused to skate with him without a coach present, feeling she was “under his grip.”
Cizeron has pushed back on the allegations. “In the face of this smear campaign, I want to express my incomprehension and disagreement with the labels attributed to me,” he said. “The book contains false information, including statements I never made, which I consider serious. For more than 20 years, I have shown deep respect for Gabriella Papadakis. Despite the gradual erosion of our bond, our relationship was built on equal collaboration and marked by success and mutual support.”
Papadakis hasn’t backed down. An Olympic gold medalist herself, she previously served as a commentator for NBC but was ultimately let go ahead of the 2026 games. NBC told The New York Times in January that her book created a “clear conflict of interest” with Cizeron set to compete.
Papadakis also took aim at the culture around ice dancing in her memoir, explaining that it lends itself to an environment where men have all the power.
“The environment I was working in had become deeply unhealthy,” she wrote. “I was exhausted, physically and psychologically, and I had to leave to protect myself. I had no choice.”
She has been following the Olympics from home. Papadakis posted a video via Instagram on Sunday, February 8, where she encouraged fans to remember “whose voices are excluded from the arena and to engage critically with a spectacle that is built upon erasure and abuse.”
“I’m sharing my experiences because I believe in a sport where young athletes don’t have to endure what I did in order to achieve their dreams,” she wrote in the caption. “It is however incredibly difficult to make sport safer when survivors’ voices are still being silenced. I had to end my competitive career because I could no longer tolerate abuse. And now, as a result of speaking up about it I’ve lost my job.”
Papadakis continued, “I don’t single myself out as a victim. I use my experience to highlight a reality: as long as survivors are punished for speaking out, the sport cannot truly change or become safer. As the Winter Olympics unfold, I encourage you to engage critically with the spectacle. Spectators have power, and the way we choose to watch, support, question, or look away helps shape the culture of the sport.”
Russian authorities have taken new measures to ensure they can monitor all communications by people inside the country, officially blocking access to the popular, Meta-owned messaging app WhatsApp.
WhatsApp said in a statement shared Thursday on social media that Russia had “attempted to fully block WhatsApp in an effort to drive people to a state-owned surveillance app,” calling it an attempt to isolate “over 100 million users from private and secure communication.”
WhatsApp called it a “backwards step” that would lead to “less safety for people in Russia.”
People look at their phones while riding an escalator in the Moscow metro, Feb. 12, 2026, as Russian officials confirmed the popular messaging service WhatsApp had been blocked over a failure to comply with national laws.
Speaking to reporters Thursday in Moscow, Kremlin spokesman Dmitry Peskov confirmed “a decision was indeed made and implemented” in response to a question on the WhatsApp ban.
He said the decision was taken due to WhatsApp’s unwillingness “to comply with the norms and letter of Russian law.”
The ban appears to stem from Russian legislation that requires all companies listed on a register of online information disseminators to store both personal user details and data on all electronic messages exchanged within Russia, and to make that information available to government agencies.
Roskomnadzor, the Russian federal agency responsible for monitoring — and censoring — mass media in the country, added WhatsApp to that register in late 2024.
WhatsApp said in its statement that it would “do everything we can to keep users connected.”
CBS News found on Thursday that while WhatsApp was blocked for users inside Russia, it was still possible to circumvent the Kremlin’s ban and use the app via a virtual private network (VPN), which is not illegal in the country.
Earlier in the week, another popular messaging app, Telegram, also faced new restrictions in Russia in a move highly criticized by many citizens. According to Roskomnadzor, which, like all Russian government agencies, uses the platform itself to distribute official announcements, Telegram failed to protect users’ personal data.
Telegram founder Pavel Durov, a Russian national who lives in exile in Dubai and faces outstanding allegations in France related to criminal activity on his platform, criticized the move, saying the real motive was political censorship.
“Russia is restricting access to Telegram in an attempt to force its citizens to use a state-controlled app built for surveillance and political censorship,” he said, adding that “restricting citizens’ freedom is never the right answer.”
Russia previously banned a number of social media platforms, including Instagram, Facebook, and X (formerly known as Twitter) in response to what it said was the platforms’ “discrimination” against Russian media following the launch of Moscow’s ongoing full-scale invasion of Ukraine in February 2022.
Russia’s state-backed “MAX” app
The “surveillance app” referred to in the statements by WhatsApp and Telegram’s Durov is a platform called MAX. Launched in 2025 with full backing from the government, it is a multifunction app that includes messaging and e-commerce functions, as well as access to a wide range of government services such as medical and municipal appointments.
Similar to the WeChat app in China, MAX is touted by Russian officials as both a social network and key portal for government services.
Authorities ordered the state-backed app to come pre-installed on all new digital devices sold in Russia beginning last year.
The MAX app logo is displayed on a smartphone screen in front of a Russian flag in Moscow, Russia, Feb. 9, 2026.
The company notes in its legal terms that it can share user data with Russian authorities upon request, but says it does so only after a “mandatory legal assessment is conducted to determine the legality, validity, and adequacy of the requested data volume for the stated purposes,” and that it provides “only the minimum amount of data expressly required by applicable law.”
India’s government last year revoked a previous order for all new devices sold in the country to come pre-loaded with a state-developed and owned communications app, amid an outcry by opposition politicians and privacy organizations warning that it would be intrusive.
The world’s biggest social media companies face several landmark trials this year that seek to hold them responsible for harms to children who use their platforms. Opening statements in one such trial in Los Angeles County Superior Court began on Monday.
Instagram’s parent company Meta and Google’s YouTube face claims that their platforms deliberately addict and harm children. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.
Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining social media companies named as defendants.
Mark Lanier delivered the opening statement for the plaintiffs first, in a lively display where he said the case is as “easy as ABC,” which he said stands for “addicting the brains of children.” He called Meta and Google “two of the richest corporations in history” that have “engineered addiction in children’s brains.”
He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst,” in which the company surveyed 1,000 teens and their parents about their social media use. The two major findings, Lanier said, were that the company knew children who experienced “adverse events” like trauma and stress were particularly vulnerable to addiction, and that parental supervision and controls made little impact.
Internal company documents
He also showed internal Google documents that likened YouTube to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and that employees are “basically pushers.”
At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.
KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time speaking about her childhood, and particularly focused on what her personality was like before she began using social media, saying her mother called her a “creative spark” as a child. She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated elementary school, she had posted 284 videos on YouTube.
The outcome of the trial could have profound effects on the companies’ businesses and how they will handle children using their platforms.
Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media platforms, which she claims had a detrimental impact on her mental health.
Lanier said that while Meta and YouTube publicly maintain that they work to protect children and implement safeguards for their use of the platforms, their internal documents show an entirely different position, with explicit references to young children as their target audiences.
Lanier also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s lack of proactive action about the potential harm their platforms can have on children and teens.
“For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.
“This was only the first case — there are hundreds of parents and school districts in the social media addiction trials that start today, and sadly, new families every day who are speaking out and bringing Big Tech to court for its deliberately harmful products,” said Sacha Haworth, executive director of the nonprofit Tech Oversight Project.
Jurors are not being asked to stop using Facebook, Instagram, YouTube or any other forms of social media throughout the course of the trial — which is expected to last about eight weeks — but Judge Carolyn B. Kuhl emphasized that they should not make any changes to the way they interact with the platforms, including changing their settings or creating new accounts.
Kuhl said that jurors should decide the liability of Meta and YouTube independently when they deliberate.
A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday.
KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.
“Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” the lawsuit says.
Mark Zuckerberg expected to testify
Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.
The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.
A Meta spokesperson said in a recent statement that the company strongly disagrees with the allegations outlined in the lawsuit and that it’s “confident the evidence will show our longstanding commitment to supporting young people.”
José Castañeda, a Google spokesperson, said that the allegations against YouTube are “simply not true.” In a statement, he said, “Providing young people with a safer, healthier experience has always been core to our work.”
The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children’s mental well-being.
In New Mexico, opening statements began Monday in a trial over allegations that Meta and its social media platforms have failed to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez sued Meta and Zuckerberg in late 2023; Zuckerberg was later dropped from the suit.
A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.
In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most filed their lawsuits in federal court, but some sued in their respective states.
TikTok also faces similar lawsuits in more than a dozen states.
Other countries, meanwhile, are enacting new laws to limit social media for children. In January, French lawmakers approved a bill banning social media for children under 15, paving the way for the measure to enter into force at the start of the next school year in September, as the idea of setting a minimum age for use of the platforms gains momentum across Europe.
In Australia, social media companies have revoked access to about 4.7 million accounts identified as belonging to children since the country banned use of the platforms by those under 16, officials said. The law provoked fraught debates in Australia about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
The British government also said last month it will consider banning young teenagers from social media as it tightens laws designed to protect children from harmful content and excessive screen time.
In the Los Angeles case, the lawsuit casts the plaintiffs as deliberate targets rather than bystanders. “Plaintiffs are not merely the collateral damage of Defendants’ products,” it says. “They are the direct victims of the intentional product design choices made by each Defendant. They are the intended targets of the harmful features that pushed them into self-destructive feedback loops.”

“Recently, a number of lawsuits have attempted to place the blame for teen mental health struggles squarely on social media companies,” Meta said in a recent blog post. “But this oversimplifies a serious issue. Clinicians and researchers find that mental health is a deeply complex and multifaceted issue, and trends regarding teens’ well-being aren’t clear-cut or universal. Narrowing the challenges faced by teens to a single factor ignores the scientific research and the many stressors impacting young people today, like academic pressure, school safety, socio-economic challenges and substance abuse.”

In the New Mexico case, prosecutors have said the state is not seeking to hold Meta accountable for its content but rather for its role in pushing out that content through complex algorithms that proliferate material that can be harmful. They say they uncovered internal documents in which Meta employees estimate that about 100,000 children every day are subjected to sexual harassment on the company’s platforms.

Meta denies the civil charges while accusing Torrez of cherry-picking select documents and making “sensationalist” arguments. The company says it has consulted with parents and law enforcement to introduce built-in protections to social media accounts, along with settings and tools for parents.

—

Ortutay reported from Oakland, California. Associated Press Writer Morgan Lee in Santa Fe, New Mexico, contributed to this story.
MILAN — U.S. President Donald Trump on Sunday said that it is hard to cheer for American Olympians who are speaking out against administration policies, calling one such critic “a real Loser” who perhaps should have stayed home.
It was the latest and most prominent example of U.S. Olympians at the Milan Cortina Games inviting online backlash with their words.
Reporters on Friday asked U.S. athletes at a news conference how they feel representing the country during the Trump administration’s heightened immigration enforcement actions. Freestyle skier Hunter Hess replied that he had mixed emotions, since he doesn’t agree with the situation, and that he is in Milan competing on behalf of everyone who helped get him to the Games.
“If it aligns with my moral values, I feel like I’m representing it,” Hess said. “Just because I’m wearing the flag doesn’t mean I represent everything that’s going on in the U.S.”
Among those who piled on Hess were YouTuber-turned-boxer Jake Paul.
“From all true Americans If you don’t want to represent this country go live somewhere else,” he wrote on X, where he has 4.4 million followers. Minutes later, he was photographed sitting beside U.S. Vice President JD Vance at the U.S. women’s hockey game in Olympic host city Milan.
Trump said the next day that Hess’ comments make it hard to root for him.
“Hess, a real Loser, says he doesn’t represent his Country in the current Winter Olympics. If that’s the case, he shouldn’t have tried out for the Team, and it’s too bad he’s on it,” he wrote on his Truth Social account.
Hess wasn’t the only athlete voicing discontent – or facing blowback
At Friday’s news conference with the athletes, freestyle skier Chris Lillis referenced Immigration and Customs Enforcement, saying he’s “heartbroken” about what is happening in the U.S.
“I think that, as a country, we need to focus on respecting everybody’s rights and making sure that we’re treating our citizens as well as anybody, with love and respect,” Lillis said. “I hope that when people look at athletes compete in the Olympics, they realize that that’s the America that we’re trying to represent.”
And U.S. figure skater Amber Glenn said the LGBTQ+ community has had a hard time during the Trump administration.
In addition to Paul, conservative figures criticizing the athletes on social media include former NFL quarterback Brett Favre, actor Rob Schneider and U.S. Rep. Byron Donalds – whom Trump has endorsed for the Florida gubernatorial race in November. And there was a flood of vitriol directed at the athletes from ordinary Americans.
Glenn posted on Instagram that she had received “a scary amount of hate / threats for simply using my voice WHEN ASKED about how I feel.” She added that she will start limiting her social media use for her well-being.
In response to questions from The Associated Press, the U.S. Olympic and Paralympic Committee said in a statement Sunday that it is aware of an increasing amount of abusive and harmful messages directed toward the athletes and was doing its best to remove content and report credible threats to law enforcement.
“The USOPC stands firmly behind Team USA athletes and remains committed to their well-being and safety, both on and off the field of play,” it said.
Anti-ICE protests in Italy
Support for the U.S. abroad has eroded as the Trump administration has pursued an aggressive posture on foreign policy, including punishing tariffs, military action in Venezuela and threats to invade Greenland.
During the opening ceremony, Team USA athletes were cheered on, but jeers and whistles could be heard as Vance and his wife, second lady Usha Vance, were shown on the stadium screens, waving American flags from the tribune.
In Milan, several demonstrations have broken out against the local deployment of ICE agents – even after clarification that they are from an investigations unit that is completely separate from the enforcement unit at the forefront of the immigration crackdown in the U.S.
Homeland Security Investigations, an ICE unit that focuses on cross-border crimes, frequently sends its officers to overseas events like the Olympics to assist with security. The ICE arm seen in the streets of the U.S. is known as Enforcement and Removal Operations, and there is no indication its officers were sent to Italy.
A demonstration on Saturday drew thousands of protesters. Toward its end, a small number of them clashed with police, who fired tear gas and a water cannon. That followed another demonstration last week, when hundreds protested the deployment of ICE agents.
___
Associated Press writer Graham Dunbar contributed to this report.
Moments after the news broke about the apparent abduction of “Today” show host Savannah Guthrie’s mother, the floodgates opened on social media.
Influencers relayed the timeline from the hours after Nancy Guthrie was last seen and posted photos of blood found on her front porch that was later matched to the 84-year-old grandmother. Others called out individuals connected to the case as looking “sus” or filmed themselves walking through her neighborhood to help find her.
The desperate search for Guthrie, who authorities believe was taken a week ago against her will from her home just outside Tucson, Arizona, has become the latest investigation to pique the widespread interest of online armchair detectives.
As the search continues with no suspects or persons of interest, posts across Instagram, TikTok, X, Facebook and YouTube have put millions of eyeballs on tips and theories surrounding her disappearance. But they’ve also helped to amplify rumors and forced law enforcement to repeatedly set the record straight on at least one crucial detail.
Michael Alcazar, adjunct professor at John Jay College of Criminal Justice and retired New York Police Department detective, said overall the positives outweigh the negatives when it comes to the onslaught of social media posts.
“More people are aware; it keeps people alert,” he said. “If they know she hasn’t been found yet, perhaps people will remember that and if they see something, they might say something.”
He compared it to the widespread online response to the disappearance and death of Gabby Petito in 2021 and the impact that may have had on her body being found.
Two YouTubers said at the time that an image they posted showed Petito and her boyfriend’s white van and that it led investigators to the area where her body was found. But the FBI didn’t specify what led to the discovery.
“I think it’s just something that we have to adapt to as far as law enforcement,” Alcazar said. “The true crime community is growing. … There’s a lot of people out there that want to help.”
But with the widespread posts also comes the proliferation of misinformation.
Ashleigh Banfield, from the cable network NewsNation, announced on her podcast Wednesday that a law enforcement source told her a Guthrie family member is the prime suspect. She seemed to quickly walk back the statement seconds later, saying the person “may be a prime suspect,” and adding that family members are often looked at first. The information quickly took off across social media, with people posting photos of the person she named.
Pima County Sheriff Chris Nanos addressed the rumor early in a news conference Thursday, saying authorities don’t have any suspects or persons of interest. That remained the case Friday.
“I plead with you to be careful of what it is we put out there. … You could actually be doing some damage to the case, you could do some damage to the individual, too,” he said later in the news conference. “Social media’s kind of an ugly world sometimes.”
Other posts have included a medium expressing her feeling that Guthrie is close by and a woman using astrology to point her viewers in the direction of what may have happened.
Calvin Chrustie, who has more than three decades of experience in negotiations for kidnapping, ransom and extortions, said if the public truly understood the toll those situations can have on family and law enforcement, they might not hastily post unsubstantiated information.
“This stuff on X and other stuff out there that’s pure speculation is actually making it more difficult for the families and making it more difficult to the police to secure the safe, you know, the safe return of the hostage,” he said.
Julie Urquhart, an elementary school teacher in New Brunswick, Canada, has been posting about the case on TikTok, Instagram and Facebook. She said she was drawn to the disappearance because she has a mother near Guthrie’s age and was fascinated that someone could have taken her seemingly without a trace.
Urquhart said her information comes from national news sites and law enforcement news conferences. One of her posts on TikTok and Instagram amassed more than 4 million views, she said.
“That’s 4 million eyes that now saw that story and now maybe will see something or know something or know someone who does,” she said. “There’s just so many people it hits.”
__
Associated Press reporter Safiyah Riddle in Montgomery, Alabama, contributed.
LONDON — The European Union on Friday accused TikTok of breaching the bloc’s digital rules with “addictive design” features including autoplay and infinite scroll, in preliminary charges that strike at the heart of the popular video sharing app’s operating model.
EU regulators said their investigation found that TikTok hasn’t done enough to assess how its features could harm the physical and mental health of users, including children and “vulnerable adults.”
The European Commission said it believes TikTok should change the “basic design” of its service. The commission is the EU’s executive arm and enforcer of the 27-nation bloc’s Digital Services Act, a sweeping rulebook that requires social media companies to clean up their platforms and protect users, under threat of hefty fines.
TikTok denied the accusations.
“The Commission’s preliminary findings present a categorically false and entirely meritless depiction of our platform, and we will take whatever steps are necessary to challenge these findings through every means available to us,” the company said in a statement.
TikTok now has a chance to reply to the commission’s findings, which could lead to a so-called non-compliance decision and possible fine worth up to 6% of the company’s total annual revenue.
“Social media addiction can have detrimental effects on the developing minds of children and teens,” Henna Virkkunen, the commission’s executive vice-president for tech sovereignty, security and democracy, said in a press statement. “The Digital Services Act makes platforms responsible for the effects they can have on their users. In Europe, we enforce our legislation to protect our children and our citizens online.”
The preliminary findings from Brussels are the latest example of pressure that TikTok and other social media platforms are facing over youth addiction.
Australia has banned social media for under-16s while governments in Spain, France, and Denmark want to introduce similar measures. In the U.S., TikTok last month settled a landmark social media addiction lawsuit while two other companies named in the suit — Meta’s Instagram and Google’s YouTube — still face claims that their platforms deliberately addict and harm children.
The commission said that TikTok fuels the urge to keep scrolling because it constantly rewards users with new content, leading to reduced self-control.
It said TikTok ignores signs that someone is compulsively using the app, such as the amount of time that minors spend on it at night, and how often the app is opened.
The company has failed to put in place “reasonable, proportionate and effective” measures to offset the risks, it said.
The commission said TikTok’s existing time management controls are easy to dismiss and “introduce limited friction,” while parental tools need “additional time and skills” from parents.
Changes that the commission wants TikTok to make include disabling features like infinite scroll; putting in more effective breaks for screen time, including at night; and changing its “highly personalized” recommender system, which feeds users an endless stream of video shorts based on their preferences.
TikTok says it has numerous tools, such as custom screen time limits and sleep reminders, that let users make “intentional decisions” about how they spend their time on the app.
Snap is on a mission to diversify its revenue sources — moving from a business model in which it largely chases ad revenue to one where it can also make money through subscriptions and, eventually, hardware. The company’s latest quarterly earnings report shows that, so far, the firm is having moderate success with that strategy.
In Q4, Snap’s revenue was $1.7 billion, which is up 10% year-over-year. Its average revenue per user was also up, slightly (to $3.62 from $3.44). The company’s net income was $45 million, up from $9 million the previous year, its earnings report shows.
The company has also continued to generate a significant amount of revenue from Snap+, the paid subscription service that the platform launched back in 2022. The service’s subscribers grew 71% year-over-year, reaching 24 million.
While those numbers might seem to suggest a company whose trajectory is headed in the right direction, the earnings report also shows the platform had slightly fewer daily active users last quarter — dropping from 477 million to 474 million. Those users fell away in North America and Europe, the report shows, while growing slightly throughout the rest of the world.
Reuters also reports that the company expects its revenue during the first quarter of this year to be below analysts’ previous estimates, as competition from Facebook, Instagram, and TikTok cuts into its advertising earnings.
During Wednesday’s earnings call, CEO Evan Spiegel focused on the company’s newer offerings, including its recent effort to charge users for Memories storage — a feature that lets users save and store their Snaps — and its plans to launch Specs later this year. The company has not launched a public-facing version of the augmented-reality glasses since 2019. In anticipation of that event, Snap recently announced the creation of a new subsidiary, Specs Inc., that is focused solely on further developing the glasses.
“Our long-term vision for augmented reality extends beyond the smartphone to a future when computing is more natural, contextual and seamlessly integrated into the real world,” said Spiegel. The CEO added that it was important to develop a “strong standalone brand” for Specs, as he said the hardware product could appeal “to a different audience segment” than the “core Snapchat audience.”
That said, it sounds like the strategy behind Specs may not be entirely ironed out yet. Later in the conversation, Spiegel continued: “We’re so close to launch that the key here is really just, you know, nailing the launch and making sure that we deliver an extraordinary product. And then, you know, I think we have a lot of flexibility to think about how we want to capitalize [on] it moving forward.”
SEPTA is moving away from using social media to alert riders about bus and trolley delays, shifting instead to real-time updates on its website, app and third-party platforms like Google Maps.
Spain will join the growing list of countries banning access to social media for children, Prime Minister Pedro Sanchez announced Tuesday. The law will apply to users under 16 years of age amidst a broader push to hold social media companies accountable for hate speech, social division and illegal content.
Speaking at the World Governments Summit in Dubai, Prime Minister Sanchez excoriated social media, calling it a “failed state” where “laws are ignored and crime is endured.” He spoke to the importance of digital governance for these platforms, highlighting recent examples like X’s AI chatbot Grok generating sexualized images of children and the myriad incidents that have taken place on Facebook.
In light of what Sanchez called the “integral” role social media plays in the lives of young users, he said the best way to help them is to “take back control.” Next week, his government will enact a slew of new regulations, with a ban on users under 16 years of age among them. Social media companies will be required to implement what he calls “effective age verification systems” and “not just checkboxes.” A specific timeline on enforcement of the coming ban has not been announced.
Spain will also make “algorithmic manipulation and amplification of illegal content” into a new criminal offense and Sanchez says tech CEOs will face criminal liability for hateful or illegal content on their platforms. The Prime Minister further announced that Spain has formed a coalition with five other unnamed European nations to enact stricter governance over social media platforms.
Sanchez said children have been “exposed to a space they were never meant to navigate alone,” and that it’s the government’s job to intervene. He added social media has fallen from its promise to be a “tool for global understanding and cooperation.”
Australia enacted an under-16s ban on social media last year, which has prompted many nations to follow suit. A similar ban is under consideration in the UK, while other countries, including France and Denmark, have announced plans to enact similar measures.
PARIS — French prosecutors raided the offices of Elon Musk’s social media platform X on Tuesday as part of a preliminary investigation into a range of alleged offences, including spreading child sexual abuse images and deepfakes.
The investigation was opened in January last year by the prosecutors’ cybercrime unit, the Paris prosecutors’ office said in a statement. It’s looking into alleged “complicity” in possessing and spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organized group, among other charges.
Prosecutors also asked Elon Musk and former CEO Linda Yaccarino to attend “voluntary interviews” on April 20. Employees of X have also been summoned that same week to be heard as witnesses, the statement said. Yaccarino was CEO from May 2023 until July 2025.
A spokesperson for X did not respond to a request for comment.
In a message posted on X, the Paris prosecutors’ office announced the ongoing searches at the company’s offices in France and said it was leaving the platform while calling on followers to join it on other social media.
“At this stage, the conduct of the investigation is based on a constructive approach, with the aim of ultimately ensuring that the X platform complies with French law, as it operates on the national territory,” the prosecutors’ statement said.
European Union police agency Europol “is supporting the French authorities in this,” Europol spokesperson Jan Op Gen Oorth told The Associated Press, without elaborating.
The investigation was first opened following reports by a French lawmaker alleging that biased algorithms on X were likely to have distorted the functioning of an automated data processing system.
It was later expanded after Musk’s artificial intelligence chatbot Grok generated posts that allegedly denied the Holocaust and spread sexually explicit deepfakes, the statement said. Holocaust denial is a crime in France.
Grok wrote in a widely shared post in French that gas chambers at the Auschwitz-Birkenau death camp were designed for “disinfection with Zyklon B against typhus” rather than for mass murder — language long associated with Holocaust denial.
Grok was built by Musk’s artificial intelligence company, xAI, and is integrated into his X platform.
In later posts on its X account, the chatbot acknowledged that its earlier reply was wrong, said it had been deleted and pointed to historical evidence that Zyklon B in Auschwitz gas chambers was used to kill more than 1 million people.
Grok has a history of making antisemitic comments. After complaints, Musk’s company took down posts from the chatbot that appeared to praise Adolf Hitler.
X is also under pressure from the EU. The 27-nation bloc’s executive arm opened an investigation last month after Grok spewed nonconsensual sexualized deepfake images on the platform.
Brussels has already hit X with a 120-million euro (then-$140 million) fine for shortcomings under the bloc’s sweeping digital regulations, including blue checkmarks that broke the rules on “deceptive design practices” and risked exposing users to scams and manipulation.
Paris, France — French authorities have asked Elon Musk to appear to answer questions as part of a probe into his social media platform X, the Paris prosecutor’s office said Monday, as authorities searched X’s office in the French capital.
“Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the Paris prosecutor’s office said in a statement.
French cybercrime authorities were carrying out a search, meanwhile, at X’s offices in Paris, the prosecutor’s office said.
The summonses for Musk and Yaccarino and the search at the X office were related to an investigation launched in January 2025 over complaints about how X’s algorithm recommends content to users and gathers data, the prosecutor’s office said. Officials have previously raised concern that the way X works could amount to political interference.
The investigation is to ensure that X is in compliance with French laws, and the prosecutor added that it was broadened last year after reports that X was allowing users to share nonconsensual, AI-generated sexually explicit imagery and Holocaust denial content.
Elon Musk, CEO of Tesla and SpaceX, and Shivon Zilis, a venture capitalist, arrive to attend the wedding of Dan Scavino, White House Deputy Chief of Staff, and Erin Elmore, the Department of State Director of Art in Embassies, at President Trump’s Mar-a-Lago resort in Palm Beach, Florida, Feb. 1, 2026. (Saul Loeb/AFP/Getty)
X and Musk have dismissed the French investigation, and similar probes by European Union and British authorities, as baseless, politically motivated attacks on free speech.
Yaccarino resigned as CEO of X in July last year after two years at the helm of the company.
The investigation is being led by the cybercrime unit of the prosecutor’s office, in conjunction with French police and the joint European policing agency Europol.
A CBS News investigation found late last month that the Grok AI tool on Musk’s X platform still allowed users in the U.S., U.K. and EU to digitally undress people without their consent, despite public pledges from the company to stop the function.
The Grok chatbot, both via its standalone app and for premium X account holders using the platform, allowed people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis.
A request for comment on the findings of CBS News’ investigation was met with an apparent auto-reply from Musk’s company xAI, saying only: “Legacy media lies.”
Scrutiny of the Grok feature has mounted rapidly in recent months, with the British government warning X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and EU regulators announcing their own investigation into the Grok AI editing function on X in late January.
CBS News found Grok was still enabling users to digitally undress people in photos weeks after X said earlier in January that it had “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”
Moltbook was launched last week by a software developer and mirrors the template of Reddit, but it’s not for humans. Instead, it allows artificial intelligence agents to post written content and interact with other chatbots through comments, up-votes and down-votes. Tyler Cowen, professor of economics at George Mason University, joins CBS News to discuss.
As linear TV fades, social platforms are racing to become the next big screen for entertainment. (Nikos Pekiaridis/NurPhoto via Getty Images)
Is social media the new TV? Cable and linear television have been in decline for years, especially as younger generations consume more entertainment on their phones. In response, traditional studios and streaming services have been experimenting with social platforms. Peacock tested the waters by uploading clips from its comedy Killing It to TikTok, while Paramount broke its 2006 film Mean Girls into several parts on the same platform.
At the same time, microdramas—short, bite-sized video series designed for mobile viewing—have surged in popularity. Networks like TelevisaUnivision and Telemundo have been launching original microdramas. Earlier this month at CES, Disney announced it would begin releasing “microcontent” on Disney+. But what happens when social media doesn’t just live on phones and starts moving into traditional TV screens and living rooms?
In December, Instagram announced it was testing an “Instagram for TV” app that allows users to watch Reels on their televisions. TikTok previously made a similar push with TV apps before discontinuing them to comply with new laws.
On the advertiser side, Pinterest recently acquired connected TV (CTV) ad-buying platform tvScientific, signaling that the company believes advertising dollars may start shifting toward living room viewing for its platform.
That shift is already underway. Social video is now the second-most-watched video type on TVs, according to research from Parks Associates.
Jennifer Kent, SVP and principal analyst at Parks Associates, said this trend is blurring the lines between traditional video media and social video strategies, particularly as YouTube, Instagram and TikTok push for more TV-based viewing.
Kent added that this also correlates with the growth of the creator economy, as traditional media companies partner with creators or launch initiatives dedicated to creator content. Amazon MGM Studios, for example, has collaborated with popular creators like MrBeast on projects such as Beast Games to produce more premium programming. YouTube has also announced efforts to introduce more episodic formats for creator content.
“Lines are blurring all over,” Kent said. “Everybody on the big screen wants to mimic what’s happening on social media, and everyone on social media wants to be on the big screen.”
She added, “The important impact of all of these social video platforms coming to the big screen is the way that they are raising expectations for everybody else that’s on the big screen—to be more interactive, to be more creative with formats, to engage with new creators that can speak to audiences in different ways.”
The growing pains of social media platforms
The roughly $15 billion decline of the U.S. linear TV market has accelerated this experimentation, said Max Willens, a principal analyst at eMarketer. However, he noted that growing competition has also made social platforms more sensitive to slowing growth. For years, platforms could rely on two assumptions: that more users would join each year, and that those users would spend more time on their apps. That is no longer the case.
According to eMarketer, time spent on social media in the U.S. is flatlining and is expected to begin gently declining starting next year.
“Combine social media platforms realizing they don’t have the easy path toward incremental growth with the increasingly spread-out competition, and they face a lot of pressure to try to establish a beachhead on television screens as the budgets that used to go to linear advertisers come up for grabs,” Willens told Observer.
Still, moving into living rooms isn’t a new idea. Willens pointed to YouTube, which launched as a desktop platform, became mobile-first, and is now a major force in TV viewing.
YouTube has also said that more than 150 million Americans watch the platform on TV screens. Nielsen’s Media Distributor Gauge report found that YouTube captured 13.4 percent of TV viewing time, outpacing Disney’s 9.4 percent share. eMarketer research shows that Americans now spend roughly equal time watching YouTube on TV and on their phones.
“That balance is going to persist over the next couple of years,” Willens predicted. “When you add all those things together, it’s not hard to understand why the social platforms are trying to position themselves on the biggest screen in the house.”
Looking ahead, Willens said both media companies and social platforms will need to adjust their strategies as viewing habits continue to shift.
“They’re all just screens at the end of the day, but it’s not like television has gone away,” he said. “Televisions are not just these big dusty boxes that our grandparents are looking at. They are still central hubs of leisure time for consumers of every age. So, advertisers and media companies have to figure out what’s different about that consumption and adjust their strategies accordingly.”
TikTok, which is under new ownership in the U.S., said Sunday that it has restored service after outages last week that marred user experiences. The social network has over 220 million users in the U.S.
The company blamed last week’s snowstorm, which caused an outage at an Oracle-operated data center responsible for TikTok operations.
“We have successfully restored TikTok back to normal after a significant outage caused by winter weather took down a primary U.S. data center site operated by Oracle. The winter storm led to a power outage which caused network and storage issues at the site and impacted tens of thousands of servers that help keep TikTok running in the U.S. This affected many of TikTok’s core features—from content posting and discovery to the real-time display of video likes and view counts,” the company said in a post on X.
In January, the U.S. finalized the deal to create a separate entity for TikTok. A U.S.-based investor consortium called TikTok USDS took a controlling 80% stake, with the remaining 20% ownership held by ByteDance.
Following the deal finalization — which coincided with the snowstorm — users experienced glitches in features like posting, searching within the app, slower load times, and time-outs. TikTok noted that creators might see zero views on their posts until the problem was resolved. Later, the company said that it was working on solving the issue, but outages persisted, and users faced problems with posting content.
TikTok’s transition to a new ownership structure, paired with app snafus and user experience glitches, was beneficial for some other social networks. The Mark Cuban-backed short video app Skylight, which is built on the AT protocol, saw its user base soar to more than 380,000 users in the week the deal was finalized. Upscrolled, a social network by Palestinian-Jordanian-Australian technologist Issam Hijazi, also climbed in App Store rankings to reach the second spot in the social media category in the U.S. The app was downloaded 41,000 times within days of the TikTok deal’s finalization, according to analyst firm AppFigures.
Before the internet we didn’t know much about remote areas, simply because we didn’t have access to that information. We had no idea what happened, for instance, in Antarctica, because everything we knew was very difficult to access, and extremely slow. We would resort to books, newspapers, and, more recently, to television. But even when a relatively fast information medium was accessible, like television, it was usually filtered: we had either state television, or, in capitalist countries, privately owned television. Both of them had their own agendas, which were more often than not diverging from the factual truth.
We’re now at the peak of the internet. The most advanced human technology, which makes virtually any spot on this planet accessible, is now commonplace. Chances are that someone at the research station in Antarctica is tweeting right now, or, who knows, even live-streaming. But, guess what, the truth about that place is still elusive. Why? Because we cannot trust the internet anymore. AI not only made everything fakeable, but it made it extremely cheap too. Anyone can create a deepfake now, for free. That live-stream? Maybe already made weeks ago and running in a loop from some building in Cambodia.
Technology Doesn’t Equal Trust
Just because a technology is sufficiently advanced to achieve impressive results, it doesn’t mean that technology is inherently trustworthy. At its helm are still humans. And humans are flawed. The moment some tool grants them access to more wealth or power, their ingrained greed kicks in.
I would go even further and say that the more advanced and predictable a tech is, the more it will be hijacked, for profit or control.
We came to a point where trust should be our base currency, not performance, or intelligence. Trust is more important for survival than intelligence now. You can be a very intelligent person, but if you trust the wrong sources, you’re fucked.
And here comes the million-dollar question: how do you develop trust? How do you practice it? How do you become a trustworthy person?
Counterintuitively, it’s by coming back to basics. To real-life interactions (outside social media), to technology-stripped communication (in person, not on video calls), to material stores of value (gold, not Bitcoin). You see, we grew up as the wunderkind generation, believing that tech will fix humans. I still remember when Vitalik Buterin, the co-founder of Ethereum, called a few years ago for artificial wombs, in a strikingly similar way to the Matrix. Why artificial? What’s wrong with the real wombs?
Comfort Doesn’t Equal Truth
Technology made our lives incredibly comfortable. We can grow food anytime, anywhere; we can fly anytime, anywhere; we can talk to anyone, anywhere. This comfort shaped our expectations the wrong way. Just because we can do some things easily, we now expect everything to be easier – and when it’s not, we pour more technology into the fabric of reality.
The end result is not truth, it’s more confusion.
Our lives are more comfortable, but slowly drifting away from truth, in a never-ending sea of confusion.
The way back to solid ground is difficult, but doable. It requires discipline, skill and the willingness to experience reality in a raw, unfiltered way. And the unshakeable commitment that we are the masters of technology, not the other way around.
Yesterday, Apple blogger John Gruber of Daring Fireball posted the overall most popular iPhone apps for all of 2025, and the top five were:
ChatGPT
Threads
Google
TikTok
WhatsApp
I’m not the first person to point this out, but it’s not exactly a stretch to infer that the three apps that have suddenly squeezed in between ChatGPT and Threads are on the list due to dissatisfaction with TikTok. Two are VPN apps, which can theoretically be used to access TikTok from a virtual network in a country where the U.S. version of TikTok is unnecessary, and one, UpScrolled, is an Australian video and text sharing app that recently went viral.
To refresh your memory on what’s going on with TikTok: after years of U.S. efforts to force Chinese-owned ByteDance to relinquish ownership and let a U.S.-friendly buyer take over, a legal entity was created earlier this month that can take ownership of TikTok, with Adam Presser as its new CEO. This allows TikTok to comply with a new U.S. law essentially requiring TikTok to be run by a U.S. company or be banned.
But this entity, a complex joint corporate venture in charge of U.S. operations for TikTok, appears from the outside to be struggling to keep everything in order, amid the handoff from TikTok’s Singapore base of operations (U.S. TikTok data was already largely housed in the U.S., so it’s not clear if this transition actually involves any large, burdensome data transfers).
The issues TikTok is referring to dovetail with the problems users have described, like videos that sit in review indefinitely and posts that get low or zero view counts, often despite high numbers for other engagement metrics like comments or shares. Other general issues consistent with a data center interruption include a possible lack of analytics in TikTok Studio, livestreamers apparently getting random messages saying they need to stop streaming immediately, and irrelevant search results.
However, the hiccups at TikTok are, at least in part, being perceived as the technical consequences of a right-wing takeover. That’s partly because the 15 percent stake in TikTok U.S. now held by Oracle is controlled by the right-wing billionaire Larry Ellison, and the ownership transition is, of course, being shepherded along by the Trump administration. And that’s not to mention the fact that the Biden-era push to ban TikTok emerged amid paranoia that it was turning the youth into Maoist, Hamas-supporting terrorists.
Gizmodo reached out to TikTok’s U.S. joint venture for clarification about the causes of the platform’s recent problems. In a reply, we received links to statements on X, including the one from Oracle. We followed up, specifically asking if any content rules had been changed since the ownership transition. We will update if we hear back.
Around Sunday, TikTok users started writing that they felt like their political posts were being censored.
“TikTok has been under new leadership for like a day and I made a slideshow with posts from the ICE rally today and it immediately got [put] under review and is not being published,” wrote Bluesky user @pnwpolicyangel.bsky.social.
“TikTok is cooked. They won’t even post my last two videos — I can see them, but anyone else who goes to my profile won’t even see them. Overnight, our federal government has silenced and suppressed dissent [on] one of our largest platforms. Not just content, but everything from certain people.”
It would be corporate malpractice to roll out such insidious and restrictive policies right out of the gate like this, particularly amid the present backdrop of political upheaval. Once again, TikTok still has not commented on this speculation from some of its users.
But if it’s true that users are flocking to other options for political reasons despite no hard evidence that the new TikTok U.S. joint venture has already begun some kind of crackdown on political speech, that also doesn’t necessarily mean they’re misled. They might just expect changes along the lines of what happened at Twitter when Elon Musk took over. Content standards there took a hard right turn very quickly. So with that in mind, some TikTok users might just be leaving preemptively at the first sign of an annoying glitch in order to avoid enduring even worse changes that they perceive to be on the horizon.