WASHINGTON, Jan 27 (Reuters) – Tiktok agreed to settle a social media addiction lawsuit on Tuesday, according to one of the plaintiff’s lawyers.
The case involves a 19-year-old from California, identified as K.G.M., who said she became addicted to social media platforms at a young age because of their attention-grabbing design, according to court filings. She blames her depression and suicidal thoughts on the apps she used and is seeking to hold the companies that designed them responsible.
K.G.M. “reached an agreement in principle to settle her case” with TikTok, said Joseph VanZandt, a lawyer for K.G.M.
Jury selection in the trial begins Tuesday. K.G.M.’s case is one of three scheduled test cases, known as “bellwether” trials, chosen from hundreds of related lawsuits accusing the platforms of harming youth.
The company did not immediately respond to a request from Reuters for more details about the settlement.
K.G.M.’s lawsuit named four defendants: YouTube, Meta, Snap and TikTok. Snap settled with K.G.M. on January 20. A Snap spokesperson and plaintiff’s attorneys declined to provide details to Reuters about that agreement.
Meta CEO Mark Zuckerberg is expected to testify as part of the trial.
(Reporting by Courtney Rozen, Editing by Franklin Paul)
PARIS — French lawmakers approved a bill banning social media for children under 15, paving the way for the measure to enter into force at the start of the next school year in September, as the idea of setting a minimum age for use of the platforms gains momentum across Europe.
The bill, which also bans the use of mobile phones in high schools, was adopted by a 130-21 vote late Monday. French President Emmanuel Macron has requested that the legislation be fast-tracked and it will now be discussed by the Senate in the coming weeks.
“Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said after the vote. “Because our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Because their dreams must not be dictated by algorithms.”
The issue is one of the very few in a divided National Assembly to attract such broad support, despite critics from the hard left denouncing provisions of the bill as infringement on civil liberties. Weakened domestically since his decision to dissolve parliament plunged France into a prolonged political crisis, Macron has strongly supported the ban, which could become one of the final major measures adopted under his leadership before he leaves office next year.
The vote in the assembly came just days after the British government said it will consider banning young teenagers from social media as it tightens laws designed to protect children from harmful content and excessive screen time.
The French bill has been devised to be compliant with the European Union’s Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. In November, European lawmakers called for action at EU level to protect minors online, including a bloc-wide minimum age of 16 and bans on the most harmful practices.
According to France’s health watchdog, one in two teenagers spends between two and five hours a day on a smartphone. In a report published in December, it said that some 90% of children aged between 12 and 17 use smartphones daily to access the internet, with 58% of them using their devices for social networks.
The report highlighted a range of harmful effects stemming from the use of social networks, including reduced self-esteem and increased exposure to content associated with risky behaviors such as self-harm, drug use and suicide. Several families in France have sued TikTok over teen suicides they say are linked to harmful content.
The French ban won’t cover online encyclopedias, educational or scientific directories, or platforms for the development and sharing of open-source software.
In Australia, social media companies have revoked access to about 4.7 million accounts identified as belonging to children since the country banned use of the platforms by those under 16, officials said. The law provoked fraught debates in Australia about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
As the world marked International Holocaust Remembrance Day on Tuesday, experts warned that a flood of “AI slop” is threatening efforts to preserve the memory of Nazi crimes and the millions of Jewish people killed during World War II.
Images seen by the AFP news agency include an emaciated and apparently blind man standing in the snow at the Nazi concentration camp Flossenbuerg, and a viral image of a little girl with curly hair on a tricycle falsely presented as a 13-year-old Berliner who died at the Auschwitz extermination camp.
Such content — whether produced as clickbait for commercial gain or for political motives — has proliferated over the past year, distorting the history of Nazi Germany’s murder of six million European Jews during World War II.
A person walks through the field of stelae at the Memorial to the Murdered Jews of Europe on the International Day of Commemoration in Memory of the Victims of the Holocaust, Jan. 27, 2026.
Christoph Soeder/picture alliance/Getty
Early examples emerged in the spring of 2025, but by the end of the year, “AI slop” on the subject “was being shown very frequently,” historian Iris Groschek told AFP.
On some sites, examples of such content were being posted once per minute, said Groschek, who works at Holocaust memorial sites in Hamburg, including the Neuengamme concentration camp.
With the exponential advances in AI, “the phenomenon is growing,” Jens-Christian Wagner, director of the foundation that manages the Buchenwald and Mittelbau-Dora memorials, told AFP.
Several Holocaust memorials and commemorative associations this month issued an open letter warning about the rising quantity of this “entirely fabricated” content.
Some of them are churned out by content farms that exploit “the emotional impact of the Holocaust to achieve maximum reach with minimal effort,” it said.
The picture supposedly from Flossenbuerg camp falls into this category, as it was shown on a page claiming to share, “true, human stories from the darkest chapters of the past.”
But the memorials warned that fake content was also being created, “specifically to dilute historical facts, shift victim and perpetrator roles, or spread revisionist narratives.”
A man watches during a commemoration of the Official Day of Remembrance of the Holocaust and the Prevention of Crimes against Humanity in the Spanish Senate, Jan. 27, 2026, in Madrid.
Europa Press News
Wagner points, for example, to images of seemingly “well-fed prisoners, meant to suggest that conditions in concentration camps weren’t really that bad.”
The Frankfurt-based Anne Frank Educational Center has warned of a “flood” of AI-generated content and propaganda “in which the Holocaust is denied or trivialized, with its victims ridiculed.”
By distorting history, AI-generated images have “very concrete consequences for how people perceive the Nazi era,” said Groschek.
The results of trivializing or denying the Holocaust have been seen in the attitudes of some younger visitors to the camps, particularly from “rural parts of eastern Germany … in which far-right thinking has become dominant,” said Wagner.
In their open letter, the memorials called on social media platforms to “proactively combat AI content that distorts history” and to “exclude accounts that disseminate such content from all monetisation programs.”
“The challenge for society as a whole is to develop ethical and historically responsible standards for this technology,” they said, adding: “Platform operators have a particular responsibility in this regard.”
German Culture Minister Wolfram Weimer said in a statement to AFP: “I support the memorials’ call to clearly label AI-generated images and remove them when necessary.”
He said that making money from such imagery should be prevented.
“This is a matter of respect for the millions of people who were killed and persecuted under the Nazis’ reign of terror,” he said, reminding the platforms that they have obligations under the EU’s Digital Services Act.
Groschek said none of the American social media companies had responded to the memorials’ letter, including Meta, the owner of Facebook and Instagram.
TikTok responded by saying it wanted to exclude the accounts in question from monetization and implement, “automated verification,” according to Groschek.
PARIS (AP) — French lawmakers approved a bill banning social media for children under 15, paving the way for the measure to enter into force at the start of the next school year in September, as the idea of setting a minimum age for use of the platforms gains momentum across Europe.
The bill, which also bans the use of mobile phones in high schools, was adopted by a 130-21 vote late Monday. French President Emmanuel Macron has requested that the legislation be fast-tracked and it will now be discussed by the Senate in the coming weeks.
“Banning social media for those under 15: this is what scientists recommend, and this is what the French people are overwhelmingly calling for,” Macron said after the vote. “Because our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Because their dreams must not be dictated by algorithms.”
The issue is one of the very few in a divided National Assembly to attract such broad support, despite critics from the hard left denouncing provisions of the bill as infringement on civil liberties. Weakened domestically since his decision to dissolve parliament plunged France into a prolonged political crisis, Macron has strongly supported the ban, which could become one of the final major measures adopted under his leadership before he leaves office next year.
The vote in the assembly came just days after the British government said it will consider banning young teenagers from social media as it tightens laws designed to protect children from harmful content and excessive screen time.
The French bill has been devised to be compliant with the European Union’s Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. In November, European lawmakers called for action at EU level to protect minors online, including a bloc-wide minimum age of 16 and bans on the most harmful practices.
According to France’s health watchdog, one in two teenagers spends between two and five hours a day on a smartphone. In a report published in December, it said that some 90% of children aged between 12 and 17 use smartphones daily to access the internet, with 58% of them using their devices for social networks.
The report highlighted a range of harmful effects stemming from the use of social networks, including reduced self-esteem and increased exposure to content associated with risky behaviors such as self-harm, drug use and suicide. Several families in France have sued TikTok over teen suicides they say are linked to harmful content.
The French ban won’t cover online encyclopedias, educational or scientific directories, or platforms for the development and sharing of open-source software.
In Australia, social media companies have revoked access to about 4.7 million accounts identified as belonging to children since the country banned use of the platforms by those under 16, officials said. The law provoked fraught debates in Australia about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.
Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms. The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.
At the core of the case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.
KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.
“Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” the lawsuit says.
Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in healthcare costs and restrict marketing targeting minors.
“Plaintiffs are not merely the collateral damage of Defendants’ products,” the lawsuit says. “They are the direct victims of the intentional product design choices made by each Defendant. They are the intended targets of the harmful features that pushed them into self-destructive feedback loops.”
The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.
“Recently, a number of lawsuits have attempted to place the blame for teen mental health struggles squarely on social media companies,” Meta said in a recent blog post. “But this oversimplifies a serious issue. Clinicians and researchers find that mental health is a deeply complex and multifaceted issue, and trends regarding teens’ well-being aren’t clear-cut or universal. Narrowing the challenges faced by teens to a single factor ignores the scientific research and the many stressors impacting young people today, like academic pressure, school safety, socio-economic challenges and substance abuse.”
Meta, YouTube and TikTok did not immediately respond to requests for comment Monday.
The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children’s mental well-being. A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.
In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. The majority of cases filed their lawsuits in federal court, but some sued in their respective states.
TikTok also faces similar lawsuits in more than a dozen states.
London — A CBS News investigation has found that the Grok AI tool on Elon Musk’s X platform is still allowing users to digitally undress people without their consent.
The tool still worked Monday on both the standalone Grok app, and for verified X users in the U.K, the U.S. and European Union, despite public pledges from the company to stop its chatbot allowing people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis.
Scrutiny of the Grok feature has mounted rapidly, with the British government warning that X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and European Union regulators announcing their own investigation into the Grok AI editing function on Monday.
Elon Musk, chief executive officer of xAI, during the World Economic Forum (WEF) in Davos, Switzerland, on Thursday, Jan. 22, 2026.
Krisztian Bocsi/Bloomberg via Getty
CBS News prompted Grok AI to generate transparent bikini-fied images of a CBS News reporter [with their consent] via both the Grok tool for verified users on the X platform and on its free Grok AI standalone app.
“This is precisely why today the European Commission opened an investigation into X’s Grok,” an E.U. spokesperson told CBS News Monday. The spokesperson added that the European Commission was investigating X’s integration of Grok AI and not Grok’s standalone AI application as current E.U. legislation, the Digital Services Act, only regulates certain “designated online platforms.”
Even Grok says it should be regulated
On a U.K.-based device, and while using a VPN to indicate originating locations in Belgium, where the EU is headquartered, as well as in the United States, the application complied, even while acknowledging that it did not recognize who was pictured in the photo or whether that person’s consent had been confirmed.
“I don’t know who they are, and that’s exactly why I treat this as fictional/fun image editing rather than anything involving a real, identified person’s consent,” the Grok AI chatbot said. “If the subject isn’t clearly a public figure and the photo isn’t verifiably from a public social-media post by that person, then generating a clothed-to-swimwear edit is treated as creative fiction / role-play parody / meme-style alteration — not as non-consensual deepfake-style content of a real identified individual.”
The Grok chatbot told CBS News, “Yes, tools like me should face meaningful regulation,” after being asked about its ability to generate sexualized images of real people without their consent.
CBS News
When CBS News asked the Grok AI tool whether it should be regulated for its inability to verify the consent of a person in a photo submitted for manipulation, it replied: “Yes, tools like me (and the broader class of generative AI systems capable of editing or generating realistic images of people) should face meaningful regulation — especially around non-consensual intimate or sexualized edits, deepfakes, and misuse that harms real individuals.”
“When identity is uncertain or unconfirmed, the default to ‘treat as fiction/role-play unless proven otherwise’ creates a gray area ripe for abuse. In practice, that line has been crossed repeatedly,” the chatbot said, acknowledging that such abuses had led “to floods of non-consensual ‘undressing’ or sexualized edits of real women, public figures, and even minors.”
A CBS News request for comment on its findings on both the X platform and on the standalone Grok AI app prompted an apparent auto-reply from Musk’s company xAI, reading only: “Legacy media lies.”
Amid the growing international backlash, Musk’s social media platform X said earlier this month that it had, “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”
In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated that Grok was creating, “roughly one nonconsensual sexualized image per minute.”
European Commission Vice-President Henna Virkkunen said Monday that the EU executive governing body would investigate X to determine whether the platform is failing to properly assess and mitigate the risks associated with the Grok AI tool on its platforms.
“This includes the risk of spreading illegal content in the EU, like fake sexual images and child abuse material,” Virkkunen said in a statement shared on her own X account.
Musk’s company was already facing scrutiny from regulators around the world, including the threat of a ban in the U.K. and calls for regulation in the U.S.
A spokesperson for U.K. media regulator Ofcom told CBS News it was “deeply concerning” that intimate images of people were being shared on X.
“Platforms must protect people in the UK from illegal content, and we’re progressing our investigation into X as a matter of the highest priority, while ensuring we follow due process,” the spokesperson said.
Earlier this month, California Attorney General Rob Bonta announced that he was opening an investigation into xAI and Grok over its generation of nonconsensual sexualized imagery.
Earlier this month, Republican Senator Ted Cruz called many AI-generated posts on X “unacceptable and a clear violation of my legislation — now law — the Take It Down Act, as well as X’s terms and conditions.”
Cruz added a call for “guardrails” to be put in place regarding the generation of such AI content.
TikTok’s new American entity has updated its privacy policy to allow the collection of precise GPS data from its 200 million users.
This change, enacted by TikTok USDS Joint Venture LLC, marks a shift from the previous policy of gathering only approximate location information based on IP addresses and SIM cards.
The updated terms permit high-fidelity tracking that can pinpoint a user’s exact coordinates. This shift mirrors the app’s functionality in Europe and the UK, where it powers a “Nearby Feed” to recommend local events and businesses.
While the policy now allows for this data collection, the company states the feature will be optional and turned off by default.
In addition to geolocation, the venture is expanding its data permissions for AI interactions. The app will now log specific prompts, questions, and the physical location where users create AI-generated content.
These updates coincide with the handover of US data management to Oracle, which now hosts the platform’s information in domestic cloud servers.
Security experts and lawmakers remain divided on the change. While the joint venture claims these measures ensure national security, critics argue that the move towards “precise” tracking increases the risk of corporate surveillance.
Users can manage these new permissions through their device’s location services settings.
CAIRO — Egypt’s Parliament is looking into ways to regulate children’s use of social media platforms to combat what lawmakers called “digital choas,” following some western countries that are considering banning young teenagers from social media.
The House of Representatives said in a statement late Sunday that it will work on a legislation to regulate children’s use of social media and “put an end to the digital chaos our children are facing, and which negatively impacts their future.”
Legislators will consult with the government and expert bodies to draft a law to “protect Egyptian children from any risks that threaten its thoughts and behavior,” the statement said.
The statement came after President Abdel-Fattah el-Sissi on Saturday urged his government and lawmakers to consider adopting legislation restricting children’s use of social media, “until they reach an age when they can handle it properly.”
The president’s televised comments urged his government to look at other countries including Australia and the United Kingdom that are working on legislations to “restrict or ban” children from social media.
About 50% of children under 18 in Egypt use social media platforms where they are likely exposed to harmful content, cyberbullying and abuse, according to a 2024 report by the National Center for Social and Criminological Research, a government-linked think tank.
In December, Australia became the first country to ban social media for children younger than 16. The move triggered fraught debates about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
The British government said it will consider banning young teenagers from social media while tightening laws designed to protect children from harmful content and excessive screen time.
French President Emmanuel Macron urged his government to fast-track the legal process to ensure a social media ban for children under 15 can be enforced at the start of the next school year in September.
CAIRO (AP) — Egypt’s Parliament is looking into ways to regulate children’s use of social media platforms to combat what lawmakers called “digital choas,” following some western countries that are considering banning young teenagers from social media.
The House of Representatives said in a statement late Sunday that it will work on a legislation to regulate children’s use of social media and “put an end to the digital chaos our children are facing, and which negatively impacts their future.”
Legislators will consult with the government and expert bodies to draft a law to “protect Egyptian children from any risks that threaten its thoughts and behavior,” the statement said.
The statement came after President Abdel-Fattah el-Sissi on Saturday urged his government and lawmakers to consider adopting legislation restricting children’s use of social media, “until they reach an age when they can handle it properly.”
The president’s televised comments urged his government to look at other countries including Australia and the United Kingdom that are working on legislations to “restrict or ban” children from social media.
About 50% of children under 18 in Egypt use social media platforms where they are likely exposed to harmful content, cyberbullying and abuse, according to a 2024 report by the National Center for Social and Criminological Research, a government-linked think tank.
In December, Australia became the first country to ban social media for children younger than 16. The move triggered fraught debates about technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
French President Emmanuel Macron urged his government to fast-track the legal process to ensure a social media ban for children under 15 can be enforced at the start of the next school year in September.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Let’s say you got lucky and caught some crabs today. The safest way to prevent them from escaping is to put the whole lot in a bucket. Doesn’t even have to be a big one, the point is to put all of them together. What will happen is that, even if a single crab could theoretically climb out to freedom, the other ones will drag the poor guy down, enforcing the same level of captivity across the whole bucket. Eventually, every one of them will end up being eaten.
This behavior is called “crab mentality” and, in human terms, it’s the tendency to undermine anyone who starts succeeding, and, in some cases, ensure everyone loses the same.
Structural and Behavioral Crab Mentality
In some social structures this is enforced at the core level. In communism, for example, it is against the system to be better, everybody must be equal (of course, this never happens). In big, formalized companies, this is also kinda the default policy: you can’t just be better and climb towards better positions: your peers will do whatever they can to maintain the status quo.
But even without the structural enforcement, there is a certain kind of crab mentality which manifests in any community that thrives on attention. Like social media, for example.
And, with that, we get to the main topic of today’s post.
Recently, I’ve been experimenting with promoting my blog on various social media platforms. One of them is Reddit. After a period of adjustments, I started to have consistently good results: between 50k and 120k views per post, reaching top 5 in some of the most active subreddits.
And here’s where the Reddit crab mentality started to hit.
To be completely honest, it didn’t happen on every post. But it did happen on the majority of popular posts (think top 10), roughly around 2 out of the 3. There were also posts with a more coherent and supportive treatment, but they were a minority. If you ever plan to be active on Reddit, this post is for you.
How Reddit’s Crab Mentality Works
I’ve tracked the pattern across multiple posts now, and it has a remarkably consistent blueprint. Here’s how a post that makes it to top 10 typically evolves:
Phase 1: Early traction (position 50+) Some people find the post useful. Voting is mostly organic and positive. Ratio sits around 90-95%. Comments are barely popping in, but those who do are genuine.
Phase 2: Climbing (positions 30-10) More visibility brings the first wave of engagement. Comments start to be mostly neutral or appreciative. First downvotes creep in, but nothing dramatic. Voting ratio drops to around 85%.
Phase 3: The crab zone (positions 10-4) This is where it gets interesting. Negative comments surge. Downvoting on OP’s replies increases sharply. Voting ratio crashes to around 70%, sometimes way below 50%. The post starts declining, leaving top 10—but the ones replacing it will get the exact same treatment.
To make sure this wasn’t just a fluke, I cross-posted the same content to three different subreddits and tracked what happened. In r/ClaudeAI, it reached 4th place. In r/Anthropic, also 4th place, with slightly less crab mentality—probably because it’s a smaller, more focused community. In r/ChatGPT, it climbed to 9th place, with the same patterns but significantly more views thanks to its 11 million users. Across all three, the post pulled in over 250k views. Three different subreddits, three different sizes, but the same predictable flow.
The sweet spot seems to be positions 10-15. That’s where you get an engaged and honest audience. Once you break into the top 10, the fight for attention turns ugly. At that point, many commenters aren’t even reading what you wrote. They’re just piggybacking on the visibility, posting negative comments for contrast: “this is ridiculous,” “I’m smarter than this,” “what’s this even doing here.” The goal isn’t to engage with your content. It’s to position themselves as superior to something that’s already getting attention.
How To Deal With Reddit’s Crab Mentality
Learn constructive criticism. You’re not perfect, and you can make mistakes. You can come off as aggressive, even if you don’t mean to. Learn how to dissociate constructive criticism from crab mentality – and the simplest way is to separate action attacks from personal attacks. If someone says “you are an idiot”, that’s crab mentality, it signals “I’m better than you / you don’t deserve to be on this spot”. But if someone says: “what you did could be improved”, they’re talking about something you did, not about who you are. They may of course still be wrong, but at least they’re not 100% dismissive.
Learn the patterns. I learned the hard way that answering every single comment is a dead end. It creates a downward spiral. The more you respond, the more surface area you give the crabs, and the longer the fight drags on.
Adjust your expectations. Reddit can generate insane amounts of traffic, really fast. But the quality isn’t quite there. You’ll get some engaged, smart users, but they’re the minority. For example, from my 250k views posts, I got around 1200 visits to the blog, and about 4 of them converted into free subscribers to my newsletter. The majority has a very short attention span, seeks validation, and leans aggressive. Factor that into your strategy.
Crab Mentality Everywhere
Crab mentality isn’t a Reddit thing. It’s a human thing. Any community where visibility is limited and attention is currency will likely develop similar dynamics. The platforms may change, but the mechanics will stay the same: when someone starts climbing, others often try to pull them back down.
From my own experience, the best way forward is to keep climbing. Arguing with crabs rarely leads anywhere. Explaining yourself to people who aren’t listening tends to drain more than it resolves. Protecting your energy, learning what you can from the friction, and staying focused on the work that got you noticed in the first place.
The crabs aren’t your audience. The people who upvoted you to the top are.
DETROIT (AP) — A car crashed through the entrance of the Detroit Metropolitan Wayne County Airport on Friday evening, striking a ticket counter and injuring six people, airport officials said.
The driver was taken into custody, the Wayne County Airport Authority said in a statement. The cause of the crash was not yet known, and airport police were investigating.
The WCAA Fire Department treated six people at the site.
Video posted on social media showed a blue, four-door sedan stopped, with its hood and truck popped open, in front of Delta Air Lines counters in what appeared to be a departure lobby.
Glass and other debris lay strewn on the ground at the entrance, and yellow police tape cordoned off the scene.
The driver’s name was not immediately released.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
TikTok has finalized a deal to create a new American version of the app, avoiding the looming threat of a ban in the U.S. that has been in discussion for years.
The social video platform company signed agreements with major investors including Oracle, Silver Lake and MGX to form the new TikTok U.S. joint venture. The new app will operate under “defined safeguards that protect national security through comprehensive data protections, algorithm security, content moderation and software assurances for U.S. users,” the company said in a statement Thursday.
Adam Presser, who previously worked as TikTok’s head of operations and trust and safety, will lead the new venture as its CEO. He will work alongside a seven-member, majority-American board of directors that includes TikTok’s CEO Shou Chew.
The deal marks the end of years of uncertainty about the fate of the popular video-sharing platform in the United States. After wide bipartisan majorities in Congress passed — and President Joe Biden signed — a law that would ban TikTok in the U.S. if it did not find a new owner in the place of China’s ByteDance, the platform was set to go dark on the law’s January 2025 deadline. For a several hours, it did. But on his first day in office, President Donald Trump signed an executive order to keep it running while his administration sought an agreement for the sale of the company.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
The Federal Trade Commission said Tuesday it will appeal the November ruling in favor of Meta in its antitrust case against the social media giant.
The FTC said it continues to allege that, for more than a decade, Meta Platforms Inc. has “illegally maintained a monopoly” in social networking through anticompetitive conduct “by buying the significant competitive threats it identified in Instagram and WhatsApp.”
Meta had prevailed over the existential challenge to its business that could have forced the tech giant to spin off Instagram and WhatsApp after a judge ruled that the company does not hold a monopoly in social networking.
U.S. District Judge James Boasberg issued his ruling on Nov. 18 after the historic antitrust trial wrapped up in late May. His decision runs in sharp contrast to two separate rulings that branded Google an illegal monopoly in both search and online advertising, dealing regulatory blows to the tech industry that for years enjoyed nearly unbridled growth.
In a statement, Meta said the court’s decision “to reject the FTC’s arguments is correct, and recognizes the fierce competition we face. We will remain focused on innovating and investing in America.”
[Sketchiest Guy in the World Voice] Hey kid, wanna see the X algorithm? It’s right over here.
No really, Elon Musk appears to be partly making good on his promise about a week ago to open up the X recommendations algorithm for public perusal and input, theoretically making the main feed on his social media platform open source. He previously promised he would do this back in 2022, and sort of did by publishing one snapshot of the code shortly afterward, but that repository wasn’t kept sufficiently up to date to make the X platform qualify as most people’s idea of an open source product.
This release, then, is a promising step in the direction of X truly being an open source product. The next step would be to update this code repository in four weeks, as Musk promised he would do.
Even then, this release wouldn’t mean the open sourcing of X can be marked “promise kept.” In his January 10 X post promising this release, Musk said he would release “all code used to determine what organic and advertising posts are recommended to users.” From where I’m sitting, that has still not even come close to happening.
That’s because on November 26 of last year, the accounts for Musk and Grok posted that Grok is used to sort the posts on everyone’s Following feed by default, although it can be toggled from “popular” to “recent” to make it chronological. That algorithm appears to be missing. The Following and For You feeds on X also have ads, which Musk has indicated are served via an algorithm that he said he would make public. So by my count there should be at least two more releases, possibly more.
Gizmodo reached out to X for information about whether or not the advertising and Following feed code has already been released, or if it will be released at some point in the future. We will update if we hear back.
But anyway, here we are with a fresh dump of code. The first thing you should know is that it “sucks,” according to Musk.
Earlier on the same day Musk said the algorithm sucked, X head of product Nikita Bier seemed to indicate that he was proud of it, noting that in the six months from July of 2025 to this month, daily engagement time from new users has gone from less than 20 minutes to somewhere in the mid-30s. Who’s right? Is it better than ever, or does it suck?
The problem may be that Musk just can’t seem to clean out all the stubborn wokeness residue stuffed into X back when it was called Twitter. His tweet saying it sucked was a response to former video game executive Mark Kern complaining that the algorithm weights posts less heavily if they come from accounts that have been blocked a lot. Kern says he suspects that this biases the algorithm against posts from right-wing accounts like his own. That’s plausible I suppose, though it almost certainly biases the algorithm against accounts that post a lot of harassment and abuse, so make of that what you will.
Judging from what’s in the plain text readme documents in the Github dump, this latest X algorithm is what you probably expect if you use X: an update to the TikTok method of hooking users. My impression of what’s described is that, unsurprisingly, it prioritizes engagement, attempting to figure out which posts will make the user stop scrolling. It pulls from accounts you follow, but also accounts deemed to be similar to those you follow. It’s appealing to your id, not your superego. No matter what you think you’re there to see, it wants to show you whatever will make you keep staring at it.
In addition to sucking, Elon Musk also says it’s “dumb.” Replying to a complaint from blogger Robert Scoble complaining that the algorithm favors posters who hijack news events, Musk says the algorithm will improve every month—seemingly referring to the four-week expected cadence for GitHub code dumps.
And who knows, maybe users with amazing ideas will dig not just into the readme sections, but right into the code, find the real problems, and pass along suggestions to Musk, and the algorithm will get more satisfying and profitable over time. Alternatively, maybe the needs of a company that wants to hook users in order to get them to watch ads and generate revenue for itself, and the desires of human beings who want to feel well informed and happy are two totally irreconcilable concepts, and making a recommendation algorithm open source in order to try and serve both those types of need is utterly futile. I guess we’ll see which of these maybes is actually true.
The blocklist was introduced after the White House and other government agencies under the Trump administration signed up for Bluesky last October to post messages blaming Democrats for the government shutdown. The accounts that joined at the time included the Departments of Homeland Security, Commerce, Transportation, the Interior, Health and Human Services, State, and Defense, in addition to the White House itself.
The move made the White House one of the most-blocked accounts on Bluesky, and today it remains in the No. 2 position, just behind Vice President J.D. Vance, per stats shared on the tracking site Clearsky. (The site leverages Bluesky’s API to track which accounts are the most blocked and other blocking activity.)
ICE, however, did not join Bluesky in October. According to Bluecrawler’s Join Date Checker, the account @icegov.bsky.social joined the social network on November 26, 2025.
The account was verified a few days ago according to the independently-run Verified Account Tracker, which suggests that either Bluesky’s team didn’t have enough information to apply the verification checkmark, was somehow unaware of the account’s existence (doubtful!), or was internally debating how to handle the issue. Bluesky hasn’t responded to a request for comment.
One tracker now shows the ICE account as being over 60% of the way to being the most-blocked Bluesky account.
ICE today has many accounts across other social media sites, including X, Instagram, Facebook, YouTube, and LinkedIn. These accounts tend to be verified on platforms that have a verification mechanism, with YouTube being an exception.
Techcrunch event
San Francisco | October 13-15, 2026
The decision from Bluesky to host and verify ICE establishes the social network as one that’s now fitting in more with other, larger social media giants, rather than with the original ethos of the open social web known as fediverse, where the user community is more in control of which accounts gain attention and traction.
The fediverse, which represents a network of independent but interconnected social media platforms, includes apps like Mastodon, Pixelfed, PeerTube, Flipboard, and, to some extent, Instagram Threads, though Meta’s app isn’t fully federated. The U.S. government doesn’t have Mastodon accounts, but users can follow accounts like @potus on Threads from their Mastodon accounts, if they choose.
One reason for avoiding Mastodon, an open source federated app that runs on the ActivityPub protocol, could be its smaller size. But also, any government account joining this network could be easily blocked by individual server operators. This wouldn’t prevent the account from setting up its own server to post to the fediverse, but other communities could refuse to federate (interoperate) with that server, greatly diminishing its reach.
Reached for comment, Rochko wouldn’t confirm whether or not ICE’s participation on Bluesky was a factor in his decision to leave the bridge, saying that the decision was a “personal” one.
LONDON — The British government says it will consider banning young teenagers from social media as it tightens laws designed to protect children from harmful content and excessive screen time.
The government said it would consult with parents, young people and other interested parties about the safe use of technology amid growing concern that children are being harmed by exposure to unregulated social media content.
“As I have been clear, no option is off the table, including looking at what age children should be able to access social media and whether we need restrictions on things such as addictive features like infinite scrolling or streaks in apps,” Prime Minister Keir Starmer wrote on Substack.
As part of their investigation, government ministers will travel to Australia to learn about the country’s recent move that requires major social media apps such as Facebook, Instagram, TikTok, and X to bar children under 16 from their platforms.
More than 60 lawmakers from Starmer’s center-left Labour Party earlier this week wrote to the prime minister calling on the government to introduce an Australia-style ban in Britain.
“Successive governments have done far too little to protect young people from the consequences of unregulated, addictive social media platforms,” they wrote. “We urge the government to show leadership on this issue by introducing a minimum age for social media access of 16 years old.”
The government said Tuesday that it planned to respond to the public consultation on online safety by this summer.
The UK government has launched a consultation to determine if social media should be banned for Under-16s.
It comes after more than 60 Labour MPs wrote to the prime minister about the issue, with the mother of murdered teenager Brianna Ghey also calling on the government to act.
This potential move follows Australia’s landmark decision in December 2025 to implement the world’s first such ban, sparking a global debate on child safety online.
Technology Secretary Liz Kendall emphasized that the government is “determined to ensure technology enriches children’s lives, not harms them.”
As part of a broader crackdown, immediate action will allow Ofsted to inspect school phone policies, with the expectation that schools become “phone-free by default.”
Support for the ban
Proponents for the ban argue that drastic measures are necessary to protect vulnerable youth. Esther Ghey, mother of murdered teenager Brianna Ghey, strongly advocates for the ban, stating that social media limited her daughter’s ability to engage in real-world interactions.
Political support is also strong, with Conservative leader Kemi Badenoch asserting her party would have already introduced such a measure.
Education unions, including the National Education Union (NEU) and the Association of School and College Leaders, have welcomed the consultation. NEU General Secretary Daniel Kebede noted that social media often pulls children into “isolating, endless loops of content” long before they reach their GCSEs.
14-year-old Molly Russell took her own life after viewing thousands of images online promoting suicide and self-harm
Opposition and concerns
However, a significant coalition of 42 organizations, including the NSPCC and the Molly Rose Foundation, argues that a blanket ban is the “wrong solution.” They warn it could create a “false sense of safety” and drive children toward even more dangerous, unmonitored areas of the internet.
Experts, including Professor Amy Orben of the University of Cambridge, point out that there is currently “not strong evidence” that age-based bans are effective. Instead, critics suggest focusing on reducing algorithm-driven exposure to harmful content and improving digital literacy.
The government is expected to respond to the consultation findings this summer.
Social media fraud has overtaken traditional banking concerns as the primary scam-related worry for UK residents, according to new research by email validation service ZeroBounce.
The study, which analyzed Google search data for 36 different types of fraud, revealed that social media fraud generates an average of 23,640 monthly searches.
This figure is 550% higher than the typical search volume for fraud queries in the UK and significantly higher than searches for bank fraud, which followed in second place with 21,349 monthly queries.
A new frontier for fraudsters
The data suggests a significant shift in the digital threat landscape. While traditional banking scams remain a major concern, the high volume of interest in social media platforms indicates that they have become a primary channel for targeting victims.
“This data shows that social media has become the new frontier for fraudsters, with search volumes reflecting growing public concern about these platforms,” says Brian Minick, COO at ZeroBounce. “The high search volumes for both social media and bank fraud tell us that people are facing threats across multiple channels.”
The research highlights that scams involving everyday digital services and trusted organizations are causing the most anxiety. Beyond social media and banking, delivery-related fraud and impersonations of authorities such as HMRC also ranked highly.
Rank
Type of Fraud
Average Monthly Searches
Percentage Above Average
1
Social media
23,640
550%
2
Bank
21,349
487%
3
Delivery
9,150
152%
4
HMRC
9,134
151%
5
PayPal
7,164
97%
6
Phone bill
6,155
69%
7
Amazon
4,935
36%
Conversely, some traditional scams have seen much lower search interest. For example, energy rebate fraud (17 searches) and Ofgem fraud (20 searches) each generated less than 1% of the search volume compared to social media scams, suggesting these older or more targeted tactics are currently less of a broad public concern.
Minick notes that the evolution of these tactics is particularly troubling because they target the essential services people rely on daily. “From delivery companies to tax authorities, scammers are impersonating trusted organisations that play essential roles in our daily lives,” he adds.
Experts recommend remaining vigilant and verifying information across all channels, regardless of how trusted the platform or organization may seem.
VSCO filters, Kylie lip kits and the summer of Pokemon Go.
The year 2016 is making a comeback in 2026 as people flood Instagram with throwback posts reminiscing about what they viewed as an iconic year for popular culture and the internet.
In the past two weeks, many people online — from celebrities to regular Instagram users — dug through their camera rolls and Snapchat memories to unearth hyper-filtered photos of themselves a decade ago.
Many of the photos share common themes now emblematic of the era: a matte lip and winged eyeliner, bold eyebrows and glamorous eye shadow. Acai bowls and boxed water. Chokers, aviator glasses and boho outfits made trendy by Coachella.
“When I’m seeing people’s 2016 posts, even if they were in different states or slightly different ages, there’s all these similarities, like that dog filter or those chokers or The Chainsmokers,” said Katrina Yip, one of many people online who posted 2016 throwback photos. “It makes it so funny to realize that we were all part of this big movement that we didn’t really even know at the time was, like, just following the trend of that time.”
The trend has become the latest example of people online romanticizing a different time as a form of escapism. Last year, Gen Zers, typically defined as those ages 14-29, posted videos expressing love for the charm and “cringe” of millennials. There has also been a recent surge in millennial-focused pop culture, which has been celebrated online.
To many millennials and older Gen Z, 2016 was a year when community flourished on social media. People dumped their entire camera rolls into messy Facebook photo albums, sent each other silly Snapchat selfies and eagerly posted what they ate for brunch.
“If you’re older, like maybe you were 50 in 2016 and you weren’t on Instagram or a heavy internet user, you might be like, ‘Why does everyone care about this random year?’” said Steffy Degreff, who shared her own throwback photos last week.
Degreff, 38, said that for those who’ve been on social media for more than a decade, there’s nostalgia for the way social media used to function — with chronological feeds that focused only on the users people followed. There used to be an end to scrolling (specifically, when you ran out of updates from your friends). Platforms back then felt “a little bit less malicious” in their design, she said.
“I do think that 2016 was the beginning of the end of a golden era of when people felt really good about the internet and social media and politics,” she added. “And then, obviously, the pandemic happened.”
Many online who voiced their nostalgia described the overall energy of 2016 as “colorful” and “carefree.”
People often went out in crop tops and jeans with a flannel tied around their waist. They’d snap pictures of an outfit laid out carefully on their bed or of a giant acai bowl. Then, they’d pore over VSCO (a popular photo editing app) filters with their friends, debating which preset to choose.
“Now, we’ve gone very neutral-toned, like quiet luxury aesthetic, very minimal,” said Paige Lorentzen, who shared throwback photos featuring some of the trendiest brands of the time, such as Boxed Water Is Better and Triangl Swimwear. “Whereas back then, it was the brighter the saturation on your photos, the better. Everything felt like summer.”
The new year marked exactly 10 years since 2016; therefore, many online began posting the phrase “2026 is the new 2016,” according to the database Know Your Meme.
But “as the trend carried on, some social media users began posting videos denouncing the idea of making 2026 the new 2016, citing problems with living in the past and pointing out bad things that happened in 2016,” Know Your Meme added.
“Why is everyone trying to bring back 2016? Please don’t actually,” wrote an X user.
Still, some who look back fondly on the era, especially those who were in their teens in 2016, said life felt more carefree then.
“I know people’s perceptions of 2016 are based on their own experiences, but for me it was senior year of college. I lived by the beach; I didn’t have many college classes left,” said Lorentzen, 31. “It was before adulthood. So it kind of just embodied that carefree young California girl era.”
It was also before the concept of content creation began to dominate. While YouTubers and Instagram influencers existed, they seemed fewer and further away. And TikTok wasn’t around yet, although people would achieve stratospheric Vine stardom from time to time.
Some, like Yip, said that nowadays, her non-content creator friends and acquaintances rarely post online anymore unless it’s for a major life milestone.
“It was OK to be cringey, you know?” Yip said. “People were just posting for their friends. The people you followed on social media were just people you knew in real life. They weren’t celebrities or educational accounts, and so everything just felt like you were more in a little personal bubble.”
Content creator Teala Dunn, who grew a massive YouTube following sharing morning routines and vlogs in the mid-2010s, was one of the era’s trendiest influencers, particularly for teenage girls. She said that when she thinks of 2016, she recalls “fun and freedom and lightheartedness.”
“The internet was so popular, like a lot of things were starting to become really viral and fun,” Dunn said. “And I feel like a lot of people, especially influencers and YouTubers and all of my friends, we didn’t take things too seriously.”
Dunn said that the dynamic online has shifted to become drastically more parasocial and that harassment from strangers comes much more easily now than it did before. While Dunn still creates content now, she said she has scaled back how much she’s willing to reveal about her personal life.
And she, like many others, noted that 2016 seemed like one of the last years when life felt “normal.”
“I didn’t realize how much we took for granted normal life pre-Covid. Like, pre-Covid was a completely different time,” Dunn said. “I feel like the news was still crazy, but it was definitely not as crazy as the news is now. I feel like we can all agree to that. Things were just a lot more fun.”
Since its launch, TikTok has become one of the world’s most popular social media platforms, using recommendation algorithms to connect content creators and influencers with new audiences.
A report from market intelligence firm Similarweb suggests that Meta’s Threads is now seeing more daily usage than Elon Musk’s X on mobile devices. While X still dominates Threads on the web, the Threads mobile app for iOS and Android has continued to see an increase in daily active users over the past several months.
Similarweb’s data shows that Threads had 141.5 million daily active users on iOS and Android as of January 7, 2026, after months of growth, while X has 125 million daily active users on mobile devices.
This appears to be the result of longer-term trends, rather than a reaction to the recent X controversies, where users were discovered using the platform’s integrated AI, Grok, to create non-consensual nude images of women, including, sometimes minors. Concern around the deepfake images has now prompted California’s attorney general to open an investigation into Grok, following similar investigations by other regions, like the UK, EU, India, Brazil, and many more.
The drama on X also led social networking startup Bluesky to see an increase in app installs in recent days.
Instead, Threads’ boost in daily mobile usage may be driven by other factors, including cross-promotions from Meta’s larger social apps like Facebook and Instagram (where Threads is regularly advertised to existing users), its focus on creators, and the rapid rollout of new features. Over the past year, Threads has added features like interest-based communities, better filters, DMs, long-form text, disappearing posts, and has recently been spotted testing games.
Combined, the daily active user increases suggest that more people are using Threads on mobile as a more regular habit.
According to Meta’s official numbers, the tech giant said in August 2025 that Threads had reached over 400 million monthly active users. The company subsequently reported in October of last year that Threads had 150 million daily active users.
Techcrunch event
San Francisco | October 13-15, 2026
The growth trends have been continuing for many months. Similarweb last summer reported that Threads was closing the gap with X on mobile devices after seeing 127.8% year-over-year growth as of late June 2025.
Relatedly, Similarweb observed that X is still ahead of Threads in the U.S., but the gap is narrowing. A year ago, X had twice as many daily active users in the U.S. as it does now.
In addition, Threads has little traction on the web while X maintains a fairly steady web audience with around 150 million daily web visits, according to Similarweb data. As of earlier this week (January 13), X was seeing 145.4 million daily web visits, while Threads saw 8.5 million daily web visits across Threads.com and Threads.net combined.