The European Commission has found that Meta and TikTok violated rules under the Digital Services Act (DSA) and is now giving them a chance to comply before fining them up to 6 percent of their total worldwide annual turnover. According to the Commission, Facebook, Instagram and TikTok have “put in place burdensome procedures and tools” for researchers who want to request access to public data. This leaves researchers stuck with incomplete or unreliable information when studying topics like how minors are exposed to illegal or harmful content online. “Allowing researchers access to platforms’ data is an essential transparency obligation under the DSA,” the Commission wrote.
In addition, the Commission is charging Meta over the lack of a user-friendly mechanism that would allow users to easily report posts with illegal content, such as child sexual abuse materials. The Commission explained that Facebook and Instagram use mechanisms that require several steps to flag posts, and they use dark interface designs that make reporting confusing and discouraging. All those factors are in breach of DSA rules that require online platforms to give EU users easy-to-use mechanisms to report illegal content.
Under the DSA, users must also be able to challenge social networks’ decisions to remove their posts or suspend their accounts. The Commission found that neither Facebook nor Instagram allow users to explain their sides or provide evidence to substantiate their appeals, which limits the effectiveness of the appeal process.
Meta and TikTok will be able to examine the Commission’s investigation files and to reply in writing to its findings. They’ll also have the opportunity to implement changes to comply with DSA rules, and it’s only if the Commission decides they’re non-compliant that they can be fined up to 6 percent of their global annual turnover. Meta disagreed that it had breached DSA rules, according to the Financial Times. “In the European Union, we have introduced changes to our content reporting options, appeals process, and data access tools since the DSA came into force and are confident that these solutions match what is required under the law in the EU,” it said in a statement. Meanwhile, TikTok said it was reviewing the Commission’s findings but that “requirements to ease data safeguards place the DSA and GDPR in direct tension.” It’s asking regulators for guidance on “how these obligations should be reconciled.”
A federal court ruled that Facebook parent Meta can’t use attorney-client privilege to block internal documents and research related to teen harm, Bloomberg Law reported. The decision is a setback to Meta in its lawsuits against multiple states that accused the company of making its platforms addictive despite knowing they were harmful to teenagers.
Judge Yvonne Williams of the Washington, DC Superior Court found that Meta’s lawyers advised employees to “remove,” “block,” “button up” or “limit” portions of internal studies on the harm of social media to teens’ mental health, in order to limit the company’s legal liability. The court said that this advice appeared to be an attempt to cover up or alter information, meaning it falls under the crime-fraud exception to attorney-client privilege. Meta now has seven days to turn over four documents created between November 2022 and July 2023.
Meta disagreed with the ruling, a spokesperson told Bloomberg in a statement. “These were routine, appropriate lawyer-client discussions and contrary to the District’s misleading claim, no research findings were deleted or destroyed.”
The ruling is related to lawsuits filed in a California court involving dozens of US state attorneys general. Also involved are hundreds of private civil lawsuits filed by parents, teens and school boards against Meta and other platforms around social media addiction and harms. The first trials are scheduled to start in 2026.
More companies are naming chief A.I. officers as A.I. becomes central to strategy, reshaping corporate power and leadership structures.
When A.I. moved from academia to corporate America, it didn’t just change how companies operate—it reshaped what leadership looks like. A title that barely existed a few years ago is now spreading fast: the chief A.I. officer (CAIO). The role signals how deeply A.I. has become embedded in corporate strategy and identity.
According to IBM’s 2025 survey, 26 percent of global enterprises now have a chief A.I. officer, up from 11 percent two years ago. More than half (57 percent) were promoted internally, and two-thirds of executives predict that nearly every major company will have one within the next two years.
The title first appeared in the early 2010s, as deep learning began to take off, but it truly gained momentum after 2023 with the rise of generative A.I. The U.S. government cemented its importance in 2024 through Executive Order 14110, which required every federal agency to appoint a CAIO to oversee A.I. governance and accountability.
The private sector quickly followed suit. A.I. strategists began moving into the C-suite, marking a new kind of leadership role for the algorithmic age.
“A.I. was often a specialist function living under the CTO. Organizations realized A.I. was too strategic to be managed as a side project,” Baris Gultekin, software giant Snowflake’s vice president of A.I., told Observer. “In addition to CAIOs, we often hear that Snowflake customers now also have large internal A.I. councils made up of individuals across departments to strategically and effectively facilitate enterprise-wide A.I. adoption.” Gultekin reports through Snowflake’s product leadership to the CEO.
Some of the most influential chief A.I. officers are already reshaping Big Tech. At Meta, Alexandr Wang, former Scale AI CEO, took on the role in mid-2025, co-leading Meta Superintelligence Labs alongside Nat Friedman, former GitHub CEO. Microsoft’s Mustafa Suleyman, DeepMind co-founder and former Inflection AI CEO, now heads Microsoft AI, overseeing the company’s long-term infrastructure push. At Apple, veteran A.I. leader John Giannandrea continues to guide the company’s A.I. direction, reporting directly to CEO Tim Cook.
Companies beyond tech are also joining the trend. Lululemon appointed Ranju Das as its first chief A.I. and technology officer in September to boost personalization and innovation. Consulting giant PwC recently appointed Dan Priest, former VP and CIO at Toyota Financial Services, as its first CAIO for the U.S. market. Even universities, such as UCLA and the University of Utah, have added CAIOs to coordinate campuswide A.I. strategy.
From CIO to CDO to CAIO
In the 1980s, chief information officers (CIOs) led the IT revolution; in the 2010s, chief data officers (CDOs) rose with big data; now, CAIOs embody the institutionalization of A.I.
“CAIOs are responsible for exploring what parts of the business can be safely delegated to A.I. agents, how teams can properly govern A.I. decisions, the types of infrastructure needed to serve context-rich data to A.I. systems, and much more,” Sean Falconer, head of A.I. at data streaming platform Confluent, told Observer. “CDOs ensure the data is clean, while CIOs ensure it’s accessible. CAIOs ensure data becomes actionable and capable of reasoning, predicting and taking autonomous steps on behalf of the business.”
In industries like banking, health care and retail, CAIOs often act as translators, turning complex A.I. potential into practical results. “They navigate complex legacy processes and cultural resistance, making upskilling and securing organizational willingness to change as critical as building the models themselves,” Snowflake’s Gultekin said.
The rise of the chief A.I. officer also parallels the growing influence of data engineers. A study by Snowflake and MIT Technology Review Insights found that 72 percent of global executives now view data engineers as essential to business success. More than half said data engineers play a major role in shaping A.I. deployment and determining which use cases are feasible.
“Businesses will always require a CIO, which has also evolved over the years into providing strategic guidance to the business rather than just simply an IT function. Where we see overlap (with CAIOs) are areas that are critical to a company, like governance, tech enablement and strategic alignment,” Bhaskar Roy, chief of A.I. & product solutions at business automation platform Workato, told Observer. “The mandate for CAIOs is clear: continuously push the boundaries of what’s possible with A.I., and ensure the organization remains at the forefront of technological change, all while listening to customers’ needs and concerns.”
Meta has removed a deepfake AI video of Irish presidential candidate Catherine Connolly, which featured a false depiction of the politician saying that she’s withdrawing from the election. According to The Irish Times, the AI-generated video was shared nearly 30,000 times on Facebook just days before Ireland’s election on October 24 prior to it being removed from the website. Connolly called the video “a disgraceful attempt to mislead voters and undermine [Ireland’s] democracy” and assured voters that she was “absolutely still a candidate for President of Ireland.”
The video was posted by an account calling itself RTÉ News AI, which is not affiliated with the actual Irish public service broadcaster Raidió Teilifís Éireann. It copied the likenesses not just of Connolly, but also of legitimate RTÉ journalist Sharon Ní Bheoláin and correspondent Paul Cunningham. “It is with great regret that I announce the withdrawal of my candidacy and the ending of my campaign,” the AI version of Connolly said in the fake video. Ní Bheoláin was shown reporting on the announcement and confirming the candidate’s withdrawal from the race. The AI version of Cunningham then announced that the election was cancelled and would no longer take place, with Connolly’s opponent Heather Humphreys automatically winning. Connolly, an independent candidate, is leading the latest polls with 44 percent.
Meta removed the RTÉ News AI account completely after being contacted by the Irish Independent. The company told The Irish Times that it removed the video and account for violating its community standards, particularly its policy prohibiting content that impersonates or falsely represents people. Irish media regulator Coimisiún na Meán said it was aware of the video and had asked Meta about the immediate measures it took in response to the incident. Meta has been struggling to keep deepfake and maliciously edited videos featuring celebrities and politicians under control for years now. The company’s Oversight Board warned it earlier this year that it wasn’t doing enough to enforce its own rules and urged it to train content reviewers on “indicators” of AI-manipulated content.
Meta has thrown billions of dollars at its artificial intelligence efforts. Somehow, that is apparently resulting in fewer people being employed. According to a report from Axios, about 600 people lost their jobs in Meta’s “superintelligence” lab in an effort to create a less “bureaucratic” structure.
The cuts reportedly hit primarily Meta’s FAIR AI research lab, the company’s long-standing AI research unit, as well as its product-related AI teams and AI infrastructure units. “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Meta chief AI officer Alexandr Wang said in a memo obtained by Axios. TBD Lab, which is tasked with “developing the next generation” of the company’s large language models, was reportedly spared from the layoffs.
The company also reportedly encouraged the employees affected by the layoffs to apply for other open positions within the company, with Wang writing, “This is a talented group of individuals, and we need their skills in other parts of the company.” No word on whether there were efforts to move people into those roles before telling them to put their belongings in a box.
The restructuring is just the latest example of Meta desperately playing catch-up in the AI race. Earlier this year, the company made waves with a hiring spree that saw it throw massive, multi-million dollar paydays at top talent in an effort to poach them from its rivals. It succeeded in luring them away, but hasn’t necessarily figured out what comes next. Some recipients of those big signing bonuses threatened to leave within weeks of joining the company, according to the Financial Times, presumably over the lack of direction within the company. Others did dip, reportedly including people who had been with Meta for years.
Zuck’s company has seemingly yet to figure out what the shape of its AI operation should be. In addition to shelling out NBA max contract-sized payouts, the company poured $15 billion into Scale to get the company’s talent and infrastructure. Since absorbing all that, it has failed to figure out what to do with it. It announced its Superintelligence initiative first to unify its efforts in the AI space, but broke it up into multiple divisions within a matter of weeks. In the meantime, it looks like it’s the employees that Meta isn’t spending millions of dollars on who will be penalized for organizational incompetence.
Prince Harry and his wife Meghan have joined prominent computer scientists, economists, artists, evangelical Christian leaders and American conservative commentators Steve Bannon and Glenn Beck to call for a ban on AI “superintelligence” they say could threaten humanity.
The letter, released Wednesday by a politically and geographically diverse group of public figures, is squarely aimed at tech giants like Google, OpenAI and Meta Platforms that are racing each other to build a form of artificial intelligence designed to surpass humans at many tasks.
The 30-word statement says, “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
Prince Harry added in a personal note that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.”
Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex.
Prince Harry and Meghan in August 2024
“This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”
Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.
But the list also has some surprises, including Bannon and Beck, in an attempt by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Trump’s Make America Great Again movement even as Mr. Trump’s White House staff has sought to reduce limits on AI development in the U.S.
Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; the former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama.
Former Irish President Mary Robinson and several British and European parliamentarians signed, as did actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation.
Caution urged
“Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI’s board of directors before the upheaval that led to CEO Sam Altman’s temporary ouster in 2023. “But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that.”
The letter is likely to provoke ongoing debates within the AI research community about the likelihood of superhuman AI, the technical paths to reach it and how dangerous it could be.
“In the past, it’s mostly been the nerds versus the nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I feel what we’re really seeing here is how the criticism has gone very mainstream.”
Labeling is complicating the discourse
Confounding the broader debates is that the same companies that are striving toward what some call superintelligence and others call artificial general intelligence, or AGI, are also sometimes inflating the capabilities of their products, which can make them more marketable and have contributed to concerns about an AI bubble. OpenAI was recently met with ridicule from mathematicians and AI scientists when one of its researchers claimed ChatGPT had figured out unsolved math problems, when what it really did was find and summarize work that was already online.
“There’s a ton of stuff that’s overhyped and you need to be careful as an investor, but that doesn’t change the fact that – zooming out – AI has gone much faster in the last four years than most people predicted,” Tegmark said.
Tegmark’s group was also behind a March 2023 letter – still in the dawn of a commercial AI boom – that called on tech giants to temporarily pause the development of more powerful AI models. None of the major AI companies heeded that call. And the 2023 letter’s most prominent signatory, Elon Musk, was at the same time quietly founding his own AI startup to compete with the very companies he wanted to pause for six months.
Asked if he reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn’t expect them to sign.
“I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”
The discourse over AI regulation is heating up and spilling out on social media.
White House crypto and AI czar David Sacks and billionaire LinkedIn co-founder Reid Hoffman exchanged barbs after Hoffman expressed his support for Anthropic’s approach to AI innovation and safety in a thread posted to X on Monday.
“The leading funder of lawfare and dirty tricks against President Trump wants you to know that ‘Anthropic is one of the good guys.’ Thanks for clarifying that. All we needed to know,” Sacks posted on social media platform X.
Hoffman, who is also a major Democratic donor and AI optimist, responded minutes later. He accused Sacks of not actually reading the thread in which he advocates for “a light-touch regulatory landscape that prioritizes innovation and enables new players to compete on level playing fields.” He also referenced Microsoft, Google and OpenAI as “trying to deploy AI the right way.”
“When you are ready to have a professional conversation about AI’s impact on America, I’m here to chat,” Hoffman wrote. “Also: crying ‘lawfare and dirty tricks’ is particularly rich, given the Trump Administration’s recent actions.”
In a wide-ranging conversation prior to the social media spat (and on the heels of an event called Entrepreneurs First Demo Day in San Francisco), Hoffman spoke to Inc. about his approach to AI regulation, describing it as “iterative deployment and development,” rather than preemptive, fear-based rulemaking. He compared it to how motor vehicles preceded the introduction and mandate of seatbelts.
“Let’s limit the regulatory stuff to transparency, monitoring, accountability, to get a good sense of what’s actually going on, and then only impose when we know that there’s something potentially catastrophic,” he says.
Some critics worry, however, that lawmakers are not informed enough to craft meaningful regulations for technology that is changing as rapidly as AI, whereas others blame regulatory inaction on lobbying and campaign contributions. Hoffman says that he believes frontier AI labs can help govern themselves.
“When I was on the board of OpenAI, part of what we were doing was trying to make sure all the top labs were talking to each other about how to do safety the right way, but it grew more and more tense with regulators,” he says.
“It’d be useful to have some kinds of cross-collaboration on what is good alignment, what is good safety,” he adds.
Hoffman’s uneasy relationship with the Trump administration precedes the October X feud. In late September, Trump mentioned Hoffman as a possible target of a probe along with George Soros, after a Reuters reporter asked him who he might investigate in connection with domestic terrorism, Reuters reported. Trump was signing a memorandum meant to crack down on domestic terrorism and political violence several days after he signed an executive order designating anti-fascism or “Antifa” a domestic terrorism organization. Both Soros and Hoffman are substantial donors to the Democratic Party, and Hoffman also helped to fund E. Jean Carroll’s lawsuit against the president through a nonprofit, CNBC reported.
Hoffman tells Inc. that these developments have not changed his politics, although he has been “careful about trying to fund stuff very directly.”
Hoffman describes himself as “very pro-American society, very pro-American prosperity and business.”
“As far as I’m aware,” he says, “Antifa is a fictional organization and I certainly would never have deliberately funded anything that would support domestic terrorism.”
Hoffman also says he has not backed pro-AI super PACs, two of which emerged in one week in September to support AI-friendly politicians regardless of political affiliation, The New York Times reported. Tech titans have also been spotted hobnobbing with the president, including at a September dinner at the White House. Executives including Meta’s Mark Zuckerberg, Apple’s Tim Cook and Microsoft’s Bill Gates reportedly discussed various AI-related investments and educational initiatives, while also praising the president. Hoffman says the fawning “could be a little silly,” but says he believes business leaders do have a role to play in U.S. politics.
“Especially in democracies, it’s very important for all business leaders to be in collaboration [and] discussion with the elected leaders,” Hoffman says. “Technology sets the drumbeat about what happens with society, what happens with industries and so forth, and so I think that dialog is extremely important.”
Hoffman has himself co-founded two AI-powered startups in recent years. He co-founded Inflection AI together with Mustafa Suleyman and Karén Simonyan in 2022, to create a more empathetic large language model. The company pivoted in 2024 after Microsoft paid a fee to license its technology and hired away much of its top talent. And earlier this year, he launched a new venture, Manas AI, to leverage AI to cut down on the time and costs inherent to therapeutic drug discovery.
Warning: Some subject matter is disturbing. Instagram users around the world opened the app one day back in February and saw their feeds suddenly filled with graphic, violent videos. Its parent company, Meta, called it an “error” that’s now been fixed. But a CBS News investigation finds that violent content remains pervasive on Instagram reels. CBS News’ Ash-har Quraishi and Chris Hacker report.
One year after launching teen accounts for Instagram, Meta is expanding the program to Facebook and Messenger. The company said the move is part of its ongoing effort to keep kids safer online.
With teen accounts, users under 18 are automatically enrolled with built-in protections.
Meta says 97% of teens under 16 are staying within those restrictions.
The company also highlights features such as sleep mode and supervision tools, which let parents set daily time limits and monitor activity.
“Teen accounts are really meant to respond to some of the top concerns that we’ve heard from parents,” Jennifer Hanley, Meta’s North American head of safety policy, told WTOP in September.
The accounts ensure teens under 16 need their parents’ permission to change the restrictions, according to Hanley. Among the offerings are tools that keep kids from engaging on the platforms for long periods.
“After 60 minutes, a teen in the teen account gets a notification encouraging them to leave the platform,” Hanley said.
But not everyone is convinced the tools are helping. A report from Cybersecurity for Democracy labeled 64% of the safety tools “red” because they fell short.
The report’s authors, which included a former Facebook employee, said the tools were rated that way because they were either “no longer available or ineffective.”
The report also warned that teens still encounter harmful “rabbit holes,” including imagery of self-harm.
Hanley said Meta disagrees with the report and pushed back on the findings.
“We’ve been overwhelmingly hearing great things from parents,” she said. “We know that teens are spending less time on our platforms, they’re seeing less sensitive content and they’re having less unwanted contact as a result of being in teen accounts.”
Meta said it remains open to feedback and continues to improve its safety tools.
“We’re always open to constructive feedback,” Hanley said.
PG-13 content guidelines introduced
After the September interview with WTOP, Meta announced an update to teen accounts.
The tech company said Instagram will now guide teen content using PG-13 movie ratings by default. That means content seen by teens will be similar to PG-13 movies and teens won’t be able to opt out without a parent’s permission, according to Meta.
Parents who want more control can choose a stricter setting, Meta said, and they’ll also have new ways to report content they think teens shouldn’t see.
In a blog post, Meta called this “the most significant update” since teen accounts launched, saying it was shaped by feedback from thousands of parents worldwide.
The company also said it will use age prediction technology to place teens into protections even if they lie about their age when signing up.
Meta acknowledged in the post that “no system is perfect,” but said it’s committed to improving and keeping age-inappropriate content away from teens.
Support for schools added
Hanley also said Meta is expanding its efforts to help schools.
Through its School Partnership Program, middle and high schools in the U.S. can sign up to get educational resources and tools to report harmful content more easily. Schools that enroll receive a verified badge and access to expedited content review.
Meta said educators are often in the best position to spot issues such as bullying, and the program is designed to help them flag and address those concerns more effectively.
There are so many times when you’re just running and pass something beautiful. It’s so easy to just say, “Hey Meta, start taking video” and just get a quick clip as you happen to be zipping past. You can also customize the Action button to pick different filming modes, like slow motion or hyperlapse.
The Garmin integration is also designed to address your social media needs. Yes, it syncs with Meta AI, allowing the glasses to tell you if you’re hitting your target pace or HR zones—something I don’t think you really need if you’re already wearing a beeping, buzzing Garmin on your wrist. What the watch is really there for is to trigger the camera’s autocapture at key moments in your workout, so you can put together highlight reels and overlay your Garmin stats on top afterwards.
That this is a device for social media fitness is also reflected in the filming limits: you can only record 30-second, 1-minute, 3-minute, and 5-minute clips. Meta informs me that most people usually just keep it to 30-second video clips, all the better for TikToks and Reels. You can also set the clips to auto-import, so they’re already in your Photos library when you go back to review and post on Instagram.
I’m private on Strava; I don’t really need anyone to witness my leisurely 10-minute mile trail runs. But every running influencer who is filming “Mile 1!” all the way through “Mile 26.2!” of their latest marathon is going to love these.
Outside of the fitness stuff, I do think the Meta AI assistant is kind of fun. I have a few friends who can identify plants and animals as we’re hiking. Meta AI can do that on a basic level, even if it’s not up to pinpointing specific varietals. I do think it’s a bit of a superpower to be able to identify if you’re not sure if those flowers are zinnias or dahlias as you pass. Nota bene: I would not ask Meta AI or any other chatbot super personal questions. I would also go into Settings, Data & Privacy, and Remove All Public Vibes (ew!) because I find everything about Meta AI as a social media platform to be (double ew!) gross, but that’s just me.
New data indicates that use of Meta AI’s mobile app for iOS and Android has seen a significant increase. According to a new analysis from market intelligence provider Similarweb, the app’s daily active users across both platforms jumped to 2.7 million as of October 17, up from around 775,000 just four weeks ago. In addition, Meta AI’s app installs are also up, reaching 300,000 new downloads per day, compared with under 200,000 daily downloads a few weeks ago.
For comparison, Meta AI’s app had just 4,000 daily downloads a year ago, on October 17, 2024.
The firm says it hasn’t seen any meaningful correlation in either search or advertising estimates, but notes Meta could be running Facebook or Instagram promotions that wouldn’t be captured in its model.
However, there’s also another possible explanation for the sharp rise: the launch of Meta’s new Vibes feed in September, which introduced short-form AI-generated videos to the Meta AI mobile app.
Meta AI introduced the Vibes feed on September 25, which correlates with the sharp increase in the app’s daily active users on iOS and Android, as seen in the chart below.
Recently, OpenAI’s video generator Sora drew headlines as its app reached the top of the App Store when users rushed to try the new technology. However, Meta AI could have benefited from this launch as well. While Similarweb says its data doesn’t prove cause and effect, it’s possible that the attention to Sora drove some people to try Meta AI, in order to compare the two experiences.
Another possibility is that Meta could be benefiting from Sora’s invite-only status. That is, those who couldn’t try out the OpenAI app may have looked for an alternative to experiment with. This would be an interesting explanation, too, as it suggests that OpenAI’s decision to gatekeep Sora may have directly boosted its rivals.
As of October 17, Meta AI’s app had seen a 15.58% increase in daily active users worldwide, while ChatGPT, Grok, and Perplexity saw declines of 3.51%, 7.35%, and 2.29%, respectively.
US District Judge Phyllis Hamilton has reduced the damages Meta is getting from the NSO Group from $167 million to $4 million, but she has also ordered the Israeli spyware maker to stop targeting WhatsApp. If you’ll recall, Meta sued the NSO Group in 2019 over its Pegasus spyware, which it said was used to spy on 1,400 people from 20 countries, including journalists and human rights activists. Meta said at the time that Pegasus can infect targets’ devices even without their participation by sending WhatsApp text messages laced with malicious code. Even a missed call is enough to infect somebody’s device.
According to Courthouse News Service, Hamilton reduced the damages because they had to conform to a legal framework that requires damages to remain proportionate. However, she has also handed down a permanent injunction on the NSO Group’s efforts to break into WhatsApp. In her decision, she took note of statements made by NSO’s lawyers and its own CEO revealing that it hasn’t stopped collecting WhatsApp messages and trying to get around the messaging app’s security measures. The defendants previously said that the injunction Meta was requesting would “put NSO’s entire enterprise at risk” and “force NSO out of business,” since WhatsApp is one of the Pegasus spyware’s main ways to infect targets’ devices.
“Today’s ruling bans spyware maker NSO from ever targeting WhatsApp and our global users again,” said Will Cathcart, Head of WhatsApp. “We applaud this decision that comes after six years of litigation to hold NSO accountable for targeting members of civil society. It sets an important precedent that there are serious consequences to attacking an American company.”
Hamilton wrote that the proposed injunction requires the Israeli company to delete and destroy computer code related to Meta’s platforms, and that she concluded that the provision is “necessary to prevent future violations, especially given the undetectable nature of defendants’ technology.” It’s not quite clear how Meta will ensure that the NSO Group doesn’t use WhatsApp to infect its users’ devices again. Notably, the NSO Group was recently acquired by an American investment group that put tens of millions of dollars into the company to take controlling ownership.
A new digital tasks program will be piloted for U.S. Uber drivers this year. Jakub Porzycki/NurPhoto via Getty Images
The life of an Uber driver often involves stretches of downtime—waiting on ride requests or charging an electric vehicle’s battery. To make the most of those idle moments, Uber is launching a pilot program that allows drivers and couriers to make extra money by completing digital tasks that train A.I. models for Uber’s enterprise clients.
“Drivers have asked for more ways to earn, even when they’re not on the road,” Uber CEO Dara Khosrowshahi said in a statement. To address this request, drivers will soon be able to opt in for quick, in-app tasks ranging from uploading documents—such as restaurant menus or receipts—to providing everyday images and recording audio samples.
The pilot will launch later this fall as part of Uber’s AI Solutions Group, a division created last November to offer data-labeling services to other businesses. Its client list includes Aurora, a self-driving software developer; Niantic, the company behind Pokémon Go; and Luma AI, a text-to-video generator. Until now, Uber AI Solutions has relied on independent gig workers to complete data-labeling tasks. The new program shifts those assignments to Uber’s own network of drivers and couriers, giving them access to additional income streams directly through the Driver app.
In addition to the upcoming U.S. launch, Uber has already been testing the initiative in more than 12 cities in India. “Until now, these tasks were completed by independent contractors outside the app,” said Megha Yethadka, the global head of Uber AI Solutions, in a September LinkedIn post describing the Indian pilot as “very promising.”
Before accepting a task, drivers will be able to see the expected pay rate and estimated completion time. They can only take on digital tasks while not actively signed in to drive or deliver for Uber.
Khosrowshahi first discussed Uber’s plans to introduce digital tasks at the Bloomberg Tech Summit in June, where he laid out a strategy to expand income opportunities for drivers and couriers over the next five to ten years. He described the data-labeling effort as a form of “knowledge work” emerging from the A.I. era and a way to provide new job options even as automation and autonomous vehicles threaten traditional driving roles.
Uber announced the digital tasks initiative yesterday (Oct. 16) during its annual Only on Uber event, which highlights new features inspired by driver and courier feedback. Other updates unveiled at the event included a new heat map tool showing demand hotspots, a rider rating filter that allows drivers to screen trip requests, and a delayed-ride guarantee offering extra pay when trips take longer than estimated.
Uber also announced an expansion of its women rider preference feature, which lets female drivers accept rides only from women passengers—a setting that has been used for more than 150 million trips and is activated weekly by one in four female drivers.
I’ve been wearing the $800 Meta Ray-Ban Display glasses daily for ten days and I’m still a bit conflicted. On one hand, I’m still not entirely comfortable with how they look. I’ve worn them on the bus, at the office, on walks around my neighborhood and during hangouts with friends. Each time, I’m very aware that I probably look a bit strange.
On the other hand, there’s a lot I really like about using these glasses. The built-in display has helped me look at my phone less throughout the day. The neural band feels more innovative than any wrist-based device I’ve tried. Together, it feels like a significant milestone for smart glasses overall. But it’s also very much a first-generation device with some issues that still need to be worked out.
Meta
An exciting first-gen product, if you can get past the thick frames.
Pros
Display is bright, clear and doesn’t feel overwhelming
Ability to preview and zoom in with the camera makes it way easier to frame shots
Visual feedback for Meta AI prompts is surprisingly helpful
Neural band is very accurate and reduces reliance on voice commands
Cons
Frames are way too thick for most people’s comfort
To once again state the obvious: The frames are extremely chunky and too wide for my face. The dark black frames I tried for this review unfortunately accentuate the extra thickness. I won’t pretend it’s my best look and I did feel a bit self-conscious at times wearing these in public. Meta also makes a light brown “sand” color that I tried at the Connect event, and I think that color is a bit more flattering, even if the frames are just as oversized. (Sidenote: Smart glasses companies, please, please make your frames available in something other than black!)
But everyone has a different face shape, skin tone and general ability to “pull off” what one of my friends charitably described as “chunky statement glasses.” What looks not-great on my face may look good on someone else. I really wish Meta could have squeezed this tech into slightly smaller frames, but I did get more used to the look the more I wore them. Overall, I do think the size is a reasonable tradeoff for a first-generation product that’s pretty clearly aimed at early adopters.
Here’s how they look in the lighter “sand” color.
(Karissa Bell for Engadget)
The reason the glasses are so thick compared with Meta’s other frames is because there are a lot of extra components to power the display, including a mini projector and waveguide. And, at 69 grams, the display glasses are noticeably heavier. I didn’t find it particularly uncomfortable at first, but there is a noticeable pressure after six or seven hours of wear. Plus, the extra weight and width also made them consistently slide down my nose. I’m not sure I’d feel comfortable wearing these on a bike ride or a jog as I’d worry about them falling off.
While I tested these, I was very interested to get reactions from friends and family. I didn’t get many positive comments about how they looked on my face, though a few particularly generous colleagues assured me I was “pulling them off.” But seeing people’s reactions as soon as the display activated was another matter. Almost everyone has had the same initial reaction: “whoa.”
Quality display with some limitations
As I discussed in my initial impressions, these glasses have a monocular display on the right side, so it doesn’t offer the kind of immersive AR I experienced with the Orion prototype last year. You have to look slightly up and to the right to focus on the full-color display. It’s impressively bright and clear, but doesn’t overtake your vision.
At 20 degrees, the field of view is small, but it never felt like a limitation. Because the content you see isn’t meant to be immersive, it never feels like what’s on the display is being cut off or like you have to adjust where you’re looking to properly see it. The display itself has three main menus: an app launcher, a kind of home screen where you can access Meta AI and view notifications, and a settings page for adjusting brightness, volume and other preferences.
The display is in the right lens.
(Karissa Bell for Engadget)
For now, there are only a handful of Meta-created “apps” available. You can check your Instagram, WhatsApp and Messenger inboxes and chat with Meta AI. There’s also a simple maps app for walking navigation, a music/audio player, camera and live translation and captioning features. There’s also a mini puzzle game called “Hypertrail.”
One of my favorite integrations was the ability to check Instagram DMs. Not only can you quickly read and respond to messages, you can watch Reels sent by your friends. While the video quality isn’t as high as what you’d see on your phone, there’s something very cool about quickly watching a clip without having to pull out your phone. Meta is also working on a standalone Reels experience that I’m very much looking forward to.
I also enjoyed being able to view media sent in my family group chats on WhatsApp. I often would end up revisiting the photos and videos once I pulled out my phone, but being able to instantly see these messages as they came in tickled whatever part of my brain responds to instant gratification.
There’s some impressive tech inside those thick frames.
(Karissa Bell for Engadget)
The display also solves one of my biggest complaints with Meta’s other smart glasses: that it’s really difficult to frame photos. When you open the camera app on the display model, you can see a preview of the photo and even use a gesture to zoom in to properly frame your shot. Similarly, if you’re on a WhatsApp video call you can see both the other person’s video as well as a small preview of your own like you would on your phone’s screen. It’s a cool trick but the small display felt too cramped for a proper video call. People I used this with also told me that my video feed had some quality issues despite being on Wi-Fi.
The glasses’ live captioning and translation features are probably the best examples of Meta bringing its existing AI features into the display. I’ve written before about how Meta AI’s translation abilities are one of my favorite features of the Ray-Ban smart glasses. Live translation on the display is even better, because it delivers a real-time text feed of what the person in front of you is saying. I tried it out with my husband, a native Spanish speaker, and it was even more natural than the non-display glasses because I didn’t have to pause and wait for the audio to relay what he was saying. It still wasn’t an exactly perfect translation, and there were still a few occasions when it didn’t catch everything he said, but it made the process so much simpler overall.
Likewise, live captioning transcribes conversations in real time into a similar text feed. I’ve found that it’s a cool way to demo these glasses’ capabilities, but I haven’t yet found an occasion to use this in anything other than a demo. However, I still think it could be useful as an accessibility aid for anyone who has trouble hearing or processing audio.
Another feature that’s useful for travel is walking navigation. Dictate an address or location (you can say something like “take me to the closest Starbucks”) and the glasses’ display will guide you on your route. The first time I tried this was the roughly 10-minute walk from my bus stop to Yahoo’s San Francisco office. The route only required two turns, but it didn’t quite work. My glasses confidently navigated me to an alleyway behind the office building rather than the entrance. These kinds of mishaps happen with lots of mapping tools — Meta’s maps rely on data from OpenStreetMap and Overture — but it was a good reminder that it’s still early days for this product.
I don’t use Meta AI a ton on any of my smart glasses, but having a bit of visual feedback for these interactions was a nice change. I retain information much better from reading than listening, so seeing text-based output to my queries felt a lot more helpful. It’s also nice that for longer responses from the assistant, you can stop the audio playback and swipe through informational cards instead.
Meta AI on the glasses’ display delivers information in a card-like interface.
(Meta)
While cooking dinner one night, I asked for a quick recipe for teriyaki salmon and Meta AI supplied what seemed like a passable recipe onto the display. The only drawback was the display goes to sleep pretty quickly unless you continue to interact with the content you’re seeing, so the recipe I liked disappeared before I could actually attempt it. (You can view your Meta AI history in the Meta AI app if you really want to revisit something.)
My main complaint is that I want to be able to do much more with the display. Messaging app integrations are nice, but I wish the display worked with more of the apps on my phone. When it worked best, I was happy to be able to view and dismiss messaging notifications without having to touch my phone; I just wish it worked with all my phone’s notifications.
There are also some frustrating limitations on sending and receiving texts. For example, there’s no simple way to take a photo on your glasses and text it to a friend with the glasses. You have to wait for the glasses to send a “preview” of your message to your phone and then manually send the text. Or, you can opt in to Meta’s cloud services and send the photo immediately as a link, but I’m not sure many of my friends would readily open a “media.meta.com” URL.
The glasses also don’t really support non-WhatsApp group chats, at least on iOS. You can receive messages sent in group chats, but there’s no indication the message originated in a group thread. And, it’s impossible to reply in the same thread; instead, replies are sent directly to the person who texted, which can get confusing if you’re not checking your phone. It was also a little annoying that reading and even replying to texts from my glasses wouldn’t mark the text as read in my phone’s inbox. Meta blames all this on Apple’s iOS restrictions, and says it’s hoping to work with the company to improve the experience. The company tells me that group messaging should work normally for people with Android devices and that there is also a dedicated inbox for checking texts on the glasses. I haven’t tested this out yet.
The band + battery life
The glasses are controlled using Meta’s Neural Band, which can translate subtle gestures like finger taps into actions on the display. Because the band relies on electromyography (EMG), you do need a fairly snug fit for it to work properly. I didn’t find it uncomfortable, but, like the glasses, I don’t love how it looks as a daily accessory. It also requires daily charging if you wear the glasses all day.
But the band does work surprisingly well. In more than a week, it almost never missed a gesture, and it never falsely registered a gesture, despite my efforts to confuse it by fidgeting or rubbing my fingers together. The gestures themselves are also pretty intuitive and don’t take long to get used to: double tapping your thumb and middle fingers wakes up or puts the display to sleep, single taps of your index and middle fingers allow you to select an item or go back, and swiping your thumb along the side of your index finger lets you navigate around the display. There are a few others, but those are the ones I used most often.
The Meta Neural Band requires a snug fit to work properly.
(Karissa Bell for Engadget)
Each time you make a gesture, the band emits a small vibration so you get a bit of haptic feedback letting you know it registered. I’ve used hand tracking-based navigation in various VR, AR and mixed reality devices and I’ve always felt a bit goofy waving my hands around. But the neural band gestures work when your hand is by your side or in your pocket.
The other major drawback of these glasses is that heavy use of the display drains the battery pretty quickly. Meta says the Ray-Ban Display’s battery can go about six hours on a single charge, but it really depends on how much you’re using the display. With very limited use, I was able to stretch the battery to about seven hours, but if you’re doing display-intensive tasks like video calling or live translation, it will die much, much more quickly.
The Meta Ray-Ban Display glasses, charging case and neural band.
(Karissa Bell for Engadget)
The glasses do come with a charging case that can deliver a few extra charges on-the-go, but I was a bit surprised at how often I had to recharge the case. With my normal Ray-Ban Meta glasses I can go several days without topping up the charging case, but with the Meta Ray-Ban Display case, I’m charging it at least every other day.
Privacy and safety
Whenever I write or post on social media about a pair of Meta-branded glasses, I inevitably hear from people concerned about the privacy implications of these devices. As I wrote in my recent review of Meta’s second-gen Ray-Ban glasses, I share a lot of these concerns. Meta has made subtle but meaningful changes to its glasses’ privacy policy over the last year, and its track record suggests these devices will inevitably scoop up more of our data over time.
In terms of privacy implications of the display-enabled glasses, there isn’t a meaningful difference compared to their counterparts. Meta’s policies are the same for all its wearables. I suppose you could use live translation to surreptitiously eavesdrop on a conversation you wouldn’t typically understand, though that’s technically possible with Meta’s other glasses too. And the addition of a wrist-based controller means taking photos is a bit less obvious, but there’s still an LED indicator that lights up when the camera is on.
The neural band allows you to snap photos without touching the capture button or using a voice command.
(Karissa Bell for Engadget)
I have been surprised at how many people have asked me if these glasses have some kind of facial recognition abilities. I’m not sure if that’s a sign of people’s general distrust of Meta, or an assumption based on seeing similar glasses in sci-fi flicks, but I do think it’s telling. (They don’t, to be clear. Meta currently only uses facial recognition for two safety-related features on Facebook and Instagram.) Meta hasn’t done much to earn people’s trust when it comes to privacy, and I wish the company would use its growing wearables business to try to prove otherwise.
On a more practical level, I have some safety concerns. The display didn’t hinder my situational awareness while walking, but I could see how it might for others. And I’m definitely not comfortable using the display while driving. Meta does have an audio-only “driving detection” setting that can automatically kick in when you’re traveling in a car, but the feature is optional, which seems potentially problematic.
Should you buy these?
In short: probably not. As much as I’ve been genuinely impressed with Meta’s display tech, I don’t think these glasses make sense for most people right now. And, at $800, the Meta Ray-Ban Display glasses are more than twice as much as the company’s very good second-generation Ray-Ban glasses, which come in a wide range of much more normal-looking frame styles and colors.
The Meta Ray-Ban Display glasses, on the other hand, still look very much like a first-gen product. There are some really compelling use cases for the display, but its functionality is limited. The glasses are also too thick and bulky for what’s meant to be an everyday accessory. At the end of the day, most people want glasses that make them look good. There’s also the fact that right now, these glasses are somewhat difficult to actually buy. They are only available at a handful of physical retailers, which currently have a very limited supply. Meta is also requiring would-be buyers to schedule demo appointments in order to buy, though some stores — like the LensCrafters where I bought my pair — aren’t enforcing this.
Still, there’s a lot to be excited about. Watching people’s reactions to trying these has been almost as much fun as using them myself. Meta also has a solid lineup of new features already in the works, including a standalone Reels app, a teleprompter and gesture-based handwriting for message replies. If you’re already all-in on smart glasses or, like me, you’ve been patiently waiting for glasses with a high quality, usable display, then the Meta Ray-Ban Display glasses are worth the investment now — as long as you can accept the thick frames.
Update, October 17, 2025, 3:42PM PT: Added more information about group text functionality on Android.
A Facebook feature that gives Meta AI the ability to suggest edits to photos that are stored on your phone’s camera roll but haven’t yet been shared is now rolling out to all users in the U.S. and Canada. The company announced on Friday that users can choose to opt in to receive these sharing suggestions, which will then prompt them to post photos to their Facebook Feed and Stories with the AI edits.
First launched as a test over the summer, Facebook’s app pops up a permission dialog box requesting access to “allow cloud processing” so users can get “creative ideas made for you from your camera roll.” This box explains that the feature could offer ideas like collages, recaps, AI restyling, birthday themes, and more for the end user.
Image Credits:screenshot of Facebook’s app, June 2025
For the AI to work, Facebook’s app uploads images from your device to its cloud on an ongoing basis, which allows Meta’s AI to make its suggested edits. Meta says users’ media will not be used for ad targeting purposes, and it won’t use the media to improve its AI systems, unless the user takes the step of editing the media or sharing the edited photos with friends or others on its social network.
The feature can be disabled at any time.
Though Meta may not train its AI on all your photos, when you agree to Meta’s AI Terms of Service, you permit your media and facial features to be analyzed by AI. The terms say that, by processing your photos, Meta has the ability to “summarize image contents, modify images, and generate new content based on the image.”
The company also uses the date and presence of people or objects in your photos to craft its creative ideas, giving Meta a lot more information about you, your relationships, and your life.
Plus, giving Meta access to photos you haven’t yet shared on Meta’s platforms could give the company an advantage in the AI race by providing a wealth of user data, behavioral insights, and ideas for new AI features.
Settings for the feature are found under the Preferences section of Facebook’s Settings. On the “Camera roll sharing suggestions” page, there are two toggles. The first lets Facebook suggest photos from your camera roll when browsing the app. The second is where you can enable or disable the “cloud processing,” which lets Meta make AI images using your camera roll photos.
Meta has been leveraging its position as a dominant social network to improve its AI technology and had previously announced it would train its image recognition AI on publicly shared data, including posts and comments on Facebook and Instagram. (EU users had until May 27, 2025, to opt out.) Last year, it also said it would train its AI on images that Ray-Ban Meta users asked the device to analyze.
Meta on Friday previewed its upcoming parental control features for teens’ conversations with AI characters on its platforms. The features, which will be rolled out next year, include the ability to block certain characters and monitor conversation topics.
Starting in the coming months, parents will be able to turn off chats with AI characters entirely for teens. This action won’t block access to the Meta AI chatbot — the company’s general-purpose AI chatbot — which will only discuss age-appropriate content.
Parents will also be able to turn off chats with individual characters if they prefer more selective control. Plus, they will receive information about the topics teens are discussing with AI characters and Meta AI.
The company said it plans to roll out these controls on Instagram early next year. They will be available in English in the U.S., U.K., Canada, and Australia.
“We recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we’re committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI,” the company said in a post written by Instagram head Adam Mosseri and newly appointed Meta AI head Alexandr Wang.
The company added that currently, teens are only allowed to interact with a limited number of characters that follow age-appropriate content guidelines. Parents can also set time limits on teens’ interactions with AI characters. Earlier this year, Instagram announced that it is using AI to identify users attempting to skirt age limits by faking their age on the app.
In the past few weeks, multiple platforms, including OpenAI, Meta, and YouTube, have released tools and controls focused on teen safety. These changes come amid growing concerns about the impact of social media on teen mental health and lawsuits against AI companies that allege they played a part in teen suicides.
Meta is working on new supervision controls that will allow parents to cut off their teens’ access to AI chatbots on its platforms completely. While the tools can remove teens’ ability to engage AI characters in one-on-one chats, they’ll still be able to access the general Meta AI chatbot. If parents don’t want to block their teens from being able to access AI bots altogether, they can also just block specific AI characters. In addition, parents will be able to get insights into the topics their children are discussing with Meta’s AI bots. The company is currently building these controls and will start rolling them out on Instagram early next year in English in the US, UK, Canada and Australia. Take note that the images above are just illustrations, and the tools’ interfaces could still change.
The company has been under fire since an internal Meta document was leaked a few months ago, showing that it allowed its chatbots to have “sensual” conversations with children. In one example, a Meta chatbot told a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.” The US Attorneys General of 44 jurisdictions urged companies to protect children “from exploitation by predatory artificial intelligence products” after that information came out. The Senate Judiciary Subcommittee on Crime and Counterterrorism, chaired by Senator Josh Hawley (R-MO), will investigate the company, as well.
Shortly after the internal documents leaked, Meta started retraining its AI and added new protections to prevent younger users from accessing user-made AI characters that might engage in inappropriate conversations. It also introduced age-appropriate protections so that its AIs will give teens responses guided by PG-13 movie ratings. Plus, it now only allows teens to interact with a limited group of AI characters, focused on age-appropriate topics.
Meta is shutting down its Messenger app for macOS and Windows and pushing users to the web. Meta confirmed over email to Engadget that the app will be fully shut down on December 15, after which the easiest ways to access Messenger chats when you’re not on your phone will be the Facebook app on Windows, or the Facebook and Messenger websites.
The company hasn’t provided an explanation for why it’s abandoning its desktop Messenger apps, but Meta’s support article does say that users will receive a notification informing them of the shutdown, and will be blocked from accessing the app after December 15.
In order for your chats to be saved going forward, the company says you’ll have to turn on secure storage and add a pin code to your account. To make sure your chats will be archived:
Click on the gear icon above your profile picture.
Click on Privacy & Safety, and then End-to-end encrypted chats.
Click on Message storage, and then make sure Turn on secure storage is toggled on.
Meta officially cut Messenger out of Facebook in 2014 to create a focused messaging experience separate from the tangle of features the social media platform offered at the time. The company later tried to connect Messenger and Instagram Direct Messaging into one communication platform, but backed away from the idea in 2023. Rather than Meta’s interest in messaging suddenly waning, abandoning the desktop apps likely reflects the fact that most people prefer to use the company’s mobile apps or websites.
Instagram is introducing its biggest update yet to online safety for young users by applying PG-13 content guidelines to all teen accounts, Meta announced this week.
Under the new regime, under-18s will continue being blocked from seeing sexually suggestive or explicitly violent content as before, but Meta said that the app will now go further by avoiding recommending posts containing strong language, risky stunts or anything that could “encourage potentially harmful behaviors.”
Newsweek reached out to Meta’s press team via email.
Instagram will also block searches on mature topics, such as “alcohol” or “gore”; penalize accounts that repeatedly post age-inappropriate content; and extend the curbs to Instagram’s AI features. Importantly, teens under 18 will be automatically placed in the 13+ mode and cannot opt out without parental permission.
For parents seeking greater controls, Meta is introducing a stricter “Limited Content” mode that further restricts teen access and disables comment interactions.
These changes will begin rolling out this week in Canada, the U.S., the U.K. and Australia, with global rollout scheduled by end of 2025, but campaigners, parents and tech experts remain deeply skeptical about how effective this shift will be in practice.
Campaigner Concerns
Advocacy groups argue that these revisions are far from sufficient. A recent report by the HEAT Initiative, ParentsTogether and others found that 60 percent of 13- to 15-year-olds had encountered unsafe content or unwanted messages on Instagram in the past six months, despite existing safety tools.
Yaron Litwin, an online safety and AI expert, told Newsweek that enforcement will determine whether these new measures succeed.
Litwin said: “Hopefully, its age prediction model will actually prevent … some children from accessing explicit and dangerous content on their feeds.
“However, that is [a] big if, and in any case, there is much harmful content on social-media platforms, including Instagram, that [is] not obvious enough for filters to catch.”
Meta’s age classification system detects when a user is under 18, even if they claim otherwise. It analyzes signals from their profile and behavior (such as which accounts they follow, what content they engage with and when their account was created) to estimate whether they are likely underage.
“Whether it’s hate speech, glorification of eating disorders, content that is technically compliant although very suggestive, a young Instagram user can still be exposed to much that his or her parents would find objectionable,” Litwin added.
Parental Perspective
Many parents have long struggled to monitor their teens’ online experience. U.K.-based mom Faye McCann is concerned about how the new guidelines will work in practice.
McCann, also a business strategist and social media expert, told Newsweek there is a big gap between what Meta says it’s offering and what teens will actually see.
“I can’t help but feel this is partly a reaction to years of public pressure,” McCann said. “Meta has been criticized relentlessly about teen safety, and this feels like a step in the right direction, but it’s not the full solution parents and campaigners have been asking for.
“I fully understand their intentions, but, right now, it feels more like a box-ticking exercise than a deep commitment to genuinely protecting young people.”
Algorithms vs Real Life
Other experts agreed that moderation—not messaging—is the real challenge.
Miruna Dragomir, the chief marketing officer at Planable, a social-media management platform, said Instagram’s new rating system may make sense to parents, but it doesn’t solve the underlying problem. She added that young users are adept at outsmarting moderation systems.
“People who use social media, especially youth, are very good at getting around limits by using code phrases, trendy lingo, and visual indicators that AI systems have trouble understanding,” Dragomir told Newsweek. “Every time a policy changes, kids come up with new ways to get around it, and they often know more about how to use the platform than adults do.”
Dragomir said that these changes could give parents “a false sense of security.”
“The most honest answer is that these rules are a big step toward making areas safer, but they aren’t the only thing that will work,” she added. “Parents need to be involved in their teens’ online lives on a regular basis instead of just trusting what the site says. The best way to keep teens safe is to use better platform tools and have open family talks on how to think critically and use technology.”
For parents like McCann, transparency is a priority. “I want clear, simple ways to see what my children are being exposed to and control over that exposure,” she said. “That means tools that actually work, not just guidelines on paper. Instagram can set the rules all it wants, but unless they can make them enforceable in the real world, teens will still find a way around them—and that’s where the real risk lies.”
A U.S. Border Patrol video featuring antisemitic lyrics went viral on X on Tuesday after far-right users discovered it had been posted to Facebook and Instagram. The video, which included the lyrics “Jew me” and “kike me,” was deleted from the platforms on Wednesday morning, though it’s not clear whether the offensive content was taken down by Border Patrol or Meta.
The 13-second video appears to have been posted to Instagram in August, but was pinned in the Reels section of the official Border Patrol page, making it more visible to a wider audience. The video only gained widespread attention late Tuesday on X, where far-right extremists celebrated a signal that was clearly intended for them. The Instagram video had 4.3 million views when Gizmodo viewed it Tuesday night.
The audio used in the clip comes from Michael Jackson’s controversial 1996 song “They Don’t Care About Us.” The song includes the lyrics “Jew me, sue me, everybody do me/ Kick me, kike me, don’t you black or white me.”
The lyrics were criticized at the time for being antisemitic, though Jackson defended his words, insisting he didn’t intend for them to be offensive. The singer, who died in 2009, issued an apology and later released an edited version of the song.
The antisemitic Border Patrol video
The video starts with footage of someone adjusting a bodycam before viewers see Border Patrol agents walking around with guns. Another shot shows a truck hauling Border Patrol dune buggies, and then a shot in the desert where a dune buggy kicks up dust behind it.
The video is very short, making it clear that the choice of lyrics was deliberate. Viewers are obviously meant to hear the antisemitic aspects, since the song is more or less the only audio in the 13-second clip. DHS didn’t respond to questions from Gizmodo on Wednesday morning.
Comments on Instagram included people who clearly understood the message of the video as antisemitic. One commenter replied, “based song choice,” which was liked by the Border Patrol account. Another commenter wrote, “if you know, you know.”
Comments from the far-right on X were even more explicit, including “This deserves 6 million likes and shares,” a reference to the number of Jews who died in the Holocaust.
Other commenters on X marveled at how mainstream their far-right and antisemitic ideas were becoming, with one person writing, “This movie is taking a strange turn. It’s strange to me because I never thought I’d see this in the mainstream—it was always underground.”
And while it’s accurate to describe the shift as “strange,” it was entirely predictable after President Donald Trump was inaugurated for a second time in January. Billionaire Elon Musk really kicked off the tone of the era with two Nazi-style salutes. Musk later denied he was making Nazi gestures, but many of his supporters clearly took it as a sign that they could drop the mask. Steve Bannon, a former top advisor to Trump, made the same salute not long after.
Trump himself has also said some extremely antisemitic things, including when he used the term “shylock” at a rally in July.
In fact, there’s an entire Wikipedia page devoted to collecting examples of Trump’s antisemitism.
None of this is new
U.S. Border Patrol is part of the U.S. Department of Homeland Security, which has been posting far-right extremist content since Trump took office for a second time. In a tweet on Tuesday, DHS posted just one word, “Remigrate,” a term popular among the European far right that refers to ethnic cleansing through the deportation of non-white people.
DHS also posted a video that included the words “Save America” in a typeface that’s clearly meant to evoke Nazi-era imagery.
DHS has frequently posted fascist propaganda using copyrighted material without permission, something that sometimes gets the content removed from the major social media platforms.
The people at DHS often know they’re the bad guys, like when they responded to questions from John Oliver’s HBO show by talking about the “heroism” of Darth Vader. The late-night host was asking about a video posted by Gregory Bovino, the new face of anti-immigrant operations in the U.S. thanks to his frequent TV appearances, in which Vader destroys rebel forces labeled with things like “gang member” and “fake news.”
Is a lot of this trolling? Sure, that’s one defense of it. But at some point, you own the words and images that you push into the world. And if you spend all day, every day saying racist and antisemitic things, people have to start taking you at your word.
Not to mention the fact that DHS has real power in the world to upend lives and has no business joking or “trolling” the American people. Agencies under DHS, like ICE, are currently harassing and arresting people for looking Latino. And that often includes American citizens.
The consequences
Ironically, DHS said back in April that the social media accounts of foreign nationals in the country would be screened for “antisemitism.” In reality, DHS was looking for anyone who opposed the war in Gaza, falsely equating such a position with antisemitism. The U.S. State Department announced Tuesday it had canceled the visas of six people who had written negative things about Charlie Kirk.
Antisemitism runs deep in the modern Republican Party. Politico published leaked texts from the Young Republicans on Tuesday, which included messages like “I love Hitler.” Vice President JD Vance defended the texts and dismissed criticism as “pearl-clutching.” And guys like Vance know their audience. They can be dismissed as shitposters, but they’re some of the most vile racists on the planet, and they’re becoming normalized in ways that would’ve been unthinkable even a decade ago.
No Kings
Americans who are opposed to Trump plan to stage nationwide protests on Saturday, Oct. 18, for what’s being dubbed another No Kings rally. Republicans have tried to characterize the upcoming protests as hate marches, falsely insisting they would be full of “Hamas supporters.”
Treasury Secretary Scott Bessent told CNBC on Wednesday that the government hasn’t reopened because of the upcoming demonstrations, a claim that makes no sense whatsoever.
“This crazy No Kings rally this weekend, which is gonna be the farthest left, the hardest core, the most unhinged in the Democratic Party, which is a big title. No Kings equals no paychecks,” said Bessent.
The No Kings rally, which is likely to include a wide variety of Americans who are opposed to Trump’s fascist takeover of the country, has a website that allows people to find their nearest demonstration. It won’t just be the “hardest core,” as Bessent puts it, if past protests are any guide.