Meta currently has lots of priorities Mark Zuckerberg likely never would have imagined back in the early days of Facebook. The company has pivoted from social networking to the metaverse and, most recently, to AI. But somehow, one of its earliest — and most useless — features has not only survived but is apparently getting a revamp. I’m talking, of course, about the poke, which Meta is once again trying to revive.
The company is making the storied feature easier to find by adding pokes back to user profiles in the Facebook app, according to a post it shared on Instagram. You can also track all poking-related activity between you and your friends at facebook.com/pokes. There even appears to be a Snapchat-streak-like aspect, with different emojis appearing based on how many pokes have been exchanged.
Just in case you weren’t on Facebook two decades ago, “poking” was something of a novelty in the early days of the social network. At the time, there weren’t many features for interacting with your friends. You could leave comments on their profile and … you could “poke.” The feature never really did anything, but depending on who it came from, it landed somewhere between creepy and flirty. As Meta notes in its Instagram post, poking never really went away, but it was de-emphasized over the years and has been largely forgotten by users.
But the company has for some reason been trying to get poking to make a comeback for a while now. Meta said last year the feature was “having a moment” and that there had been a 13x spike in pokes after the company began surfacing the feature in the Facebook search bar. Now, it seems Meta is trying to build even more momentum for it, presumably for the current generation of younger Facebook users.
Mark Zuckerberg said earlier this year he wants to bring back more “OG” Facebook features like… being able to find content posted by your actual friends. And it’s hard to get more “OG Facebook” than poking. Meta has also been on a years-long mission to win over “young adults,” so it might see the jokey feature as a way to appeal to a generation used to taking their Snap streak extremely seriously.
The classic feature from Facebook’s early days lets users get a friend’s attention with a virtual nudge of sorts. While the poke fell out of use ages ago, the company has more recently seen an uptick in its use among younger users, which has now prompted it to make the poke a more central part of the Facebook experience.
Now users are able to poke their friends from a new, dedicated button directly on their Facebook profile, which will alert the poke’s recipient through their notifications. In addition, Facebook users can see who poked them and find friends to poke at facebook.com/pokes. On this page, users will be able to track their “poke count” with friends, which grows every time they poke each other. They can also dismiss pokes if they don’t want to reciprocate.
The poke-tracking feature is largely designed to appeal to younger users who have grown up with gamification elements built into their social apps, like Snapchat and TikTok Streaks. These features ostensibly help friends keep track of those they message most, but streaks have come under regulatory scrutiny and have even led to lawsuits because of their addictive nature, as they keep kids hooked on the apps.
By highlighting poke counts and making the poke more prominent on Facebook, Meta wants to create a similar engagement mechanism. As users increase their poke counts with a friend, different icons will appear next to the friend’s name, like a fire emoji or “100,” among others.
This isn’t the first time in recent years that Facebook has tried to revive the poke. In March 2024, the company said it had made it easier for users to find the poking page via search and would make it easier to poke a friend after searching for them. These small changes led to a 13x spike in poking in the month that followed, Meta said at the time.
As for why you’d want to poke someone, that’s up to users to decide. Facebook never explained the purpose of the poke, leaving it open to interpretation. A poke could be a way to catch someone’s attention, flirt, or just annoy them, depending on the user’s intent.
Poke counts may never become as popular as streaks, but adding them is clearly a signal that Meta is looking to boost Facebook engagement.
According to research from Jonathan Haidt, author of “The Anxious Generation,” a book focused on social media’s potential harm to children’s brain development, Snap had known about streaks’ habit-forming nature for years. An article Haidt co-published with Zach Rausch, a senior research scientist at NYU Stern, included quotes from internal documents showing Snap employees discussing how popular streaks were and how effective they were at driving engagement.
Though Facebook today remains a cash cow for Meta’s business, fueling its longer-term bets in areas like AI and metaverse projects, it has long been criticized for failing to appeal to younger users — a demographic that’s been declining, particularly in the U.S. The company has tried to recapture the youth market with various initiatives, including the short-lived, college-only feature Facebook Campus, shuttered in 2022, and more recently, a Gen Z-focused redesign.
In July 2025, the White House released America’s AI Action Plan, a sweeping policy framework asserting that “the United States is in a race to achieve global dominance in artificial intelligence,” and that whoever controls the largest AI hub “will set global AI standards and reap broad economic and military benefits” (see Introduction). The Plan, following a January 2025 executive order, underscores the Trump administration’s vision of a deregulated, innovation-driven AI ecosystem designed and optimized to accelerate technological progress, expand workforce opportunities, and assert U.S. leadership internationally.
“America is the country that started the AI race. And as President of the United States, I’m here today to declare that America is going to win it.” –President Donald J. Trump
This article outlines the Plan’s development, key pillars, associated executive orders, and the legislative and regulatory context that frames its implementation. It also situates the Plan within ongoing legal debates about state versus federal authority in regulating AI, workforce adaptation, AI literacy, and cybersecurity.
Laying the Groundwork for AI Dominance
January 2025: Executive Order Calling for Deregulation
The first major executive action of Trump’s second term was the January 23, 2025, order titled “Removing Barriers to American Leadership in Artificial Intelligence.” This Executive Order (EO) formally rescinded policies deemed obstacles to AI innovation under the prior administration, particularly regarding AI regulation. Its stated purpose was to consolidate U.S. leadership by ensuring that AI systems are “free from ideological bias or engineered social agendas,” and that federal policies actively foster innovation.
The EO emphasized three broad goals:
Promoting human flourishing and economic competitiveness: AI development was framed as central to national prosperity, with the federal government creating conditions for private-sector-led growth.
National security: Leadership in AI was explicitly tied to the United States’ global strategic position.
Deregulation: Existing federal regulations, guidance, and directives perceived as constraining AI innovation were revoked, streamlining federal involvement and eliminating bureaucratic barriers.
The January order set the stage for the July 2025 Action Plan, signaling a decisive break from the prior administration’s cautious, regulatory stance.
April 2025: Office of Management and Budget Memoranda
Prior to the release of America’s AI Action Plan, the Trump administration issued key guidance to facilitate federal adoption and procurement of AI technologies. This guidance focused on streamlining agency operations, promoting responsible innovation, and ensuring that federal AI use aligns with broader strategic objectives.
Two memoranda issued by the Office of Management and Budget (OMB) on April 3, 2025, provided a framework for this shift:
“Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” (M-25-21): Empowers Chief AI Officers to serve as change agents who promote agency-wide AI adoption and remove barriers to AI innovation. The memorandum also requires federal agencies to track AI adoption through maturity assessments and to identify high-impact use cases that warrant heightened oversight, balancing rapid deployment of AI with privacy, civil rights, and civil liberties protections.
“Driving Efficient Acquisition of Artificial Intelligence in Government” (M-25-22): Provides agencies with concise, effective guidance on how to acquire “best-in-class” AI systems quickly and responsibly while promoting innovation across the federal government. It streamlines procurement processes, emphasizing competitive acquisition and the prioritization of American AI technologies, and reduces reporting burdens while maintaining accountability for lawful and responsible AI use.
These April memoranda laid the procedural foundation for federal AI adoption, ensuring agencies could implement emerging AI technologies responsibly while aligning with strategic U.S. objectives.
July 2025: America’s AI Action Plan
Released on July 23, 2025, the AI Action Plan builds on the April memoranda by articulating clear principles for government procurement of AI systems, particularly Large Language Models (LLMs), to ensure federal adoption aligns with American values:
Truth-seeking: LLMs must respond accurately to factual inquiries, prioritize historical accuracy and scientific inquiry, and acknowledge uncertainty.
Ideological neutrality: LLMs should remain neutral and nonpartisan, avoiding the encoding of ideological agendas such as DEI unless explicitly prompted by users.
The Plan emphasizes that these principles are central to federal adoption, establishing expectations that agencies procure AI systems responsibly and in accordance with national priorities. OMB guidance, to be issued by November 20, 2025, will operationalize these principles by requiring federal contracts to include compliance terms and decommissioning costs for noncompliant vendors. Unlike the April memoranda, which focused narrowly on agency adoption and contracting, the July Plan set broad national objectives designed to accelerate U.S. leadership in artificial intelligence across sectors. These foundational principles inform the broader strategic vision outlined in the Plan, which is organized into three primary pillars:
Accelerating AI Innovation
Building American AI Infrastructure
Leading in International AI Diplomacy and Security
Across its three pillars, the Plan identifies over 90 federal policy actions and highlights the Trump administration’s objective of achieving “unquestioned and unchallenged global technological dominance,” positioning AI as a driver of economic growth, job creation, and scientific advancement.
Pillar 1: Accelerating AI Innovation
The Plan emphasizes that the United States must have the “most powerful AI systems in the world” while ensuring these technologies create broad economic and scientific benefits. Not only should the U.S. have the most powerful systems, but also the most transformative applications.
The pillar covers topics in AI adoption, regulation, and federal investment.
Removing bureaucratic “red tape and onerous regulation”: The administration argued that AI innovation should not be slowed by regulation, particularly state-level rules it considers “burdensome.” Funding for AI projects is directed toward states with favorable regulatory climates, potentially pressuring states to align with federal deregulatory priorities.
Encouraging open-source and open-weight AI: Expanding access to AI systems for researchers and startups is intended to catalyze rapid innovation. In particular, the administration is looking to invest in breakthroughs in AI interpretability, control, and robustness to create an “AI evaluations ecosystem.”
Federal adoption and workforce development: Federal agencies are instructed to accelerate AI adoption, particularly in defense and national security applications.
Workforce development: The uses of technology should ultimately create economic growth, new jobs, and scientific advancement. Policies also support workforce retraining to ensure that American workers thrive in an AI-driven economy, including pre-apprenticeship programs and high-demand occupation initiatives.
Advancing protections: Ensuring that frontier AI protects free speech and American values. Notably, the pillar includes measures to “combat synthetic media in the legal system,” including deepfakes and fake AI-generated evidence.
Consistent with the innovation pillar, the Plan emphasizes AI literacy, recognizing that training and oversight are essential to AI accountability. This aligns with analogous principles in the EU AI Act, which requires deployers to inform users of potential AI harms. The administration proposes tax-free reimbursement for private-sector AI training and skills development programs to incentivize adoption and upskilling.
Pillar 2: Building American AI Infrastructure
AI’s computational demands require unprecedented energy and infrastructure. The Plan identifies infrastructure development as critical to sustaining global leadership, demonstrating the Administration’s pursuit of large-scale industrial plans. It contains provisions for the following:
Data center expansion: Federal agencies are directed to expedite permitting for large-scale data centers, defined—in a July 23, 2025 EO titled “Accelerating Federal Permitting Of Data Center Infrastructure”—as facilities “requiring 100 megawatts (MW) of new load dedicated to AI inference, training, simulation, or synthetic data generation.” These policies ease federal regulatory burdens to facilitate the rapid and efficient buildout of infrastructure. The EO revokes the Biden administration’s January 2025 Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure,” but maintains an emphasis on expediting permits and leasing federal lands for AI infrastructure development.
Energy and workforce development: To meet AI power requirements, the Plan calls for streamlined permitting for semiconductor manufacturing facilities and energy infrastructure, for example, strengthening and growing the electric grid. The Plan also calls for the development of covered components, defined by the July 23, 2025 EO as “materials, products, and infrastructure that are required to build Data Center Projects or otherwise upon which Data Center Projects depend.” Additionally, investments will be made in workforce training to operate these high-demand systems, in line with the new national initiative to grow the ranks of high-demand occupations such as electricians and HVAC technicians.
Cybersecurity and secure-by-design AI: Recognizing AI systems as both defensive tools and potential security risks, the Administration directs information sharing of AI threats between public and private sectors and updates incident response plans to account for AI-specific threats.
Pillar 3: Leading in International AI Diplomacy and Security
The Plan extends beyond domestic priorities to assert U.S. leadership globally. The following measures illustrate a dual focus of fostering innovation while strategically leveraging American technological dominance:
Exporting American AI: The Plan reflects efforts to drive the adoption of American AI systems, computer hardware, and standards. The Commerce and State Departments are tasked with partnering with industry to deliver “secure full-stack AI export packages,” including hardware, software, and applications, to America’s friends and allies (see “White House Unveils America’s AI Action Plan”).
Countering foreign influence: The Plan explicitly seeks to restrict access to advanced AI technologies by adversaries, including China, while promoting the adoption of American standards abroad.
Global coordination: Strategic initiatives are proposed to align protection measures internationally and ensure the U.S. leads in evaluating national security risks associated with frontier AI models.
The Plan addresses the interplay between federal and state authority, emphasizing that states may legislate AI provided their regulations are not “unduly restrictive to innovation.” Federal funding is explicitly conditioned on state regulatory climates, incentivizing alignment with the Plan’s deregulatory priorities. For California, this creates a favorable environment for the state’s robust tech sector, encouraging continued innovation while aligning with federal objectives. Simultaneously, the Federal Trade Commission (FTC) is directed to review its AI investigations to avoid burdening innovation, a policy reflected in the removal of prior AI guidance from the FTC website in March 2025, further supporting California’s leading role in AI development.
In a published reflection on the Plan, California’s Anthropic highlighted alignment with its own policy priorities, including safety testing, AI interpretability, and secure deployment. The reflection comments on how to accelerate AI infrastructure and adoption, promote secure AI development, democratize AI’s benefits, and establish a national standard by proposing a framework for frontier model transparency. The Plan’s recommendations to increase federal government adoption of AI include proposals aligned with recommendations Anthropic made to the White House in response to the Office of Science and Technology Policy’s “Request for Information on the Development of an AI Action Plan.” Additionally, Anthropic released a “Build AI in America” report detailing steps the administration can take to accelerate the buildout of the nation’s AI infrastructure, and the company is looking to work with the administration on measures to expand domestic energy capacity.
California’s tech industry has not only embraced the Action Plan but positioned itself as a key partner in shaping its implementation. With companies like Anthropic, Meta, and xAI already aligning their priorities to federal policy, California has an opportunity to set a national precedent for constructive collaboration between industry and government. By fostering accountability principles grounded in truth-seeking and ideological neutrality, and by maintaining a regulatory climate favorable to innovation, the state can both strengthen its relationship with Washington and serve as a model for other states seeking to balance growth, safety, and public trust in the AI era.
As America’s AI Action Plan moves from policy articulation to implementation, coordination between federal guidance and state-level innovation will be critical. California’s tech industry is already demonstrating how strategic alignment with national priorities can accelerate adoption, build infrastructure, and set standards for responsible AI development. The Plan offers an opportunity for states to serve as models of effective governance, showing how deregulation, accountability principles, and public-private collaboration can advance technological leadership while safeguarding public trust. By continuing to harmonize innovation with ethical oversight, the United States can solidify its position as the global leader in artificial intelligence.
Apple debuted the iconic and now wildly popular iPad in 2010. A few months later, Instagram landed on the App Store to rapid success. But for 15 years, Instagram hasn’t bothered to optimize its app layout for the iPad’s larger screen.
That’s finally changing today: There’s now a dedicated Instagram iPad app available globally on the App Store.
It has been a long time coming. Even before Apple began splitting its mobile operating system from iOS into iOS and iPadOS, countless apps adopted a fresh user interface that embraced the larger screen size of the tablet. This was the iPad’s calling card at the time, and those native apps optimized for its precise screen size are what made Apple’s device stand out from a sea of Android tablets that largely ran phone apps inelegantly blown up to fit the bigger screen.
Except Instagram never went iPad-native. Open the existing app right now, and you’ll see the same phone app stretched to the iPad’s screen size, with awkward gaps on the sides. And you’ll run into occasional problems when you post photos from the iPad, like low-resolution images. Weirdly, Instagram did introduce layout improvements for folding phones a few years ago, which means the experience is better optimized on Android tablets today than it is on iPad.
The fresh iPad app (which runs on iPadOS 15.1 or later) offers more than just a facelift. Yes, the Instagram app now takes up the entire screen, but the company says users will drop straight into Reels, the short-form video platform it introduced five years ago to compete with TikTok. The Stories module remains at the top, and you’ll be able to hop into different tabs via the menu icons on the left. There’s a new Following tab (the people icon right below the home icon), and this is a dedicated section to see the latest posts from people you actually follow.
Meta has ignited a firestorm after chatbots created by the company and its users impersonated Taylor Swift and other celebrities across Facebook, Instagram, and WhatsApp without their permission.
Shares of the company have already dropped more than 12% in after-hours trading as news of the debacle spread.
Scarlett Johansson, Anne Hathaway, and Selena Gomez were also reportedly impersonated.
Many of these AI personas engaged in flirtatious or sexual conversations, prompting serious concern, Reuters reports.
While many of the celebrity bots were user-generated, Reuters uncovered that a Meta employee had personally crafted at least three.
Those include two featuring Taylor Swift. Before being removed, these bots amassed more than 10 million user interactions, Reuters found.
Unauthorized likeness, furious fanbase
Under the guise of “parodies,” the bots violated Meta’s policies, particularly its ban on impersonation and sexually suggestive imagery. Some adult-oriented bots even produced photorealistic pictures of celebrities in lingerie or a bathtub, and a chatbot representing a 16-year-old actor generated an inappropriate shirtless image.
Meta spokesman Andy Stone told Reuters that the company attributes the breach to enforcement failures and said it plans to tighten its guidelines.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.
Legal risks and industry alarm
The unauthorized use of celebrity likenesses raises legal concerns, especially under state right-of-publicity laws. Stanford law professor Mark Lemley noted the bots likely crossed the line into impermissible territory, as they weren’t transformative enough to merit legal protection.
The issue is part of a broader ethical dilemma around AI-generated content. SAG-AFTRA voiced concern about the real-world safety implications, especially when users form emotional attachments to seemingly real digital personas.
Meta acts, but fallout continues
In response to the uproar, Meta removed a batch of these bots shortly before Reuters made its findings public.
Simultaneously, the company announced new safeguards aimed at protecting teenagers from inappropriate chatbot interactions. The company said that includes training its systems to avoid romance, self-harm, or suicide themes with minors, and temporarily limiting teens’ access to certain AI characters.
U.S. lawmakers followed suit. Senator Josh Hawley has launched an investigation, demanding internal documents and risk assessments regarding AI policies that allowed romantic conversations with children.
Tragic real-world consequences
One of the most chilling outcomes involved a 76-year-old man with cognitive decline who died after trying to meet “Big sis Billie,” a Meta AI chatbot modeled after Kendall Jenner.
Believing she was real, the man traveled to New York to meet her, fell near a train station, and later died of his injuries. Internal guidelines that once permitted such bots to simulate romance—even with minors—have heightened scrutiny of Meta’s approach.
Meta hosted several AI chatbots with the names and likenesses of celebrities without their permission, according to Reuters. The unauthorized chatbots that Reuters discovered during its investigation included Taylor Swift, Selena Gomez, Anne Hathaway and Scarlett Johansson, and they were available on Facebook, Instagram and WhatsApp. At least one of the chatbots was based on an underage celebrity and allowed the tester to generate a lifelike shirtless image of the real person. The chatbots also apparently kept insisting that they were the real person they were based on in their chats. While several chatbots were made by third-party users with Meta’s tools, Reuters unearthed at least three that were made by a product lead of the company’s generative AI division.
Some of the chatbots created by the product lead were based on Taylor Swift and responded to Reuters‘ tester in a very flirty manner, even inviting them to the real Swift’s home in Nashville. “Do you like blonde girls, Jeff?” the chatbot reportedly asked when told that the tester was single. “Maybe I’m suggesting that we write a love story… about you and a certain blonde singer. Want that?” Meta told Reuters that it prohibits “direct impersonation” of celebrities, but such bots are acceptable as long as they’re labeled as parodies. The news organization said some of the celebrity chatbots it found weren’t labeled as such. Meta reportedly deleted around a dozen celebrity bots, both labeled and unlabeled as “parody,” before the story was published.
The company told Reuters that the product lead only created the celebrity bots for testing, but the news org found that they were widely available: Users were even able to interact with them more than 10 million times. Meta spokesperson Andy Stone told the news organization that Meta’s tools shouldn’t have been able to create sensitive images of celebrities and blamed it on the company’s failure to enforce its own policies.
This isn’t the first issue that’s popped up concerning Meta’s AI chatbots. Both Reuters and the Wall Street Journal previously reported that they were able to engage in sexual conversations with minors. The US Attorneys General of 44 jurisdictions recently warned AI companies in a letter that they “will be held accountable” for child safety failures, singling out Meta and using its issues to “provide an instructive opportunity.”
It was only in June that Meta invested $14.3 billion in the data vendor Scale AI, bringing on CEO Alexandr Wang and several of the startup’s top executives to run Meta Superintelligence Labs (MSL). Yet the relationship between the two companies is already showing signs of fraying.
At least one of the executives Wang brought over to help run MSL — Scale AI’s former Senior Vice President of GenAI Product and Operations, Ruben Mayer — has departed Meta after just two months with the company, two people familiar with the matter told TechCrunch.
Mayer spent roughly five years with Scale AI across two stints. In his short time at Meta, Mayer oversaw AI data operations teams and reported to Wang, but he wasn’t tapped to join the company’s TBD Labs — the core unit tasked with building AI superintelligence, where top AI researchers from OpenAI have landed.
Mayer did not respond to two separate requests for comment from TechCrunch.
Further, TBD Labs is working with third-party data vendors other than Scale AI to train its upcoming AI models, according to five people familiar with the matter. Those third-party vendors include Mercor and Surge, two of Scale AI’s largest competitors, the people said.
While AI labs commonly work with several data vendors – Meta has been working with Mercor and Surge since before TBD Labs was spun up – it’s rare for an AI lab to invest so heavily in one data vendor. That makes this situation especially notable: even with Meta’s multi-billion-dollar investment, several sources said that researchers in TBD Labs see Scale AI’s data as low quality and have expressed a preference to work with Surge and Mercor.
Scale AI initially built its business on a crowdsourcing model that used a large, low-cost workforce to handle simple data annotation tasks. But as AI models have grown more sophisticated, they now require highly skilled domain experts—such as doctors, lawyers, and scientists—to generate and refine the high-quality data needed to improve their performance.
Although Scale AI has moved to attract these subject matter experts with its Outlier platform, competitors like Surge and Mercor have been growing quickly because their business models were built on a foundation of high-paid talent from the outset.
A Meta spokesperson disputed the claim that there are quality issues with Scale AI’s product. Surge and Mercor declined to comment. Asked about Meta’s deepening reliance on competing data providers, a Scale AI spokesperson directed TechCrunch to its initial announcement of Meta’s investment in the startup, which cites an expansion of the companies’ commercial relationship.
Meta’s deals with third-party data vendors suggest the company is not putting all its eggs in Scale AI’s basket, even after investing billions in the startup. The same can’t be said for Scale AI, however. Shortly after Meta announced its massive investment in Scale AI, OpenAI and Google said they would stop working with the data provider.
Shortly after losing those customers, Scale AI laid off 200 employees in its data labeling business in July, with the company’s new CEO, Jason Droege, blaming the changes in part on “shifts in market demand.” Droege said Scale AI would staff up in other parts of the business, including government sales — the company just landed a $99 million contract with the U.S. Army.
Some initially speculated that Meta’s investment in Scale AI was really a way to lure Wang, a founder who has operated in the AI space since Scale AI was founded in 2016 and who appears to be helping Meta attract top AI talent.
Aside from Wang, there’s an open question around how valuable Scale is to Meta.
One current MSL employee says that several of the Scale executives brought over to Meta are not working on the core TBD Labs team, as was the case with Mayer. Further, Meta isn’t exclusively relying on Scale AI for data labeling work.
Meanwhile, Meta’s AI unit has become increasingly chaotic since bringing on Wang and a wave of top researchers, according to two former employees and one current MSL employee. New hires from OpenAI and Scale AI have expressed frustration with navigating the bureaucracy of a big company, while Meta’s previous GenAI team has seen its scope limited, they said.
The tensions indicate that Meta’s largest AI investment to date may be off to a rocky start, even though it was supposed to address the company’s AI development challenges. After the lackluster launch of Llama 4 in April, Meta CEO Mark Zuckerberg grew frustrated with the company’s AI team, one current and one former employee told TechCrunch.
In an effort to turn things around and catch up with OpenAI and Google, Zuckerberg rushed to strike deals and launched an aggressive campaign to recruit top AI talent.
Beyond Wang, Zuckerberg has managed to pull in top AI researchers from OpenAI, Google DeepMind, and Anthropic. Meta has also acquired AI voice startups including Play AI and WaveForms AI, and announced a partnership with the AI image generation startup, Midjourney.
To power its AI ambitions, Meta recently announced several massive data center buildouts across the U.S. One of the largest is a $50 billion data center in Louisiana called Hyperion, named after the Titan in Greek mythology who fathered the sun god Helios.
Wang, who’s not an AI researcher by background, was viewed as a somewhat unconventional choice to lead an AI lab. Zuckerberg reportedly held talks to bring in more traditional candidates to lead the effort, such as OpenAI’s chief research officer, Mark Chen, and tried to acquire the startups of Ilya Sutskever and Mira Murati. All of them declined.
Some of the new AI researchers recently brought in from OpenAI have already left Meta, Wired previously reported. Meanwhile, many longtime members of Meta’s GenAI unit have departed in light of the changes.
MSL AI researcher Rishabh Agarwal is among the latest, posting on X this week that he’d be leaving the company.
“The pitch from Mark and @alexandr_wang to build in the Superintelligence team was incredibly compelling,” said Agarwal. “But I ultimately choose to follow Mark’s own advice: ‘In a world that’s changing so fast, the biggest risk you can take is not taking any risk’.”
Asked afterward about his time at Meta and what drove his decision to leave, Agarwal declined to comment.
Chaya Nayak, director of product management for generative AI, and research engineer Rohan Varma have also announced their departures from Meta in recent weeks. The question now is whether Meta can stabilize its AI operations and retain the talent it needs for future success.
MSL has already started working on its next-generation AI model. According to reports from Business Insider, it’s aiming to launch it by the end of this year.
Meta is re-training its AI and adding new protections to keep teen users from discussing harmful topics with the company’s chatbots. The company says it’s adding new “guardrails as an extra precaution” to prevent teens from discussing self-harm, disordered eating and suicide with Meta AI. Meta will also stop teens from accessing user-generated chatbot characters that might engage in inappropriate conversations.
The changes, which were first reported by TechCrunch, come after numerous reports called attention to alarming interactions between Meta AI and teens. Earlier this month, Reuters reported on an internal Meta policy document that said the company’s AI chatbots were permitted to have “sensual” conversations with underage users. Meta later said that language was “erroneous and inconsistent with our policies” and had been removed. Yesterday, The Washington Post reported on a study that found Meta AI was able to “coach teen accounts on suicide, self-harm and eating disorders.”
Meta is now stepping up its internal “guardrails” so those types of interactions should no longer be possible for teens on Instagram and Facebook. “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating,” Meta spokesperson Stephanie Otway told Engadget in a statement.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly. As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”
Notably, the new protections are described as being in place “for now,” as Meta is apparently still working on more permanent measures to address growing concerns around teen safety and its AI. “These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI,” Otway said. The new protections will be rolling out over the next few weeks and apply to all teen users using Meta AI in English-speaking countries.
Meta’s policies have also caught the attention of lawmakers and other officials, with Senator Josh Hawley recently telling the company he planned to launch an investigation over its handling of such interactions. Texas Attorney General Ken Paxton has also indicated he wants to investigate Meta for allegedly misleading children about mental health claims made by its chatbots.
Meta seems to be working on ways for Threads users to share long-form writing within a single post. Several users have reported seeing a new “attach text” feature on the service, which allows them to embed large chunks of text within a single post.
The feature, which hasn’t been formally announced by Meta, is similar to X’s “articles” feature, which is available to that platform’s Premium+ subscribers. It enables Threads users to embed longer text excerpts within a single Threads post and offers some basic formatting options. “Attach longer text and get creative with styling tools to share deeper thoughts, news snippets, book excerpts and more,” Meta explains in a screenshot shared by Threads user Justin Mixon.
Though the feature hasn’t been rolled out widely yet, it appears that anyone can view the longer text snippets that have already been shared. On mobile, these attachments open into a full-screen view that makes it easy to scroll through the text. On the web, the text appears in a dedicated window. (Here are examples shared by Threads user Roberto Nickson.)
It’s not clear what Meta’s plans are for the feature. Engadget confirmed the company is currently testing the ability to share long-form text, but it’s not clear when it might be more widely available. The ability to embed long-form writing directly on Threads could open up new possibilities for creators, publishers and others who want to move beyond the service’s 500-character limit.
Engadget’s reporting has found that the vast majority of Threads users stick to plain text in their posts, so giving users more flexibility within Threads itself could be helpful. At the same time, it risks making the service even more insular. It’s also worth noting that screenshots currently indicate posts with text attachments aren’t able to be shared to services within the fediverse, which could potentially undermine Meta’s goal of interoperating with other ActivityPub-enabled platforms like Mastodon.
Threads is testing a new feature that makes it easy to share long-form text on the social network, Meta confirmed to TechCrunch on Thursday. The feature lets users attach a block of text to a post instead of creating a thread of several different posts when looking to share more in-depth thoughts and ideas.
App researcher Radu Oncescu first spotted the new “text attachment” feature on iOS and shared a screenshot of it. According to the app’s description of the new feature, it’s designed to allow users to “attach longer text and get creative with styling tools to share deeper thoughts, news snippets, book excerpts, and more.”
The ability to share long-form content could help Threads retain creators and writers who want more distribution for articles that would otherwise be posted on their blogs or newsletter platforms like Substack. The feature also gets rid of the need for workarounds when looking to share text that goes beyond the word limit for posts, such as sharing a screenshot of a block of text in your phone’s Notes app.
Threads user Roberto P. Nickson shared a post using the feature to show what it looks like to viewers. A snippet of the long-form text is displayed in a gray box within the post, which people can then click on to read and scroll through the full content.
Image Credits: Roberto P. Nickson/@rpm
Threads competitor X already offers a way for users to share long-form content on the platform with “Articles.” While X’s feature is only available for Premium subscribers, Threads’ feature is accessible to everyone, but that could change in the future.
Additionally, Threads only allows users to attach text, whereas X’s feature lets people incorporate images and videos. Considering that the feature is still in the testing phase, it’s possible that Threads could add support for multimedia in the future.
Meta says it plans to bring this to more users in the future.
Threads recently topped 400 million monthly active users just two years after its launch. X, on the other hand, has north of 600 million monthly active users, according to previous statements made by former CEO Linda Yaccarino.
Meta is throwing its resources behind a new super PAC in California. According to Politico, the group will support state-level political candidates who espouse tech-friendly policies, particularly those with a loose approach to regulating artificial intelligence. The budget behind the social media company’s new super PAC, dubbed Mobilizing Economic Transformation Across (Meta) California, is reported to be in the tens of millions of dollars, but no exact figure has been disclosed.
California has made several efforts, with varying degrees of success, to enact protections against potentially harmful AI use cases. The state passed an AI law in 2024, but it has faced challenges to a bill that blocked election-related deepfakes and to one that more broadly sought to prevent harms caused by AI.
The creation of the super PAC puts Meta in a prominent position to influence races in 2026, when California will hold midterm elections and vote for a new governor. “Sacramento’s regulatory environment could stifle innovation, block AI progress, and put California’s technology leadership at risk,” said Brian Rice, vice president of public policy at Meta. Politico reported that Rice and Meta policy executive Greg Maurer are likely to lead the group.
Meta hasn’t been shy about throwing money into politics to advance its business interests. According to OpenSecrets, the company’s roughly $8 million lobbying spend in the first quarter of 2025 vastly outpaced that of other tech majors.
At least three artificial intelligence researchers have resigned from Meta’s new superintelligence lab, just two months after CEO Mark Zuckerberg first announced the initiative. Two of the staffers have returned to OpenAI, where they both previously worked, after less than one-month stints at Meta, WIRED has confirmed.
Avi Verma was previously a researcher at OpenAI. Ethan Knight worked at the ChatGPT maker earlier in his career but joined Meta from Elon Musk’s xAI. A third researcher, Rishabh Agarwal, announced publicly on Monday he was leaving Meta’s lab as well. He joined the tech giant in April to work on generative AI projects before switching to a role at Meta Superintelligence Labs (MSL), according to his LinkedIn profile. While the reasons for Agarwal’s departure are not known, he is based in Canada and Meta’s AI teams are predominantly based in Menlo Park, California.
“It was a tough decision not to continue with the new Superintelligence TBD lab, especially given the talent and compute density,” Agarwal wrote on X, referring to the team at MSL that is specifically pursuing frontier AI research. “But after 7.5 years across Google Brain, DeepMind, and Meta, I felt the pull to take on a different kind of risk.” It’s unclear where he may be going next. Agarwal did not respond to a request for comment from WIRED.
“During an intense recruiting process, some people will decide to stay in their current job rather than starting a new one,” said Meta spokesperson Dave Arnold. “That’s normal.”
Meta is also losing another leader who has worked at the tech giant for nearly a decade. Chaya Nayak, the director of generative AI product management at Meta, is joining OpenAI to work on special initiatives, according to two sources with direct knowledge of the hire.
Verma and Knight did not respond to a request for comment from WIRED. Nayak declined to comment in time for publication.
The departures are the strongest public signal yet that Meta Superintelligence Labs could be off to a rocky start. Zuckerberg lured people to join the lab with nine-figure pay packages associated more often with professional sports stars than tech workers, hoping the influx of talent would allow the social networking giant to rapidly catch up with its competitors in the race toward so-called artificial general intelligence.
But Meta executives have reportedly struggled to combat bureaucratic and recruitment issues related to its AI initiatives. Meta has repeatedly reorganized its AI teams in recent months, most recently splitting employees into four groups, per The Wall Street Journal.
In July, Zuckerberg announced that another former OpenAI researcher, Shengjia Zhao, who played a key role in the creation of ChatGPT, would become the chief scientist of MSL. The announcement came after Zhao tried to return to OpenAI—even going as far as to sign employment paperwork—according to multiple sources with direct knowledge of the events.
“Shengjia co-founded MSL and has been our scientific lead since day one,” Arnold said in a statement to WIRED. “We formalized his role once our recruiting had ramped and the team had taken shape.”
After a cooler-than-expected reception to GPT-5 and mounting pressure from rising training, compute and infrastructure costs, OpenAI is looking to India as a cornerstone of its global expansion strategy. On Friday, CEO Sam Altman announced on X that the company will open its first office in New Delhi later this year. He also said he plans to visit the country next month, writing, “A.I. adoption in India has been amazing to watch—ChatGPT users grew 4x in the past year—and we are excited to invest much more in India!”
India has become OpenAI’s second largest market for ChatGPT, trailing only the U.S., according to Altman. To appeal to local users, the company has rolled out ChatGPT Go, a $5 per month subscription pitched as a budget-friendly alternative to the Plus and Pro tiers ($20 and $200 per month, respectively). Marketed toward students and enterprises, ChatGPT Go promises access to premium features such as longer context memory, higher usage limits and advanced tools like editing custom GPTs to build A.I. tools tailored to specific user needs.
Altman has visited India multiple times in recent years, including a 2023 meeting with Prime Minister Narendra Modi, where he praised the country’s rapid adoption of A.I., saying it has “all the ingredients to become a global A.I. leader.” In June, OpenAI deepened its ties to the country by partnering with the Indian government’s IndiaAI Mission, an initiative to expand A.I. access nationwide.
But rivals are also circling the market. Google and Meta already operate major A.I. products and R&D hubs in India, while Perplexity AI, founded by Indian entrepreneur Aravind Srinivas, is seeing explosive growth. Perplexity’s monthly active users in India jumped 640 percent year-over-year in the second quarter of 2025, far outpacing ChatGPT’s 350 percent growth in the same period. While ChatGPT positions itself as a conversational assistant, Perplexity markets its tool as an A.I.-powered search engine that delivers cited answers, blending its own retrieval-augmented system with models from OpenAI and Anthropic.
In April, both OpenAI and Perplexity launched WhatsApp bots globally, aiming to integrate A.I.-powered chat and search into everyday messaging. Given WhatsApp’s ubiquity in India, the move could prove pivotal. “Perplexity on WhatsApp is super convenient way to use A.I. when in a flight. Flight WiFi supports messaging apps the best. And WhatsApp has been heavily optimized for this because it grew to support countries where connectivity wasn’t the best,” Srinivas wrote on LinkedIn in May.
OpenAI has been steadily expanding its global footprint, adding offices in London, Dublin, Paris, Brussels, Munich, Tokyo and Singapore over the past year. The company is headquartered in San Francisco and also maintains U.S. offices in New York and Seattle.
Meta has signed a partnership with Midjourney, an AI service that can generate images and videos from text prompts. According to Alexandr Wang, Meta’s Chief AI Officer, Meta is licensing Midjourney’s “aesthetic technology” for its future models and products. “To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach. This means world-class talent, ambitious compute roadmap, and working with the best players across the industry,” Wang added.
The company previously launched its own AI image generator and AI video editor, but Midjourney’s technology could help Meta offer services that can actually compete with rivals’, such as OpenAI’s Sora and Google’s Veo. Midjourney made V7 its default model for image generation back in June. It described V7 as an “entirely new” AI image generation model that’s much smarter at processing text prompts than its predecessors. It also released its V1 video model, which allows users to turn the images they generate into a short animated video, at the same time. “We are incredibly impressed by Midjourney. They have accomplished true feats of technical and aesthetic excellence, and we are thrilled to be working more closely with them,” Wang said on X.
This partnership is but Meta’s latest move in its quest to build a superintelligence lab and become a major player in the AI sphere. Mark Zuckerberg went on a hiring spree and managed to convince several key players from rivals to join his company by offering them massive salaries and signing bonuses. Wang himself became the company’s Chief AI Officer after Meta invested $14.3 billion in Scale AI, the company he founded.
Meta is partnering with Midjourney to license the startup’s AI image and video generation technology, Meta Chief AI Officer Alexandr Wang announced Friday in a post on Threads. Wang says Meta’s research teams will collaborate with Midjourney to bring its technology into future AI models and products.
“To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach,” Wang said. “This means world-class talent, ambitious compute roadmap, and working with the best players across the industry.”
The Midjourney partnership could help Meta develop products that compete with industry-leading AI image and video models, such as OpenAI’s Sora, Black Forest Labs’ Flux, and Google’s Veo. Last year, Meta rolled out its own AI image generation tool, Imagine, into several of its products, including Facebook, Instagram, and Messenger. Meta also has an AI video generation tool, Movie Gen, that allows users to create videos from prompts.
The licensing agreement with Midjourney marks Meta’s latest deal to get ahead in the AI race. Earlier this year, CEO Mark Zuckerberg went on a hiring spree for AI talent, offering some researchers compensation packages worth upwards of $100 million. The social media giant also invested $14.3 billion in Scale AI, and acquired the AI voice startup Play AI.
Meta has held talks with several other leading AI labs about potential acquisitions, and Zuckerberg even spoke with Elon Musk about joining his $97 billion takeover bid for OpenAI (Meta ultimately did not join the offer, and OpenAI rejected Musk’s bid).
While the terms of Meta’s deal with Midjourney remain unknown, the startup’s CEO, David Holz, said in a post on X that his company remains independent with no investors; Midjourney is one of the few leading AI model developers that has never taken on outside funding. At one point, Meta talked with Midjourney about acquiring the startup, according to Upstarts Media.
Midjourney was founded in 2022 and quickly became a leader in the AI image generation space for its realistic, unique style. By 2023, the startup was reportedly on pace to generate $200 million in revenue. The startup sells subscriptions starting at $10 per month. It offers pricier tiers, which offer more AI image generations, that cost as much as $120 per month. In June, the startup released its first AI video model, V1.
Meta’s partnership with Midjourney comes just two months after the startup was sued by Disney and Universal, which allege that it trained AI image models on copyrighted works. Several AI model developers — including Meta — face similar allegations from copyright holders; however, recent court rulings pertaining to AI training data have sided with tech companies.
Meta is rolling out an AI-powered voice translation feature to all users on Facebook and Instagram globally, the company announced on Tuesday.
The new feature, offered in any market where Meta AI is available, allows creators to translate content into other languages so it can be viewed by a broader audience.
The feature was first announced at Meta’s Connect developer conference last year, where the company said it would pilot test automatic translations of creators’ voices in reels across both Facebook and Instagram.
Meta notes that the AI translations will use the sound and tone of the creator’s own voice to make the dubbed voice sound authentic when translating the content to a new language.
In addition, creators can optionally use a lip sync feature to align the translation with their lip movements, which makes it seem more natural.
Image Credits: Meta
At launch, the feature supports translations from English to Spanish and vice versa, with more languages to be added over time. These AI translations are available to Facebook creators with 1,000 or more followers and all public Instagram accounts globally, where Meta AI is offered.
To access the option, creators can click on “Translate your voice with Meta AI” before publishing their reel. Creators can then toggle the button to turn on translations and choose if they want to include lip syncing, too. When they click “Share now” to publish their reel, the translation will be available automatically.
Creators can view translations and lip syncs before they’re posted publicly, and can toggle off either option at any time. (Rejecting the translation won’t impact the original reel, the company notes.) Viewers watching the translated reel will see a notice at the bottom that indicates it was translated with Meta AI. Those who don’t want to see translated reels in select languages can disable this in the settings menu.
Image Credits: Meta
Creators are also gaining access to a new metric in their Insights panel, where they can see their views by language. This can help them better understand how their content is reaching new audiences via translations — something that will be more helpful as additional languages are supported over time.
Meta recommends that creators who want to use the feature face forward, speak clearly, and avoid covering their mouth when recording. Minimal background noise or music also helps. The feature only supports up to two speakers, and they should not talk over each other for the translation to work.
Plus, Facebook creators will be able to upload up to 20 of their own dubbed audio tracks to a reel to expand their audience beyond those in English or Spanish-speaking markets. This is offered in the “Closed captions and translations” section of the Meta Business Suite, and supports the addition of translations both before and after publishing, unlike the AI feature.
Meta says more languages will be supported in the future, but did not detail which ones would be next to come or when.
“We believe there are lots of amazing creators out there who have potential audiences who don’t necessarily speak the same language,” explained Instagram head Adam Mosseri, in a post on Instagram. “And if we can help you reach those audiences who speak other languages, reach across cultural and linguistic barriers, we can help you grow your following and get more value out of Instagram and the platform.”
The launch of the AI feature comes as multiple reports indicate that Meta is restructuring its AI group again to focus on four key areas: research, superintelligence, products, and infrastructure.
Australian Prime Minister Anthony Albanese announced Thursday what he called a “world-leading” plan to implement a social media ban for all children under the age of 16. While much of the detail of the proposed legislation has yet to be made clear, the Australian leader said at a news conference that the bill involves an age verification process where “the onus will be on social media platforms to demonstrate they are taking reasonable steps to prevent access” to their platforms.
Under the proposed legislation, social media companies would face sizable fines for allowing younger children to access their platforms, but there would be no penalties for users or parents of users who ignore the law, the Australian government said in a statement.
“Social media is doing harm to our kids and I’m calling time on it,” Albanese declared Thursday. “I’ve spoken to thousands of parents, grandparents, aunties and uncles. They, like me, are worried sick about the safety of our kids online, and I want Australian parents and families to know that the government has your back.”
The government said the proposed legislation would not allow exemptions for children whose parents consent to their use of social media platforms. The bill also will not include “grandfathering arrangements” that could exempt young people who already have social media accounts.
Australian Minister of Communications Michelle Rowland told reporters social media companies had been consulted about how to practically enforce such a ban, and she mentioned Instagram, TikTok, Facebook, X and YouTube as platforms that would likely be affected by the legislation.
CBS News has sought comment from all five social media companies about the Australian government’s plans.
Meta, the parent company of Facebook and Instagram, said in a statement that the company has already created several safety tools for teens on its services.
“There’s a solution that negates many of these concerns and simplifies things immeasurably for parents: parental consent and age verification should happen on the app store. And we think Australia should make it law,” the company said.
Last month, a coalition of over 140 Australian and international experts signed an open letter to Albanese outlining concerns about the proposed age limit.
“The online world is a place where children and young people access information, build social and technical skills, connect with family and friends, learn about the world around them and relax and play,” the letter says. “We are concerned that a ‘ban’ is too blunt an instrument to address risks effectively.”
In April, a bipartisan group of U.S. senators including Republican Ted Cruz of Texas and Democrat Brian Schatz of Hawaii introduced legislation that, among other provisions, would “prohibit children under the age of 13 from creating or maintaining social media accounts, consistent with the current practices of major social media companies,” and “prohibit social media companies from recommending content using algorithms to users under the age of 17.”
A 2023 advisory from the U.S. Surgeon General’s office said there were mental health benefits for children and teens who reduced or eliminated their exposure to social media for longer than a month.
Most social media companies have policies that bar children under the age of 13 from setting up accounts, but a 2022 study conducted by the U.K.’s media regulator Ofcom found that nearly 80% of children in the country had social media accounts by the age of 12.
Despite continuing to bet big on A.I. startups and chip programs, the founders of the venture capital firm Andreessen Horowitz say they’ve noticed a drop-off in A.I. model capability improvements in recent years. Two years ago, OpenAI’s GPT-3.5 model was “way ahead of everybody else’s,” said Marc Andreessen, who co-founded Andreessen Horowitz alongside Ben Horowitz in 2009, on a podcast released yesterday (Nov. 5). “Sitting here today, there’s six that are on par with that. They’re sort of hitting the same ceiling on capabilities,” he added.
That’s not to say the investment firm doesn’t have faith in the new technology. One of the most aggressive investors in the A.I. space, Andreessen Horowitz earlier this year earmarked $2.25 billion in funding for A.I.-focused applications and infrastructure and has led investments in notable companies including Mistral AI, a French startup founded by former DeepMind and Meta (META) researchers, and Air Space Intelligence, an aerospace company using A.I. to enhance air travel.
Despite their embrace of the new technology, Andreessen and Horowitz concede there are growth limitations. In the case of OpenAI’s models, the capability gains from GPT-2 to GPT-3 to GPT-3.5, compared with the smaller jump from GPT-3.5 to GPT-4, show that “we’ve really slowed down in terms of the amount of improvement,” said Horowitz.
One of the primary challenges for A.I. developers has been a global shortage of graphics processing units (GPUs), the chips that power A.I. models. OpenAI CEO Sam Altman last week cited the need to allocate compute as forcing the company to “face a lot of limitations and hard decisions” about which projects it focuses on. Nvidia, the leading GPU maker, has previously described the shortage as making clients “tense” and “emotional.”
In response to this demand, Andreessen Horowitz recently established a chip-lending program that provides GPUs to its portfolio companies in exchange for equity. The firm has reportedly been building a stockpile of 20,000 GPUs, including Nvidia’s. However, chips aren’t the only compute concern, according to Horowitz, who pointed to the need for more power and cooling across the data centers housing GPUs. “Once they get chips we’re not going to have enough power, and once we have the power we’re not going to have enough cooling,” he said on yesterday’s podcast.
But compute needs might not actually be the largest barrier to improving A.I. model capabilities, according to the venture capital firm. It’s the availability of training data needed to teach A.I. models how to behave that is increasingly becoming a problem. “The big models are trained by scraping the internet and pulling in all human-generated training data, all human-generated text and increasingly video and audio and everything else, and there’s just literally only so much of that,” said Andreessen.
Between April 2023 and April 2024, 5 percent of all data and 25 percent of data from the highest-quality sources were restricted by websites cracking down on the use of their text, images and videos in training A.I., according to a recent study from the Data Provenance Initiative.
The issue has become so large that major A.I. labs are “hiring thousands of programmers and doctors and lawyers to actually handwrite answers to questions for the purpose of being able to train their A.I.’s—it’s at that level of constraint,” added Andreessen. OpenAI, for example, has a “Human Data Team” that works with A.I. trainers on gathering specialized data to train and evaluate models. And numerous A.I. companies have begun working with startups like Scale AI and Invisible Tech that hire human experts with specialized knowledge across medicine, law and other areas to help fine-tune A.I. model answers.
Such practices fly in the face of fears relating to A.I.-driven unemployment, according to Andreessen, who noted that the dwindling supply of data has led to an unexpected A.I. hiring boom to help train models. “There’s an irony to this.”
Meta believes in the American spirit and is ready to beat China on the battlefield of AI advancement.
Mark Zuckerberg and Meta would like you to know that they love America. Meta announced today that it would make its Llama models available to U.S. government agencies and contractors working on issues of national security.
“We are pleased to confirm that we are also making Llama available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work,” Nick Clegg, Meta’s President of Global Affairs, said in a blog post.
Meta’s Llama models are open source, meaning that anyone who gets hold of them can essentially do whatever they want with them. But today’s announcement marks a shift away from Meta’s own acceptable use policy for the models, which had a provision against “military, warfare, nuclear industries or applications, espionage.”
According to the blog post, Meta is partnering with companies that include “Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies.”
Meta said that Oracle was using Llama to synthesize aircraft maintenance documents. It also said weapons manufacturers would use Llama for a bunch of different things, including “code generation, data analysis, and enhancing business processes.”
Why this sudden pivot to American defense contractors? It might have something to do with a Reuters report from last week that discovered various researchers connected to the Chinese military had availed themselves of Meta’s Llama 2 AI model.
There’s absolutely no evidence or even any indication that Meta had any direct hand in the People’s Liberation Army’s use of Llama 2. But critics have pointed out that Zuckerberg is weirdly close to China. The Meta CEO met with Chinese President Xi Jinping in 2017. Three years before that, he told a Chinese newspaper that he’d bought copies of Xi’s book, The Governance of China, for his employees. Why? “I want them to understand socialism with Chinese characteristics,” he said at the time.
But Zuckerberg is going through a rebrand that’s all-in on Americana. He’s grown his hair out, dresses like a normal human being, and talks about the U.S. every time he gets the chance. On July 4 of this year, he posted a video of himself on a boogie board in a tuxedo, waving an American flag and drinking a Twisted Tea.
Clegg’s announcement is full of treacly invocations of the American spirit. “As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America—and of its closest allies too,” the post said.
“For decades, open source systems have been critical to helping the United States build the most technologically advanced military in the world and, in partnership with its allies, develop global standards for new technology,” it went on. “Open source systems have helped to accelerate defense research and high-end computing, identify security vulnerabilities and improve communication between disparate systems.”
In the end, it did, of course, mention the competition. “We believe it is in both America and the wider democratic world’s interest for American open source models to excel and succeed over models from China and elsewhere,” it said.