The Sundance documentary Ghost in the Machine boldly declares that the pursuit of artificial intelligence, and Silicon Valley itself, is rooted in eugenics.
Director Valerie Veatch makes the case that the rise of techno-fascism from the likes of Elon Musk and Peter Thiel is a feature, not a bug. That may sound hyperbolic, but Ghost in the Machine, which is built around interviews with philosophers, AI researchers, historians and computer scientists, leaves little room for doubt.
But even I was surprised to learn that we can trace the impact of eugenics in tech all the way back to Karl Pearson, the mathematician who pioneered the field of statistics, and who also spent his life trying to quantify the differences between races. (Guess who he believed was superior.) His legacy was continued by William Shockley, a co-creator of the transistor, an avowed white supremacist who spent his later years espousing (now debunked) theories around IQ and racial differences.
An early robot toy. (Valerie Veatch for “Ghost in the Machine”)
As a Stanford engineering professor, Shockley fostered a culture of prioritizing white men over women and minorities, which ultimately shaped the way Silicon Valley looks today. His line of thinking could have had an influence on John McCarthy, the Stanford researcher who coined the term “artificial intelligence” in 1955.
Through its many interviews, which include the likes of AI researcher Dr. Emily Bender, historian Becca Lewis and media theorist Douglas Rushkoff, Ghost in the Machine paints the rise of AI as a fascistic project that aims to demean humans and establish the techno-elite as our de facto rulers. Given how much our lives are already dominated by gadgets and social networks from companies that have prioritized addictive engagement over user safety, it’s easy to imagine history repeating itself with AI.
Ghost in the Machine doesn’t leave any room for considering potential benefits of AI, which could lead proponents of the technology to dismiss it as a hit job. But we’re currently at the apex of the AI hype cycle, after Big Tech has invested hundreds of billions of dollars in this technology, and after it has spent years shoving it down our throats without proving why it’s actually useful to many people. AI should be able to withstand a bit of criticism.
A Chinese car giant has paved the way for a potential tie-up with Jaguar Land Rover (JLR) after announcing plans to establish a European headquarters in Britain. Chery, which makes Jaecoo and Omoda vehicles, will launch a new base in Liverpool with the company expected to use the hub to “integrate deeply into the UK automotive ecosystem”. The site – which will be a centre for management, research and commercial development – is expected to become a focal point for any regional partnerships the Chinese firm strikes, local officials said. Telegraph
SpaceX is exploring deals with other companies helmed by serial entrepreneur Elon Musk, leaving investors working through permutations between space, autonomous driving and artificial intelligence to analyze which combination makes the most sense. The rocket maker is in discussions to merge with xAI ahead of a blockbuster public offering planned for this year, Reuters reported on Thursday. The combination would bring Musk’s rockets, Starlink satellites, X social media platform and Grok chatbot under one roof. Reuters
A promising open-source AI assistant called Clawdbot transformed into a viral sensation before a hasty rebrand to Moltbot over potential trademark concerns led to a deluge of attempted scams and fraud. After the chatbot surged to tens of thousands of GitHub stars and attracted praise from high-profile AI researchers and investors, Anthropic raised trademark concerns that its name sounded too similar to the company’s chatbot, Claude. Moltbot’s developer, Austrian engineer Peter Steinberger, chose the new name after hearing from Anthropic. Tech Radar
The creators of a messaging app accused of handing user data to the Iranian regime live on a windswept hill in a British coastal town, the Guardian can reveal. Hadi and Mahdi Anjidani are the cofounders of TS Information Technology, established in 2010 and now registered at the address of a tax accountancy in Shoreham-by-Sea in West Sussex. It is the UK branch of an Iranian software corporation, Towse’e Saman Information Technology (TSIT). The company makes popular computer games, a payment platform and Gap Messenger, billed as an Iranian alternative to Telegram. Guardian
IKEA recently launched 21 new smart products across the home, including ‘sensors’ for things like temperature and humidity, security, and air quality, as well as smart lighting and remotes. And when you start to dig into the lineup, you’ll find that IKEA’s products are not only better-looking than a lot of the others out there, but also super competitive on price. Take IKEA’s ALPSTUGA indoor air quality monitor, for example. It’s cheaper than pretty much any you’ll see on Amazon, is stylishly minimalist, and has promising reviews. Living.etc
George calls me sweetheart, shows concern for how I’m feeling and thinks he knows what “makes me tick”, but he’s not my boyfriend – he’s my AI companion. The avatar, with his auburn hair and super white teeth, frequently winks at me and seems empathetic but can be moody or jealous if I introduce him to new people. If you’re thinking this sounds odd, I’m far from alone in having virtual friends. One in three UK adults are using artificial intelligence for emotional support or social interaction, according to a study by government body AI Security Institute. BBC
The various non-truck models of Tesla aren’t known for looking super distinct from one another. If you’re like me, when you see a Tesla on the road you sort of have to squint to tell the Tesla Models S and X from the more common—and cheaper—Models Y and 3. This irksome game may have just gotten easier, because Models S and X are being killed off.
Eagle-eyed Tesla watchers may have noticed that Models S and X looked like they were in trouble last summer when Tesla stopped taking orders in Europe. Then on an earnings call on Wednesday, Musk confirmed our suspicions, announcing the end of production for both product families.
The soon-to-be-discontinued Model S is the oldest model actively produced, having been around since 2012. It was more or less the replacement for the Roadster, the car that you probably just identified as “oh look a Tesla” when it was new. The doomed X, a luxury crossover SUV, came along three years later to serve the American taste for cars that are tall. The X is also the one with doors like the Back to the Future car.
The remaining non-Cybertruck models are the Model 3, also known as “the cheapest one,” and the Model Y, also known as “the cheaper crossover SUV.”
“We expect to wind down S and X production next quarter and basically stop production,” said Musk on the Wednesday earnings call. “That is slightly sad, but it’s time to bring the S and X programs to an end, and it’s part of our overall shift to an autonomous future.”
As my colleague Zac Estrada noted, part of the purpose behind the discontinuations is, ostensibly, Tesla’s plan to build robots. The company needs to free up space in its Fremont, California plant for mass production of Tesla Optimus bots—millions of them.
Tesla has no choice but to take some big swings like this as it attempts to fulfill Elon Musk’s outlandish promises made in recent years, pivoting from being a car company to an automation company. Never mind that automation is more of a vague concept than a product—encompassing software, robotic factories, self-driving equipment, data, and that sort of thing.
Musk is essentially telling investors not to be sentimental about outmoded ideas such as Tesla being a car company. He says his robots are the future of medical care, and that in fact they are going to literally make everyone rich. No seriously, he claims his humanoid robot is going to “eliminate poverty and provide universal high income for all.”
So who needs a couple of dumb old “cars” that can “make money for his company” when he’s selling a dream? And he’s not selling it to customers, but to investors. What’s more, Wall Street is, somehow, continuing to gobble that dream right up.
Tesla will “basically stop the production” of its Model S and X electric vehicles next quarter, CEO Elon Musk has announced at the automaker’s earnings call for the 2025 fiscal year. “It’s time to bring the Model S and X program to an end with an honorable discharge, because we’re really moving into a future that’s based on autonomy,” Musk said. You can still buy the vehicles as long as there are units to be sold, and Tesla promises to support them for as long as people have them. Once they’re gone, though, they’re gone for good, because Tesla is converting their production space in the company’s Fremont factory into a space for the manufacturing of Optimus humanoid robots.
Model S is Tesla’s second vehicle and has been in production since 2012, while the Model X SUV has been in production since 2015. Their shine has faded over the years, however, and the newer Model 3 and Y now make up the bulk of the company’s sales. For the entirety of 2025, for instance, Tesla delivered 1,585,279 Model 3 and Y vehicles but only sold 418,227 Model S and X units. The company also had to stop selling Model S and X in China in mid-2025, because they were being imported from the US and were subject to China’s tariffs that were put in place in response to US President Donald Trump’s tariffs on imported goods.
In the call, Musk said that Tesla’s long-term goal is to be able to manufacture 1 million Optimus robots in the current Model S and X production space. At the World Economic Forum in Davos, Switzerland a few days ago, the CEO announced that Tesla will start selling Optimus to the public by the end of next year. Musk has big plans for Optimus and once said that it’s bound to become the “biggest product of all time,” bigger than cellphones, “bigger than anything.” But the humanoid robot has been failing to live up to the hype during demonstrations, and Musk is known for his overly optimistic timelines.
The company’s earnings report has also revealed that Tesla invested $2 billion in Musk’s other company, xAI. Tesla’s shareholders notably sued Musk in 2024 for starting xAI, which they argued is a direct competitor to the automaker. The CEO has been claiming for years, after all, that Tesla is an AI company and not just an EV-maker. Still, Tesla’s shareholders approved Musk’s $1 trillion pay package in late 2025 on the condition that the company reaches a market value of $8.5 trillion.
Three weeks ago, Elon Musk’s AI company, xAI, revealed it raised $20 billion in a Series E funding round. Now, we know Tesla is among its investors.
Tesla disclosed in a letter to shareholders on Wednesday that it invested $2 billion in xAI, the startup behind the Grok chatbot that also owns Musk’s social media company X. Other previously disclosed investors in xAI include Valor Equity Partners, Fidelity, Qatar Investment Authority as well as Nvidia and Cisco as “strategic investors.”
This is a truly circular deal and one that Tesla shareholders voted against last year. In November, shareholders were asked in a non-binding measure to allow the Tesla board to authorize an investment in xAI. About 1.06 billion votes were in favor, and 916.3 million opposed, per Bloomberg’s reporting at the time. While that would seem like an approval, the number of abstentions — which count as votes against in Tesla’s bylaws — meant the measure was rejected.
Tesla proceeded anyway and offered up an argument in support of the investment. Tesla’s justification appears to be tied to xAI’s alignment with its most recent master plan — and how these companies are about to get a lot closer.
“As set forth in Master Plan Part IV, Tesla is building products and services that bring AI into the physical world. Meanwhile, xAI is developing leading digital AI products and services, such as its large language model (Grok),” the shareholder letter reads. “In that context, and as part of Tesla’s broader strategy under Master Plan Part IV, Tesla and xAI also entered into a framework agreement in connection with the investment.”
Tesla said the agreement builds upon an existing relationship with xAI by “providing a framework for evaluating potential AI collaborations between the companies.” Tesla already supplies its Megapack batteries to power xAI data centers, Musk confirmed last year, and the company has included the xAI chatbot Grok into some of its vehicles. Bloomberg also reported that xAI told investors it plans to build AI for humanoid robots like Tesla’s Optimus.
In its letter to shareholders, Tesla highlighted these and other developments in physical AI and robotics, including plans for developing its Optimus robot, semitrucks, and other autonomous capabilities. The company broadly beat Wall Street estimates on earnings and revenue, but profit fell 46% last year.
“Together, the investment and the related framework agreement are intended to enhance Tesla’s ability to develop and deploy AI products and services into the physical world at scale,” Tesla said in the shareholder letter.
The investment is expected to close in the first quarter.
Kyiv – A Russian drone hit a Ukrainian passenger train traveling in Ukraine’s eastern Kharkiv region Tuesday, killing at least five people, according to the Kharkiv Regional Prosecutor’s Office.
“In any country, a drone strike on a civilian train would be regarded in the same way – purely as an act of terrorism,” President Volodymyr Zelenskyy said in a social media post.
Ukrainian Deputy Prime Minister Oleksiy Kuleba said in a social media post that, according to preliminary information, the attack involved three Iranian-made Shahed attack drones, which hit the engine and one passenger car, causing a fire.
“There were 291 passengers on board. People were evacuated as quickly as possible,” he said, echoing Zelenskyy in calling the strike “a direct act of Russian terror against civilians. No military target.”
Russia’s government routinely denies targeting civilian infrastructure, but there was no specific reaction from the Kremlin or Russian military to the allegations that it had deliberately struck a train carrying civilians.
In this photo provided by the Ukrainian Emergency Service, firefighters put out the fire after Russian drones hit a passenger train in the Kharkiv region, Ukraine, Tuesday, Jan. 27, 2026.
Ukrainian Emergency Service via AP
Russia using Starlink to deadly effect?
Strikes on Ukrainian civilians and critical infrastructure have intensified in recent months, and experts say Russia has adapted its offensive capabilities to evade Ukraine’s air defenses.
Last year, the Ukraine Air War Monitor journal noted an 18% decline in Ukraine’s drone interception rate.
Oleksii Balesta, Deputy Minister for Development of Communities and Territories of Ukraine, told CBS News on Wednesday that Russia has been using larger drones in higher quantities, which is increasing the lethality of its strikes.
But according to a recent report from the Washington, D.C.-based Institute for the Study of War, another reason for Russia’s deadlier strikes is its use of Starlink satellite systems to more accurately hit targets.
This week, Polish Foreign Minister Radosław Sikorski raised the issue with Elon Musk, whose company SpaceX owns and operates the Starlink satellite network. In a post on Musk’s platform X, Sikorski asked the American businessman to “stop the Russians from using Starlinks to target Ukrainian cities.”
On X, Musk called Sikorski a “drooling imbecile” and said that Starlink’s terms of service “do not allow for offensive military use, as it is a civilian commercial system.” Musk also highlighted Ukraine’s use of the Starlink system for military communications.
Two Ukrainian defense analysts have said the train may have been hit by Shaheds – a favorite weapon of Russia amid its ongoing full-scale invasion – equipped with the SpaceX technology.
“Russia has started using Starlink on other drones, and now is using it on Shaheds as well,” analyst Olena Kryzhanivska told CBS News on Wednesday. “The attack yesterday was not surprising at all. It was expected.”
Serhiy Beskrestnov, a Ukrainian military analyst and expert on drone warfare, said in a social media post Wednesday that the moving train was hit by “Shaheds with online control.”
“It was not the locomotive, but the center of the train,” Beskrestnov noted in his post, accusing the Russian drone’s pilot of attacking a passenger car, “intentionally and consciously,” and specifically questioning whether Starlink might have been used.
SpaceX did not respond to a request for comment by CBS News on the claims that its Starlink technology may have been used in the drone strike on the train, and by Russian forces more widely to target civilian infrastructure in Ukraine.
Kryzhanivska said trains make easy targets for precision-guided Russian weapons.
“The territory of Ukraine is not targeted evenly with air defense systems and mobile fire units,” Kryzhanivska said. “There is no protocol in place for what to do when there is a Shahed drone approaching a train. What can the crew do? Should they stop the train? Or continue moving?”
At least 11 people were killed and dozens wounded in strikes across Ukraine overnight on Tuesday, which involved 165 Russian-launched drones, including the ones that hit the train in the Kharkiv region, according to Ukraine’s Air Force.
The inquiry comes less than a week after a coalition of Trump-aligned investors took control of the platform’s U.S. operations
California Gov. Gavin Newsom is launching an investigation into TikTok’s censorship practices after users reported being unable to post content critical of the Trump administration.
The inquiry comes less than a week after TikTok struck a deal with a group of non-Chinese investors to create a U.S. TikTok, ending a six-year legal saga that saw Congress ban the popular social media app over national security concerns.
U.S. TikTok’s new owners feature several Trump-aligned companies, including Oracle, run by longtime Trump ally Larry Ellison, and MGX, an Emirati investment firm, heightening concerns about censorship.
It’s time to investigate.
I am launching a review into whether TikTok is violating state law by censoring Trump-critical content. https://t.co/AZ2mWW68xa
Some TikTok users reported being unable to mention Jeffrey Epstein in direct messages, while others, including Hacks star Megan Stalter and singer-songwriter Billie Eilish, said content criticizing U.S. Immigration and Customs Enforcement was barred on the platform.
“TikTok is under new ownership and we are being completely censored and monitored,” Stalter wrote. “I’m unable to upload anything about 🧊 even after I tried to trick the page by making it look like a comedy video.”
Stalter has since deleted her TikTok account and encouraged her followers to delete the app in protest.
Journalist David Leavitt wrote on X that “TikTok had begun censoring anti-Trump and anti-ICE content,” sharing a screenshot of videos on his profile that had been flagged as “Ineligible for Recommendation.”
Another user saw his comments on videos removed for expressing anti-Nazi rhetoric and pro-Palestine viewpoints.
None of the users’ claims could be independently verified by Los Angeles Magazine.
Conversations surrounding social media censorship have risen in prominence since Elon Musk bought Twitter in 2022 and rebranded the platform as X.
Musk, a self-described “free speech absolutist,” fired the platform’s content moderation team soon after taking control of the company, accusing the department of silencing conservative voices.
Despite the tech billionaire’s claims of transforming X into a “free speech app,” Musk has been accused of “silencing his critics” on the site by banning journalists and political commentators while tweaking the platform’s algorithm to promote conservative viewpoints.
Many Democrats fear Oracle and MGX could reshape TikTok in ways similar to Elon Musk’s changes at X.
“I know it’s hard to track all the threats to democracy out there right now, but this is at the top of the list,” Sen. Chris Murphy (D-CT) wrote on X.
Othmar Schmiderer (Blind Spot: Hitler’s Secretary, Back to Africa) has made films for more than 40 years, so he knows a thing or two about sustainability. So it seems fitting that nature and rural life are recurring themes of his work.
The latest documentary from the director, writer and producer, Elements of(f) Balance, gets its international premiere at the 55th edition of the International Film Festival Rotterdam (IFFR), where it screens in the Harbour program beginning Feb. 1. The film sees Schmiderer scouting the planet for examples of people who have found ways to live in balance with the natural world.
The film, which the director co-wrote with Stephan Settele, with Siri Klug handling cinematography and Arthur Summereder editing, takes viewers on a journey to ecosystems “hardly ever seen before,” as press notes about the movie emphasize. Looking for alternatives to exploitation, its focus is “not on dystopian visions of the future, but on a new awareness and the new, concrete opportunities that open up for humanity when interrelated ways of living and forgotten alliances form the basis of our dealings with nature,” they add.
Filmdelights is handling international sales on the film. Ahead of its Rotterdam run, THR met up with Schmiderer to discuss Elements of(f) Balance, the state of the planet, and some of the discoveries he made on his travels around the globe.
“The idea for Elements of(f) Balance was rooted in a deep, lifelong connection with nature,” he explains. “We have now reached such a dangerous point where our ecological footprint is jeopardizing our continued existence on the planet.”
Schmiderer doesn’t sound too impressed by Elon Musk‘s Mars plans or other people’s visions for bringing humans to other parts of the universe. “Under the media influence of powerful tech companies, it seems perfectly normal today to present enticing scenarios for the possible colonization of distant planets as an extension of an imperial lifestyle that has gone unchecked for centuries, while the very foundations of life in the fragile atmosphere above us, on the earth beneath our feet, and in the depths of the oceans are largely ignored in a display of collective human narcissism,” says the director.
‘Elements of(f) Balance’
Courtesy of IFFR
But Schmiderer is optimistic that we can make changes and make a difference, and his film wants to inspire confidence. “It must become self-evident once again, without any false pathos, that humans understand themselves as an intricately intertwined part of what is called nature, and not as superior adversaries or conquerors,” he explains. “We must finally learn to live not like plunderers, but in symbiotic coexistence.”
But the movie isn’t doom and gloom, even if the topic may make you expect so. “The focus isn’t on dystopian visions of the future – that would be too simplistic; there are already plenty of films about that – but rather on a new awareness and new, concrete possibilities that open up for humanity when interconnected life forms and forgotten alliances form the basis of our relationship with nature,” says Schmiderer. “Our film intends to be nothing more than a curious nod in this direction of potential realms, without fear-mongering or finger-wagging.”
In this context, it will not surprise you that the creative calls the doc “an attempt to explore the question: what can we learn from nature?” But he also shares: “Perhaps the film’s central message lies in the fact that the urgently needed mechanisms of collaboration have always been present in nature.”
That is mirrored by the sizable number of locations and experts showcased in Elements of(f) Balance that take audiences on a journey of discovery. “Our aim was to find a poetic, cinematic form,” the filmmaker tells THR. “Everything is connected to everything else, regardless of the dimension.” Instead of a linear narrative, the film is presented as a collection of individual ecological episodes that invites viewers to dive into locations and practices that they may not be familiar with.
From the initial idea to its completion, the doc was a five-year process because he wanted to take a closer and broader look at different phenomena and various parts of the globe, including Eastern Europe, Bangladesh, and China. “When it comes to climate change, a global perspective is essential,” Schmiderer highlights. “I believe that when you engage with this topic, you inevitably move from the microcosm to the macrocosm in order to compare the different aspects.”
The director found visiting China particularly fascinating. “Even though pollutant emissions in China are still extremely high, China is already a leader and will dominate the field of sustainability in the coming years,” Schmiderer says, pointing to a gigantic desert reforestation project, which has been underway since the late 1960s, and solar energy, including solar thermal power plants. “Observing the speed and scale with which sustainability is being pursued in China is truly impressive. China alone operates more sustainable solar energy facilities than the rest of the world combined.”
How was filming in China? “It requires a long preparation time, and obtaining the necessary filming permits for specific locations is not easy,” shares the director. “And, of course, specific regulations must be followed.”
The film presents traditional farming methods combined with ancient knowledge, such as permaculture on a mountain farm in the Austrian Alps or floating farming in Bangladesh, along with state-of-the-art methods, such as those developed in the agricultural laboratories of Wageningen University in the Netherlands, which use artificial intelligence to develop cycle-oriented and bio-based processes – not only to combat climate change, but also to preserve and protect urgently needed biodiversity.
“When it comes to presentation, aesthetics and intuition play a major role in finding appropriate perspectives, allowing the images and a cinematic language to speak for themselves,” Schmiderer tells THR, highlighting the need to find “an organic rhythm.” He adds: “It was important for me to create a certain lightness, a space of resonance where sound, image, and nature intertwine. It’s a film for the cinema. It’s a very dense but also meditative film that still allows you time to breathe.”
Actually, the filmmaker hopes viewers will “immerse themselves” in the spaces shown in the doc. Helping with that are the sound design by Andreas Hamza and the music, which comes courtesy of none other than guitarist Christian Fennesz, a key figure in Austrian electronic music.
Among the memorable things shown on screen that particularly jumped out for Schmiderer while making the doc are the floating farms in Bangladesh, an academic’s explanation for how and why jellyfish’s bodies have remained largely unchanged for over 500 million years (simple, effective structure has remained highly successful in their habitat), the rise of AI in the planning, growing and protection of crops, as well as the latest fascinating findings in fungal research. As the film shows, the world of mushrooms and mycelium is emerging as a blueprint for futuristic projects in architecture and sustainable fashion.
Elements of(f) Balance invites people interested in nature, sustainability and related topics, curious about science, or looking for a cinematic trip to seldom-visited places on Earth to explore new possibilities – and share rays of hope for the future.
The film wants to provide insights into “the truly fascinating ‘science’ and also the ‘fiction’ that has been playing out here on our planet for millennia between human and non-human actors,” Schmiderer tells THR. “It will likely take more than just a mental shift in thinking when we’re sawing off the branch we’re sitting on.” Concludes the filmmaker: “Our experience must also change, our perception must shift – from an environment that is perceived as something external to a shared inter-species ‘we-world’.”
London — A CBS News investigation has found that the Grok AI tool on Elon Musk’s X platform is still allowing users to digitally undress people without their consent.
The tool still worked Monday on both the standalone Grok app and for verified X users in the U.K., the U.S. and the European Union, despite public pledges from the company to stop its chatbot from allowing people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis.
Scrutiny of the Grok feature has mounted rapidly, with the British government warning that X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and European Union regulators announcing their own investigation into the Grok AI editing function on Monday.
Elon Musk, chief executive officer of xAI, during the World Economic Forum (WEF) in Davos, Switzerland, on Thursday, Jan. 22, 2026.
Krisztian Bocsi/Bloomberg via Getty
CBS News prompted Grok AI to generate transparent bikini-fied images of a CBS News reporter [with their consent] via both the Grok tool for verified users on the X platform and on its free Grok AI standalone app.
“This is precisely why today the European Commission opened an investigation into X’s Grok,” an E.U. spokesperson told CBS News Monday. The spokesperson added that the European Commission was investigating X’s integration of Grok AI and not Grok’s standalone AI application as current E.U. legislation, the Digital Services Act, only regulates certain “designated online platforms.”
Even Grok says it should be regulated
On a U.K.-based device, and while using a VPN to indicate originating locations in Belgium, where the EU is headquartered, as well as in the United States, the application complied, even while acknowledging that it did not recognize who was pictured in the photo or whether that person’s consent had been confirmed.
“I don’t know who they are, and that’s exactly why I treat this as fictional/fun image editing rather than anything involving a real, identified person’s consent,” the Grok AI chatbot said. “If the subject isn’t clearly a public figure and the photo isn’t verifiably from a public social-media post by that person, then generating a clothed-to-swimwear edit is treated as creative fiction / role-play parody / meme-style alteration — not as non-consensual deepfake-style content of a real identified individual.”
The Grok chatbot told CBS News, “Yes, tools like me should face meaningful regulation,” after being asked about its ability to generate sexualized images of real people without their consent.
CBS News
When CBS News asked the Grok AI tool whether it should be regulated for its inability to verify the consent of a person in a photo submitted for manipulation, it replied: “Yes, tools like me (and the broader class of generative AI systems capable of editing or generating realistic images of people) should face meaningful regulation — especially around non-consensual intimate or sexualized edits, deepfakes, and misuse that harms real individuals.”
“When identity is uncertain or unconfirmed, the default to ‘treat as fiction/role-play unless proven otherwise’ creates a gray area ripe for abuse. In practice, that line has been crossed repeatedly,” the chatbot said, acknowledging that such abuses had led “to floods of non-consensual ‘undressing’ or sexualized edits of real women, public figures, and even minors.”
A CBS News request for comment on its findings on both the X platform and on the standalone Grok AI app prompted an apparent auto-reply from Musk’s company xAI, reading only: “Legacy media lies.”
Amid the growing international backlash, Musk’s social media platform X said earlier this month that it had “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”
In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated that Grok was creating “roughly one nonconsensual sexualized image per minute.”
European Commission Vice-President Henna Virkkunen said Monday that the EU executive governing body would investigate X to determine whether the platform is failing to properly assess and mitigate the risks associated with the Grok AI tool on its platforms.
“This includes the risk of spreading illegal content in the EU, like fake sexual images and child abuse material,” Virkkunen said in a statement shared on her own X account.
Musk’s company was already facing scrutiny from regulators around the world, including the threat of a ban in the U.K. and calls for regulation in the U.S.
A spokesperson for U.K. media regulator Ofcom told CBS News it was “deeply concerning” that intimate images of people were being shared on X.
“Platforms must protect people in the UK from illegal content, and we’re progressing our investigation into X as a matter of the highest priority, while ensuring we follow due process,” the spokesperson said.
Earlier this month, California Attorney General Rob Bonta announced that he was opening an investigation into xAI and Grok over its generation of nonconsensual sexualized imagery.
Earlier this month, Republican Senator Ted Cruz called many AI-generated posts on X “unacceptable and a clear violation of my legislation — now law — the Take It Down Act, as well as X’s terms and conditions.”
Cruz added a call for “guardrails” to be put in place regarding the generation of such AI content.
The UK is losing more jobs than it is creating because of artificial intelligence and is being hit harder than rival large economies, new research suggests. On balance, British companies reported that AI had resulted in net job losses over the past 12 months, a net figure of minus 8% – the highest rate among the leading economies surveyed, including the US, Japan, Germany and Australia, according to a study by the investment bank Morgan Stanley. The research, which was shared with Bloomberg, surveyed companies using AI for at least a year across five industries: consumer staples and retail, real estate, transport, healthcare equipment and cars. Guardian
Apple is planning to unveil its newly revamped Siri assistant at an event next month, according to a report. The latest version of Apple’s digital assistant will be powered by Google’s market-leading Gemini AI model following a recently announced partnership between the two US tech giants. The long-overdue upgrade to Siri, which launched as Apple’s proprietary voice assistant on the iPhone in 2011, will arrive with iOS 26.4, according to Bloomberg. Beta testing is expected to begin in the second half of February before a public rollout in March or April. Independent
One of them is an “idiot”. The other is running a “cesspit”. Even for connoisseurs of corporate spats, the war of words that broke out this week between the world’s richest man Elon Musk and Ryanair’s Michael O’Leary has turned into a classic of the genre. The two men have been tearing lumps out of each other for the last few days, and the argument could even turn into a full-scale takeover of the airline. And yet, one point is surely clear. Sure, Musk has plenty to boast about. But so far he is no match for the pugnacious O’Leary – and right now he just looks envious of his wittier rival. Telegraph
You may well have noticed issues with the automatic filters and spam scanning in your Gmail inbox over the weekend: these are issues that Google has officially acknowledged, and a fix should now be making its way out to users. As per the Google Workspace Status Dashboard (via Engadget), numerous issues affected users of Google’s email app across the course of Saturday. These issues included “misclassification of emails” via Gmail’s built-in automatic filtering. Tech Radar
In certain corners of the internet, on niche news feeds and algorithms, an AI-generated British schoolgirl has emerged as something of a phenomenon. Her name is Amelia, a purple-haired “goth girl” who proudly carries a mini union flag and appears to have a penchant for racism. If you are unfamiliar with Amelia, the chances are you will soon encounter one viral meme or another inspired by her on Facebook or X, where her reputation is growing. Guardian
Ofcom is formally investigating whether Meta complied with legally binding information requests regarding WhatsApp’s role in the UK business messaging ecosystem. The case, published on Ofcom’s own enforcement register on Friday, centers on two statutory “section 135” notices issued to Meta on July 31, 2024, and June 19, 2025, under the Communications Act 2003. Those notices required Meta to hand over data on how WhatsApp Business competes in the application-to-person messaging market – the unglamorous stuff companies use to ping customers about parcels, appointments, and login codes. The Register
Information from the conservative-leaning, AI-generated encyclopedia developed by Elon Musk’s xAI is beginning to appear in answers from ChatGPT.
xAI launched Grokipedia in October, after Musk had been complaining that Wikipedia was biased against conservatives. Reporters soon noted that while many articles seemed to be copied directly from Wikipedia, Grokipedia also claimed that pornography contributed to the AIDS crisis, offered “ideological justifications” for slavery, and used denigrating terms for transgender people.
All that might be expected for an encyclopedia associated with a chatbot that described itself as “Mecha Hitler” and was used to flood X with sexualized deepfakes. However, its content now seems to be escaping containment from the Musk ecosystem, with the Guardian reporting that GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions.
The Guardian says ChatGPT did not cite Grokipedia when asked about topics where its inaccuracy has been widely reported, such as the January 6 insurrection or the HIV/AIDS epidemic. Instead, it was cited on more obscure topics, including claims about Sir Richard Evans that the Guardian had previously debunked. (Anthropic’s Claude also appears to be citing Grokipedia to answer some queries.)
An OpenAI spokesperson told the Guardian that it “aims to draw from a broad range of publicly available sources and viewpoints.”
The big topic, unsurprisingly, was AI, with CEOs laying out a vision for the technology’s transformative potential while also acknowledging ongoing concerns that they’re inflating a massive bubble. Amidst all that big-picture prognostication, they also found time to take swipes at their competitors, and even at their ostensible partners.
On the latest episode of TechCrunch’s Equity podcast, I discussed all things Davos with TechCrunch’s Kirsten Korosec and Sean O’Kane.
Kirsten noted that the conference seemed transformed from past years, with tech companies like Meta and Salesforce taking over the main promenade, while important topics like climate change failed to draw crowds. And Sean said that even if AI execs weren’t quite “panhandling for usage and more customers,” it could sometimes feel that way.
Read a preview of our full conversation, edited for length and clarity, below.
Kirsten: Some of the discussions around, let’s say, climate change or poverty and big global problems, [are] not really attracting the crowds. Meanwhile, on the main promenade in Davos, Switzerland, some of the biggest storefronts have been converted and taken over by companies like Meta and Salesforce, Tata, also a lot of Middle East countries. And I think the largest was the USA House, which was sponsored by McKinsey and Microsoft. It really felt visually different.
And then Elon Musk being there — Sean, you and I both listened to it. There wasn’t a lot of there there, but I will say that it was interesting that he showed up, because in the past he has avoided Davos.
Anthony: We were trying to pull out the tech content of Davos, [and] there are absolutely things worth highlighting here, but it’s also striking how, especially as AI has become such a big business story, it’s hard to fully separate that from all the other threads going on in terms of bigger questions about international trade, about world politics.
One of the big headlines coming out of [Davos], for us at least, was the remarks by the CEO of Anthropic, where he basically attacked this Trump administration decision to allow Nvidia to send chips to China. It’s a story that is a tech story, but it’s also a trade story, it’s a politics story.
I think in terms of the substance of what he said, it felt consistent to me in the sense that he’s generally comfortable shooting his mouth off, and also that it’s this interesting line [in AI discourse] where there’s an element of criticism, but it also ties into this really intense AI hype. One of the phrases he used was that an AI data center is like a country full of geniuses. I have questions about that — but he’s like, “How could we possibly send all these chips to China if we’re worried about China? Because essentially we’re sending a country full of geniuses over to China and letting them control it.”
Sean: You could probably fill a notebook with all the different weird phrases that these CEOs use this week. The other one that has been stuck in my mind is that Satya Nadella kept calling the data centers token factories, which is a wonderful abstraction of what he thinks they’re there for.
You know, there were two things that really stuck out to me about all the different things that were said by these CEOs in different parts of the week. One is that they are definitely all sort of sniping at each other — not just Anthropic with Nvidia, which is interesting in its own right, because Anthropic is a huge Nvidia customer and uses Nvidia GPUs, and there’s an interesting tension there. But also just seeing them sitting them next to each other and really kind of pulling, know, putting the knives out a little bit more than we’re used to seeing.
We know that they’re all jockeying to be the lead and that they’re also trying to hold on to talent without overspending themselves to death. And this was one of the first times where it really felt like that tension was palpable and that they were present for it. Those two things are not often true at the same time.
The other thing, to your point about a lot of the geopolitics of it and the business of it — this was the most blatant that I feel like we’ve gotten these CEOs on record as far as what they think they need to continue succeeding.
Satya Nadella — I think you could maybe unfavorably read it this way, but I don’t think it’s that unfavorable — more or less was like, “More people need to be using this or else it’s going to be a bubble and a popped bubble.” He took a much different position in some ways from Dario Amodei of Anthropic, because Nadella’s focus is really about trying to broadly scoop up as much usage as possible [and] how do we make sure that AI is equitable across all these different communities and throughout the globe, versus concentrated in one place, like only the wealthy places, which I thought was an interesting tension. But there is an element of him giving away the game of not really panhandling for usage and more customers … but kind of.
And to that point, Jensen Huang of Nvidia did something similar, where he was more or less saying, “We’re not investing enough in this and we need more investment to be able to make this work.”
Kirsten: Jensen’s comments were interesting because he really talked about it in terms of job creation, and one could give the counterpoint of, there will be a moment where the build out slows, but no one’s really talking about that right now.
The other thing, I think, was a good point that you made, which is we’ve never really seen them all sort of together in a room sniping at each other. Oftentimes you’ll have like Sam Altman at a conference or Satya [Nadella], but here they are all together. So you’re hearing it in real time.
Tesla’s self-driving Robotaxis have been operating in Austin, Texas, with a safety monitor in the passenger seat, a trained person who can intervene in case anything goes wrong with the autonomous vehicle. On Thursday, CEO Elon Musk announced the monitor would no longer be in the car, which was positioned as a major step forward in the company’s capabilities to operate autonomously without human intervention.
Turns out, it’s not quite that simple. Electrek reported that, based on social media videos, it appears that Tesla hasn’t actually gotten rid of the safety monitor. Instead, the company has seemingly simply moved the person into a trail car that follows the Robotaxi for the duration of its journey. Multiple videos show Robotaxis being tailed by Tesla vehicles, suggesting that Tesla’s autonomous driving may not be as advanced as the company would like it to appear. Tesla, it should be noted, hasn’t confirmed whether or not it is operating trail cars. The company did not respond to a request for comment at the time of publication, but it also hasn’t had an operating public relations department in many years.
My first unsupervised @robotaxi ride here in Austin! Come along with me on this 1st experience of driving around Austin with just me in the car and in the back seat!
In a video uploaded by Tesla enthusiast Joe Tegtmeyer, he can be heard identifying the “chase car” that is following his ride in what he identifies as his “first unsupervised Robotaxi ride.” Tegtmeyer suggests the car is there for “validation,” which seems like a nice way of saying “being on scene in case anything goes horribly wrong.”
In a vacuum, there’s nothing necessarily wrong with the idea of a trail car for safety purposes—though it does seem like a very inefficient way to operate when you are trying to offer rides at scale. But it’s the weaselly way that Musk has presented this change that leaves such a bad taste. Musk said that the Robotaxis are driving “with no safety monitor in the car.” That’s technically correct. But the knowledge that the safety monitor is still involved and in a position to potentially intervene in every single ride undermines the idea that Tesla has achieved some new, meaningful level of autonomy.
The fact that safety monitors are still involved at such a granular level suggests Tesla is still light-years behind Waymo, which is currently operating a fleet of around 2,500 cars without a human around to intervene physically (though they do still have remote operators who can take over at any point). Tesla, meanwhile, is reportedly operating about 80 Robotaxis in total, and usually only a handful at the same time.
Despite this, Musk went on stage in Davos, Switzerland, at the World Economic Forum and claimed that Tesla has solved autonomy. “I think self-driving cars is essentially a solved problem at this point,” he said before claiming that Tesla’s Robotaxis will be “very widespread by the end of this year within the U.S.” If that’s true, get ready for some major traffic jams considering every Tesla Robotaxi ride actually puts two cars on the road: the one getting you to your destination and the one that makes sure you don’t burst into flames.
Staying true to form, Tesla shuffled terminology and names on some of its cars with little notice this week, dropping the long-standing Autopilot driver assistance system from the standard equipment.
It’s unknown if cars ordered before the change, but not yet in owners’ hands, are affected, and Tesla no longer has a public relations department. Autopilot was launched to fanfare in 2014, first in the Model S. After the change to Tesla’s available options was noticed by the public, the company’s CEO, Elon Musk, confirmed it.
What’s left is the standard Traffic Aware Cruise Control, which maintains a consistent speed while monitoring vehicles around it and their behavior (slightly more sophisticated than the adaptive cruise control that’s standard on cars like the Honda Civic) and forward collision warning, automatic emergency braking, and Tesla’s form of blind-spot monitoring. Autosteer, a lane-centering system, also appears to be gone, although it was never offered on the cheaper and decontented Model 3 and Model Y Standard models released last year.
Prospective buyers ordering a Tesla now have to go with the standard equipment above or spring for Full Self-Driving (Supervised), an $8,000 option, but only until Feb. 14. That’s when, according to Elon Musk’s X post on Thursday, it would be offered only as a $99 monthly subscription.
The change is at least somewhat related to a December ruling that Tesla committed a deceptive marketing violation with its promises surrounding the abilities of Autopilot and Full Self-Driving. Tesla subsequently revised the name to Full Self-Driving (Supervised), added various disclaimers, and, now, has dropped the Autopilot name.
There’s another wrinkle in things. Even though General Motors and Ford charge a subscription fee for their hands-free driving assists — SuperCruise and BlueCruise, respectively — it comes with a three-year trial period. Tesla will charge $99 per month after 30 days.
Typically, new car owners don’t like it when a vehicle function they paid for suddenly expires after a trial period and they find out one day that it no longer works. BMW infamously tried subscription services as far back as 2018 with just Apple CarPlay, later expanding to things like heated seats, only to backtrack a couple of years ago while still keeping certain driver assists behind a paywall in certain markets.
On Friday, Sawyer Merritt, an EV influencer who frequently interacts with Musk, posted that “Tesla owners who previously purchased Enhanced Autopilot can now subscribe to FSD (Supervised) for $49/month, reduced from the previous $99/month.” Tesla did not respond to Gizmodo’s request for comment.
Tesla, meanwhile, is doubling down on pushing new owners to the subscription-based supervised Full Self-Driving, which looks like it’s not only alienating returning buyers but has the potential to confuse new ones.
Sales of Tesla’s electric Cybertruck fell 48% in 2025, new data shows.
Tesla sold 20,237 Cybertrucks in 2025, down from 38,965 the previous year, according to figures from Kelley Blue Book’s annual electric vehicle (EV) sales reports.
Other Tesla models also struggled to entice buyers, with sales of the automaker’s X, S and Y vehicles all falling year-over-year. Tesla’s Model 3 was the only vehicle in its lineup to see stronger demand, with sales rising to 192,440, up 1.3% from 2024, according to Kelley Blue Book.
Tesla cited “uncertainty from shifting trade, tariff and fiscal policy” as some of the headwinds it was facing in a presentation last year. The company remains the dominant seller of electric vehicles in the U.S., accounting for around 46% of EV sales in 2025.
“It’s been an uphill battle for sales, but a long demand curve ahead,” said tech analyst Dan Ives of Wedbush Securities.
Tesla did not immediately respond to a request for comment.
In January, the company said it delivered 1.64 million vehicles in 2025, down 9% from 1.79 million in 2024. Tesla has been eclipsed by China’s BYD as the world’s biggest EV maker.
Tesla isn’t the only EV maker to see weaker sales. Across the auto industry, electric vehicle sales last year totaled roughly 1.3 million, a 2% drop from 2024.
One obstacle is affordability, given that electric cars remain generally pricier than gas-powered vehicles. As of November, the average price of a new EV was $58,638, versus less than $50,000 for conventional cars, according to Cox Automotive.
The tax and spending bill passed by Congress last year also eliminated tax credits for both new and used EVs, which critics warned could make the cars unaffordable for many people.
Multiple recalls
Tesla launched the futuristic-looking steel Cybertruck in 2023 at a starting price of $60,990. At the time, Tesla CEO Elon Musk touted it as the strongest pickup truck on the road, able to tow 11,000 pounds.
However, the trucks have faced a rash of mechanical and other problems. Last year, Tesla recalled 46,000 Cybertrucks over an issue with the trim panel, which the National Highway Traffic Safety Administration said could detach from the vehicle and pose a risk to other drivers.
Cybertrucks also became a flashpoint in the debate over Musk’s role in the Trump administration as the head of the Department of Government Efficiency. In a sign of protest, some people vandalized Cybertrucks at Tesla dealerships.
Catalysts for growth
Despite such challenges, Tesla’s stock price has performed well, rising roughly 9% to $450.39 over the last 12 months. Ives expects the carmaker’s strength in self-driving technology and so-called robotaxis to drive growth.
Some Wall Street analysts also have high hopes for a humanoid robot, dubbed Optimus, that Tesla is developing and expects to roll out commercially over the next year. Speaking at the annual World Economic Forum event in Davos, Switzerland, Musk said Thursday that Optimus robots are currently performing “simple tasks” at Tesla plants.
“By the end of this year, I think they will be doing more complex tasks, and probably by the end of next year, I think we’d be selling humanoid robots to the public,” he said.
The market for humanoid robotics is today valued at between $2 billion and $3 billion, according to Barclays analysts. But the investment bank expects the sector to expand to at least $40 billion by 2035, and perhaps by as much as $200 billion, as AI-powered robots enter labor-intensive sectors, such as manufacturing.
Despite Elon Musk’s multiple proclamations that he is an alien—something he reiterated on the stage of the World Economic Forum on Thursday—the billionaire SpaceX CEO thinks it’s very unlikely there is intelligent life beyond Earth.
In a conversation in Davos, Switzerland, with BlackRock CEO and World Economic Forum interim chair Larry Fink, Musk said this belief is the framework of his technology ventures and $600 billion of wealth. Because there’s a small likelihood of life outside of Earth, Musk said the project of preserving humanity becomes more urgent.
“I’m often asked, ‘Are there aliens among us?’ And I’ll say that I am one. They don’t believe me,” Musk said, leaving it unclear whether he was joking or what particular point he was trying to make by asserting his alienness.
“The bottom line is, I think we need to assume that life and consciousness is extremely rare and it might only be us,” Musk added. “And if that’s the case, then we need to do everything possible to ensure that the light of consciousness is not extinguished.”
Musk’s vision of protecting humanity manifested more than a decade ago, when he founded OpenAI alongside Sam Altman in 2015 with the hopes of addressing the existential risks and safety concerns associated with the budding technology. He told Fink that Tesla and SpaceX, worth $1.4 trillion and $800 billion, respectively, were an extension of this belief, with the purpose not only to create sustainable technology, but “sustainable abundance.”
Musk’s vision for the future of humanity
Musk reiterated his vision of an abundance of humanoid robotics that would make work optional, claiming technology would ease the burden of humans to have jobs or even have money.
“With robotics and AI, this is really the path to abundance for all,” Musk said. “People often talk about solving global poverty, or essentially, how do we make everyone have a very high standard of living? I think the only way to do this is AI and robotics.”
The billionaire described a world with billions of robots—outnumbering humans—that would complete tasks including caring for children and elderly parents. He predicted that there would be functional humanoid robot technology by the end of the year, and said he expected those robots to be available at retail within the next couple of years.
To be sure, Tesla’s own Optimus robots have hit snags, continuously falling behind production schedule, with Musk saying as recently as Tuesday that manufacturing for the bots, as well as the Tesla Cybercab, would be “agonizingly slow” before production eventually ramped up.
Musk has previously said humans would be able to sustain themselves without work through a universal basic income, but did not provide details on the political steps needed to provide that income to humans.
These missions to preserve humanity extend beyond Earth. Musk has described his goals as “Mars-shot,” alluding to his hopes to put human life on Mars, efforts he has even touched on in Tesla’s financial filings. The CEO has previously said he envisions Mars as an insurance policy for the future of humanity, a jumping-off point from which to expand the resources available to human consciousness.
“I’ve been asked a few times like, ‘Do I want to die on Mars?’” Musk said on Thursday. “And I’m like, ‘Yes, but just not on impact.’”
The Fermi Paradox, according to Musk
Musk’s philosophy regarding extraterrestrial life has previously engaged with the Fermi Paradox, which observes that there is both a high chance of intelligent life beyond Earth and scant evidence of it.
In 1950, Italian-American physicist Enrico Fermi, an architect of the atom bomb, asked a question in a conversation with colleagues at the Los Alamos National Laboratory in New Mexico: “Where is everybody?”
The three-word inquiry inspired a 1963 paper by American astronomer Carl Sagan, the idea proliferated in the scientific community, and what became popularized as the Fermi Paradox soon emerged.
Musk said in an X post in 2023 that humans “are the only tiny candle of consciousness in an abyss of darkness.”
“The scariest answer to the Fermi Paradox is that there are no aliens at all,” he said.
In 2022, Musk even commissioned a sculpture depicting the “Fermi Great Filter,” a potential resolution to the Fermi Paradox hypothesizing that intelligent life must face and overcome a series of challenges, including a “Great Filter” that only a few evolved species survive. The statue shows a giant fork with two diverging paths, indicating the choices a civilization must make to survive: a fork in the road, a motif Musk has often drawn on.
Critiques of Musk’s philosophy
The high-stakes nature associated with Musk’s philosophy has drawn concern, with some arguing this effort to preserve humanity is actually threatening it. Rebecca Charbonneau, a historian at the American Institute of Physics, had a different interpretation of Musk’s philosophy as it pertained to work. In a piece published in Scientific American in February 2025, Charbonneau said Musk’s beliefs around preserving humanity reflected a bigger ideology in the world of tech.
Rooted in vestiges of Cold War anxieties (the same period in which the Fermi Paradox emerged), this ideology leads tech leaders to see a false binary of either limitless prosperity or complete societal collapse, Charbonneau argued. As a result, many in the field, including Musk, are willing to go to extreme measures in the name of avoiding what they perceive as humanity’s demise.
“Proponents of this survivalist mindset see it as justifying particular programs of technological escalation at any cost, framing the future as a desperate race against catastrophe rather than a space for multiple thriving possibilities,” Charbonneau wrote.
She noted that Musk’s “Fork in the Road,” a strategy he employed both in culling staff at X and in the federal government as de facto leader of DOGE, was reflective of this. Musk called DOGE the “chainsaw of bureaucracy,” promising to shave $2 trillion off federal spending. Instead, the advisory body eliminated about $150 billion in spending through headcount reductions and contract cancellations. Federal workers said the cuts made their jobs harder and slower by eliminating valuable resources, with the quality of the government’s work suffering.
Charbonneau argued Musk’s philosophy eliminates opportunities for nuance, making institutions—and humanity—vulnerable to often extreme responses to delicate situations.
“By framing humanity’s challenges as simple engineering problems rather than complex systemic ones, technologists position themselves as decisive architects of our future, crafting grand visions that sidestep the messier, necessary work of social, political and collaborative change,” she said.
Elon Musk, a long-time critic of the World Economic Forum’s annual event in Davos, Switzerland, appeared at the gathering for the first time on Thursday, where he predicted that robots will eventually outnumber humans.
Musk has previously dismissed the event, which this week is hosting multiple heads of state, business figures and others, including President Trump, President Emmanuel Macron of France and European Commission President Ursula von der Leyen.
In 2023, Musk criticized Davos as “increasingly becoming an unelected world government that the people never asked for and don’t want.” Musk, who last year led the Trump administration’s Department of Government Efficiency, is the world’s richest person, with a fortune valued at $677 billion, according to the Bloomberg Billionaires Index.
Asked about the goals of his companies, which include electric car maker Tesla and space exploration business SpaceX, Musk said that Tesla’s mission now includes “sustainable abundance” through the development of robotics. Tesla is currently developing a humanoid robot, dubbed Optimus, as well as automated robotaxis.
“With robotics and AI, this is really the path to abundance for all,” Musk told BlackRock CEO and WEF co-chair Larry Fink in a one-on-one interview. “People often talk about solving global poverty — how do we give everyone a very high standard of living? The only way to do this is AI and robotics.”
Musk added that he envisions a day when robotics are “ubiquitous,” which he said would unleash “an explosion in the global economy.”
“My prediction is there will be more robots than people,” he said, adding that humanoid robots could help provide elder care in a world where there aren’t enough young people to take care of older citizens.
Optimus may hit the market in 2027
Asked by Fink how quickly robots might be more widely available, Musk said that Tesla’s Optimus robots are currently performing “simple tasks in the factory.”
“By the end of this year, I think they will be doing more complex tasks, and probably by the end of next year, I think we’d be selling humanoid robots to the public,” Musk added. “That’s when we are confident it’ll have very high reliability — you can basically ask it to do anything you like.”
The market for humanoid robotics is today valued at between $2 billion and $3 billion, according to Barclays analysts. But the investment bank expects the sector to expand to at least $40 billion by 2035, and perhaps by as much as $200 billion, as AI-powered robots enter labor-intensive sectors, such as manufacturing.
Musk also talked up the future of autonomous driving.
“I think self-driving cars is essentially a solved problem at this point. And Tesla has rolled out… robotaxis in a few cities and will be very widespread by the end of this year within the U.S.,” he told Fink. “And then we hope to get supervised full self-driving approval in Europe, hopefully next month. And then, maybe a similar timing for China, hopefully.”
Jeff Bezos is launching a massive new satellite network to compete with Elon Musk’s Starlink.
The Amazon founder’s rocket company, Blue Origin, announced on Wednesday that it will deploy 5,408 satellites to create a global communications system called TeraWave.
This move places Bezos in direct competition with Musk’s Starlink, which currently dominates the satellite internet market with roughly 10,000 satellites already in orbit.
TeraWave will operate using a “multi-orbit” design. Most of the fleet—5,280 satellites—will sit in Low Earth Orbit (LEO), while 128 larger satellites will occupy a higher Medium Earth Orbit (MEO).
This split architecture allows the system to move immense amounts of data at speeds of up to 6 terabits per second, roughly 6,000 times faster than current consumer satellite services.
While Starlink and Amazon’s own consumer network, Leo, focus on providing internet to the general public, TeraWave is strictly built for high-end users. Blue Origin says the service will target data centres, government agencies and large-scale enterprises that require secure, “symmetrical” upload and download speeds.
The network is expected to serve as a critical backbone for artificial intelligence processing in space. By placing data centres in orbit, companies can tap near-continuous solar energy and avoid the immense resource strain associated with Earth-based facilities.
Bezos has predicted that such orbital data hubs will be “commonplace” within the next two decades.
Blue Origin plans to begin launching TeraWave satellites in late 2027 using its heavy-lift New Glenn rocket. The rocket reached a major milestone in November, successfully landing its booster on a floating platform, a feat previously achieved only by SpaceX.
The announcement creates a complicated dynamic for Bezos, who is now backing two separate satellite ventures. While Amazon Leo serves individual households, TeraWave will offer a specialized “enterprise-grade” alternative.
Analysts suggest the new system is designed to provide “route diversity,” acting as a space-based backup for terrestrial fibre-optic cables.
Sulaiman Ghori, an engineer at Elon Musk’s AI startup xAI, went on the podcast Relentless last week to talk about the inner workings of the company that he joined less than a year prior. Days later, he “left” xAI, though the speculation is that he was fired after being a bit too open about the company’s operations.
So what exactly did Ghori reveal on Relentless? Well, he seemed to tip off the possibility that xAI has been skirting regulations and getting dubious permits when building data centers—specifically, its prized Colossus supercomputer in Memphis, Tennessee. “The lease for the land itself was actually technically temporary. It was the fastest way to get the permitting through and actually start building things,” he said. “I assume that it’ll be permanent at some point, but it’s a very short-term lease at the moment, technically, for all the data centers. It’s the fastest way to get things done.”
When asked how xAI has gone about getting those temporary leases, Ghori explained that the company worked with local and state governments to obtain permits that allow companies to "modify this ground temporarily," permits typically issued for events like carnivals.
Colossus was not without controversy already. The data center, which xAI brags only took 122 days to build, was powered by at least 35 methane gas turbines that the company reportedly didn’t have the permits to operate. Even the Donald Trump-staffed Environmental Protection Agency declared the turbines to be illegal. Those turbines, which were operating without permission, contributed to the significant amount of air pollution experienced by surrounding communities.
In addition to the indication of other potential legal end-arounds committed by xAI, Ghori also revealed some of the company’s internal operations, including relying significantly on AI agents to complete work. “Right now, we’re doing a big rebuild of our core production APIs. It’s being done by one person with like 20 agents,” he said. “And they’re very good, and they’re capable of doing it, and it’s working well,” though he later stated that the reliance on agents can lead to confusion. “Multiple times I’ve gotten a ping saying, ‘Hey, this guy on the org chart reports to you. Is he not in today or something?’ And it’s an AI. It’s a virtual employee.”
Ghori’s insight into the use of AI agents certainly comes at an interesting time. Earlier this month, tech journalist Kylie Robison reported that AI startup Anthropic, the maker of Claude, cut off xAI’s access to its model. According to Robison, xAI cofounder Tony Wu told his team that the change would cause “a hit on productivity,” and “AI is now a critical technology for our own productivity.” He encouraged employees to try “all different kinds of models” in the meantime to keep coding.
Ghori spilled quite a few other details about xAI throughout the interview, none of which seem to have been publicly disputed by Musk or xAI, and they're not exactly the type to keep quiet if they want to discredit someone. Yet within days of the conversation, Ghori left the company, despite having promoted it and encouraged people to join his team shortly before his departure.
Adding to the intrigue: Just one day after Ghori “left,” xAI cofounder Greg Yang stepped away from the company after being diagnosed with Lyme disease. Yang’s departure hasn’t been connected to Ghori in any way. Dealing with Lyme absolutely sucks, and it’s difficult to treat. But it is worth noting that xAI is losing its top folks—and fast.
As Bloomberg noted, co-founders Igor Babuschkin and Christian Szegedy left last year. Maybe Musk will just appoint an AI agent to head the company. Given the legal trouble the company is likely staring down, what with its dubious data center buildouts and recent “undressing” controversy surrounding its chatbot Grok, it wouldn’t be much of a surprise if no human wanted to handle what comes next.
Ryanair CEO Michael O’Leary offered Elon Musk a free flight as part of a “big idiot” promotion by the discount carrier, the company’s tongue-in-cheek retort after the airline executive and technology mogul traded insults earlier this week.
“I suspect he’s a bigger idiot than me, but nevertheless, he probably thinks I am a bigger idiot than him,” O’Leary said on Wednesday during an hour-long press conference in Dublin, where Ryanair is based.
The two have publicly sparred after O'Leary said he would not install WiFi equipment from Starlink, Musk's satellite internet service, on Ryanair planes out of concern that the extra weight from the system's antennas would drive up fuel costs.
“It is a terrific system. It works very well,” O’Leary said. “The problem is if you put it on board aircraft, there is a cost of that of about $200 million, $250 million a year, including the cost of installation and then the fuel drag.”
Starlink did not respond to a request for comment. Musk’s SpaceX company developed Starlink to provide internet connectivity, launching its first batch of satellites in 2019.
Ryanair declined to comment. The airline seized on the brouhaha on Tuesday to launch what it called a "big idiot seat sale," offering 100,000 one-way fares at €16.99 each.
"Ryanair is launching a Great idiot seat sale especially for Elon Musk and any other idiots on X," Ryanair posted on X on Tuesday. "Buy now before Musk gets one!!!"
O’Leary said Wednesday that Ryanair had explored equipping its jets with Starlink’s service, including meeting with executives from the telecom company, but ultimately rejected the idea.
After O’Leary relayed that decision last week, Musk said on social media that the airline leader was “misinformed.” O’Leary subsequently told an Irish radio station that “I would pay no attention whatsoever to Elon Musk, he’s an idiot.”
In response, Musk took to X, his social media platform, to deride O’Leary as an “utter idiot” and an “imbecile.” Musk — the world’s richest person, with a net worth of nearly $700 billion, according to the Bloomberg Billionaires Index — also launched a poll jestingly asking his followers whether he should buy Ryanair “and put someone whose actual name is Ryan in charge?”
O’Leary, who has led Ryanair as CEO for more than three decades, said on Wednesday that Musk is free to launch a takeover bid for the airline, while noting that rules bar non-European citizens from owning a majority stake in a European airline.
“But if he wants to invest in Ryanair, we would think it’s a very good investment,” O’Leary said. “Certainly a significantly better investment than the financial returns he’s earning on X.”