Sulaiman Ghori, an engineer at Elon Musk’s AI startup xAI, went on the podcast Relentless last week to talk about the inner workings of the company that he joined less than a year prior. Days later, he “left” xAI, though the speculation is that he was fired after being a bit too open about the company’s operations.
So what exactly did Ghori reveal on Relentless? Well, he suggested that xAI may have been skirting regulations and obtaining dubious permits when building its data centers, specifically its prized Colossus supercomputer in Memphis, Tennessee. “The lease for the land itself was actually technically temporary. It was the fastest way to get the permitting through and actually start building things,” he said. “I assume that it’ll be permanent at some point, but it’s a very short-term lease at the moment, technically, for all the data centers. It’s the fastest way to get things done.”
When asked how xAI has gone about getting those temporary leases, Ghori explained that the company worked with local and state governments to secure permits that allow companies to “modify this ground temporarily,” permits he said are typically used for things like carnivals.
Colossus was already mired in controversy. The data center, which xAI brags took only 122 days to build, was powered by at least 35 methane gas turbines that the company reportedly lacked permits to operate. Even the Donald Trump-staffed Environmental Protection Agency declared the turbines illegal, and their unpermitted operation contributed to significant air pollution in surrounding communities.
Beyond hinting at these potential legal end-arounds, Ghori also revealed some of the company’s internal operations, including its heavy reliance on AI agents to complete work. “Right now, we’re doing a big rebuild of our core production APIs. It’s being done by one person with like 20 agents,” he said. “And they’re very good, and they’re capable of doing it, and it’s working well.” Still, he noted that the reliance on agents can lead to confusion: “Multiple times I’ve gotten a ping saying, ‘Hey, this guy on the org chart reports to you. Is he not in today or something?’ And it’s an AI. It’s a virtual employee.”
Ghori’s insight into the use of AI agents certainly comes at an interesting time. Earlier this month, tech journalist Kylie Robison reported that AI startup Anthropic, the maker of Claude, cut off xAI’s access to its model. According to Robison, xAI cofounder Tony Wu told his team that the change would cause “a hit on productivity,” adding that “AI is now a critical technology for our own productivity.” He encouraged employees to try “all different kinds of models” in the meantime to keep coding.
Ghori spilled quite a few other details about xAI throughout the interview, none of which seem to have been publicly disputed by Musk or xAI, and they’re not exactly the type to keep quiet if they want to discredit someone. But within days of the conversation, Ghori left the company, despite having been promoting his team and encouraging people to join it just days before his departure.
Adding to the intrigue: Just one day after Ghori “left,” xAI cofounder Greg Yang stepped away from the company after being diagnosed with Lyme disease. Yang’s departure hasn’t been connected to Ghori in any way. Dealing with Lyme absolutely sucks, and it’s difficult to treat. But it is worth noting that xAI is losing its top folks—and fast.
As Bloomberg noted, cofounders Igor Babuschkin and Christian Szegedy left last year. Maybe Musk will just appoint an AI agent to head the company. Given the legal trouble the company is likely staring down, what with its dubious data center buildouts and recent “undressing” controversy surrounding its chatbot Grok, it wouldn’t be much of a surprise if no human wanted to handle what comes next.
AJ Dellinger