This week we talk about OpenAI, nudify apps, and CSAM.
We also discuss Elon Musk, SpaceX, and humanistic technology.
Recommended Book: Who’s Afraid of Gender? by Judith Butler
Transcript
xAI is an American corporation that was founded in mid-2023 by Elon Musk, ostensibly in response to several things happening in the world and in the technology industry in particular.
According to Musk, a “politically correct” artificial intelligence, especially a truly powerful, even generally intelligent one, which would be human or super-human-scale capable, would be dangerous, leading to systems like HAL 9000 from 2001: A Space Odyssey. He intended, in contrast, to create what he called a “maximally truth-seeking” AI that would be better at everything, including math and reasoning, than existing, competing models from the likes of OpenAI, Google, and Anthropic.
The development of xAI was also seemingly a response to the direction OpenAI had taken. OpenAI was originally founded in 2015 as a non-profit by many of the people who now run it and its competitors, and current OpenAI CEO Sam Altman and Elon Musk served as the non-profit's co-chairs.
Back then, Musk and Altman both said that their AI priorities revolved around the many safety issues associated with artificial general intelligence, including potentially existential ones. They wanted the development of AI to take a humanistic trajectory, and were keen to ensure that these systems aren’t hoarded by just a few elites and don’t make the continued development and existence of human civilization impossible.
Many of those highfalutin ambitions seemed to either be backburnered or removed from OpenAI’s guiding tenets wholesale when the company experienced surprising success from its first publicly deployed ChatGPT model back in late-2022.
That was the moment that most people first experienced large-language model-based AI tools, and it completely upended the tech industry in relatively short order. OpenAI had already begun shifting from a vanilla non-profit into a capped-profit company in 2019, which limited investor returns to 100 times any investment it received, partly to attract talent that would otherwise be unlikely to leave comparably cushy jobs at the likes of Google and Facebook for the compensation a non-profit could offer.
OpenAI began partnering with Microsoft that same year, 2019, and that seemed to set them up for the staggering growth they experienced post-ChatGPT release.
Part of Musk’s stated rationale for investing so heavily in xAI is that he provided tens of millions of dollars in seed funding to the still-non-profit OpenAI between 2015 and 2018. He filed lawsuits against the company after its transition and subsequent post-ChatGPT success, especially between 2024 and 2026, demanding more than $100 billion in compensation for that early investment. He also attempted to take over OpenAI in early 2025, launching a hostile bid with other investors to nab the company for just under $100 billion. xAI, in other words, is meant to counter OpenAI and what it’s become.
All of which could be seen as a genuine desire to keep OpenAI functioning as a non-profit arbiter of AGI development, serving as a lab and thinktank that would develop the guardrails necessary to keep these increasingly powerful and ubiquitous tools under control and working for the benefit of humanity, rather than against it.
What’s happened since, within Musk’s own companies, would seem to call that assertion into question, though. And that’s what I’d like to talk about today: xAI, its chatbot Grok, and a tidal wave of abusive content it has created that’s led to lawsuits and bans from government entities around the world.
—
In November of 2023, Musk’s company xAI launched an LLM-based chatbot called Grok, which is comparable in many ways to OpenAI’s LLM-based chatbot, ChatGPT.
Similar to ChatGPT, Grok is accessible via apps on Apple and Android devices, and can also be accessed on the web. Part of what makes it distinct, though, is that it’s also built into X, the social network formerly called Twitter, which Musk purchased in late-2022. On X, Grok operates much like a normal account, but one that other users can interact with: asking Grok about the legitimacy of things posted on the service, asking it normal chat-botty questions, and asking it to produce AI-generated media.
Grok’s specific stances and biases have varied quite a lot since it was released, and in many cases it has defaulted to the data- and fact-based leanings of other chatbots: it will generally tell you what the Mayo Clinic and other authorities say about vaccines and diseases, for instance, and will generally reference well-regarded news entities like the Associated Press when asked about international military conflicts.
Musk’s increasingly strong political stances, which have trended more and more far right over the past decade, have come to influence many of Grok’s responses, however, at times causing it to go full Nazi, calling itself Mechahitler and saying all the horrible and offensive things you would expect a proud Nazi to say. At other times it has clearly been programmed to celebrate Elon Musk whenever possible, and in still others it has become immensely conspiratorial or anti-liberal or anti-other group of people.
The bot’s conflicting personality types seem to be the result of Musk wanting a maximally truth-seeking AI, but then not liking the data- and fact-based truths it provided, as they often conflicted with his own opinions and biases. He would then tell the programmers to force Grok not to care about antisemitism or skin color or whatever else, and it would overcorrect in the opposite direction, leading to several news cycles’ worth of scandal.
This changes week by week and sometimes day by day, but Grok often calls out Musk as being authoritarian, a conspiracy theorist, and even a pedophile, and that has placed the Grok chatbot in an unusual space amongst similar chatbots: sometimes serving as a useful check on misinformation and disinformation on the X social network, and sometimes becoming the most prominent producer of the same.
Musk has also pushed for xAI to produce countervailing sources of truth from which Grok can draw seeming data, the most prominent of which is Grokipedia. Musk intended it to be a less-woke version of Wikipedia, which, perhaps expectedly, means it’s a far-right rip-off of Wikipedia that copies most articles verbatim, but then changes anything Musk doesn’t like, including anything that might support liberal political arguments, vaccines, or trans people. In contrast, pseudoscience and scientific racism get a lot of positive coverage, as does the white genocide conspiracy theory, all of it backed by either highly biased or completely made-up sources that, in both cases, Wikipedia editors would not accept.
Given all that, what’s happened over the past few months maybe isn’t that surprising.
In late 2025 and early 2026, it was announced that Grok had gained some new image-related features, including the ability for users to request that it modify images. Among other issues, this new tool allowed users to instruct Grok to place people, which in practice especially meant women and children, in bikinis and in sexually explicit positions and scenarios.
Grok isn’t the first LLM-based app to provide this sort of functionality: so-called “nudify” apps have existed for ages, even before AI tools made that functionality simpler and cheaper to apply, and there has been a wave of new entrants in this field since the dawn of the ChatGPT era a few years ago.
Grok is easily the biggest and most public example of this type of app, however, and despite the torrent of criticism and concern that rolled in following this feature’s deployment, Musk immediately came out in favor of said features, saying that his chatbot is edgier and better than others because it doesn’t have all the woke, pearl-clutching safeguards of other chatbots.
After several governments weighed in on the matter, however, Grok started responding to requests to do these sorts of image edits with a message saying: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
Which means users could still access these tools, but they would have to pay $8 per month and become a premium user in order to do so. That said, the AP was able to confirm that as of mid-January, free X users could still accomplish the same by using an Edit Image button that appears on all images posted to the site, instead of asking Grok directly.
When asked about this issue by the press, xAI has auto-responded with the message “Legacy Media Lies.” The company has previously said it will remove illegal content and permanently suspend users who post and ask for such content, but these efforts have apparently been neither fast nor complete, and more governments have said they plan to take action on the matter themselves since this tool became widespread.
Again, this sort of nonconsensual image manipulation has been a problem for a long, long time, made easier by the availability of digital tools like Photoshop, but not uncommon even before the personal computer and digital graphics revolution. These tools have made the production of such images a lot simpler and faster, though, and that’s put said tools in more hands, including those of teenagers, who have in worryingly large numbers taken to creating photorealistic naked and sexually explicit images of their mostly female classmates.
Allowing all X users, or even just the subset that pays for the service, to do the same at the click of a button, or by asking a chatbot to do it for them, has increased the volume of such images manyfold, and allowed even more people to create explicit images of neighbors, celebrities, and yes, even children. One early estimate indicates that over the course of just nine days, Grok created and posted 4.4 million images, at least 41% of which, about 1.8 million, were sexualized images of women. Another estimate, using a broader analysis, says that 65% of those images, or just over 3 million, contained sexualized imagery of men, women, and children.
CSAM is an acronym that means ‘child sexual abuse material,’ sometimes just called child porn, and the specific definition varies depending on where you are, but almost every legal jurisdiction frowns, or worse, on its production and distribution.
Multiple governments have announced that they’ll be taking legal action against the company since January of 2026, including Malaysia, Indonesia, the Philippines, Britain, France, India, Brazil, and the European Union’s central governing bodies.
The French investigation into xAI and Grok led to a raid on the company’s local office as part of a preliminary investigation into allegations that the company is knowingly spreading child sexual abuse materials and other illegal deepfake content. Musk has been summoned for questioning in that investigation.
Some of the governments looking into xAI over these issues conditionally lifted their bans in late-January, but the issue has percolated back into the news with the release of 16 emails between Musk and the notorious sex trafficker and pedophile Jeffrey Epstein, with Musk seemingly angling for an invite to one of Epstein’s island parties, which were often populated with underage girls who were offered as, let’s say, companions for attendees.
And this is all happening at a moment in which xAI, which already merged with the social network X, is itself slated to merge with another Musk-owned company, SpaceX, which is best known for its inexpensive rocket launches.
Musk says the merger is intended to allow for the creation of space-based data centers that can power AI systems like Grok, but many analysts see it as a means of pumping more money into an expensive, unprofitable portion of his portfolio. SpaceX, which is profitable, is likely to have an IPO this year, probably at a valuation of more than a trillion dollars. By folding the very unprofitable xAI into profitable SpaceX, these AI-related efforts could be funded well into the future, until a point when, possibly, many of today’s AI companies will have gone under, leaving just a few competitors for xAI’s Grok and associated offerings.
Show Notes
https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/
https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grok
https://www.ft.com/content/f5ed0160-7098-4e63-88e5-8b3f70499b02
https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abuse
https://apnews.com/article/france-x-investigation-seach-elon-musk-1116be84d84201011219086ecfd4e0bc
https://apnews.com/article/grok-x-musk-ai-nudification-abuse-2021bbdb508d080d46e3ae7b8f297d36
https://apnews.com/article/grok-elon-musk-deepfake-x-social-media-2bfa06805b323b1d7e5ea7bb01c9da77
https://www.nytimes.com/2026/02/07/technology/elon-musk-spacex-xai.html
https://www.bbc.com/news/articles/ce3ex92557jo
https://techcrunch.com/2026/02/01/indonesia-conditionally-lifts-ban-on-grok/
https://www.bbc.com/news/articles/cgr58dlnne5o
https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html
https://en.wikipedia.org/wiki/XAI_(company)
https://en.wikipedia.org/wiki/OpenAI
https://en.wikipedia.org/wiki/ChatGPT
https://en.wikipedia.org/wiki/Grok_(chatbot)
https://en.wikipedia.org/wiki/Grokipedia
https://www.cnbc.com/2025/02/10/musk-and-investors-offering-97point4-billion-for-control-of-openai-wsj.html