Grok's bikini scandal shows the true danger of AI
The ability to manipulate reality through these sorts of deepfakes poses the greatest threat to our lives today.
This column was first published by The Washington Examiner.
Since taking over the platform formerly known as Twitter, Elon Musk has turned X into a fly-by-the-seat-of-your-pants social media platform of absolute free speech, for better and for worse. Yes, it’s fantastic that there now exists a platform on which conservative opinions are treated with more respect than your average puppy-abusing leper, but one of many downsides—fueled by Musk’s decision to monetize engagement—has been the explosion of, to put it politely, garbage. Scrolling on X these days feels more like a fever dream dominated by copyright infringements, right-wing echo chambers, and yes, pornography.
On the subject of pornography, the latest scandal surrounding the X-aligned artificial intelligence service, Grok, not only cements the fact that it remains the trashiest of all AIs out there, but also demonstrates the true threat of AI.
This particular saga centers on the ability of Grok users to generate sexualized and naked pictures of real people—including children—which has sparked condemnation from anyone with even the tiniest shred of morality. Numerous governments—albeit partly motivated by their preexisting hatred of Elon Musk, of course—have used this explosion of sexually explicit content as justification to block the AI service, adding to a level of blowback that drove X to announce that Grok would be prevented from generating these images, at least in certain locations.
“We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content,” the company announced in a statement, announcing “technological measures” that will prevent “the editing of images of real people in revealing clothing such as bikinis,” with an additional “geoblock in jurisdictions where such content is illegal.”
Except, according to The Guardian, Grok continues to allow users to generate and post “highly sexualised videos of women in bikinis.”
There are two important points to note here. First, the fact that this wasn’t done in the first place is yet another indictment of the fast-and-loose attitude of Elon Musk and his leadership of X. Other AI programs already limit—to varying degrees—the creation of such content, and unless you’ve never interacted with a teenage boy in your life, this was a pretty obvious potential use case that would have been fairly easy to prevent proactively rather than reactively.
Second, and far more importantly, this scandal is itself a small window into the true danger of artificial intelligence. While pseudo-intellectuals wring their hands over the replacement of jobs, the destruction of artistic creation, or the descent of society into an apocalyptic combination of Terminator and WALL-E, it’s the ability to manipulate reality through these sorts of deepfakes that poses the greatest threat to our lives today.
Why? Well, for one, because photographic and video evidence remains—for now—a trusted tool used to judge guilt or innocence. Thanks to childishly unleashed platforms like Grok, we now have the ability not just to embarrass or humiliate or degrade, but to destroy. It’s disgusting enough to generate naked images of a real-life person, and even more disgusting to do so with an image of a child. But imagine the next step: what happens when AI can be used to generate incriminating images and videos? What happens when someone is jailed for a crime they never committed, convicted on evidence that would not exist were it not for AI?
This is the danger of artificial intelligence that is being masked by the absurd and infantile nature of our online culture. And if we’re not willing to face that threat, can we at least agree that the removal of clothing from children—the most vulnerable among us, whom MAGA promised to protect above all else—is not a matter of free speech?
