
The Indignity of a Computer Undressing You

Why Christians need to talk about Grok’s policies on AI-image generation.

Christianity Today, January 26, 2026
Image: Maxim Shevchenko / Pexels / Edits by CT

Last month, Elon Musk’s AI chatbot Grok granted user requests to digitally undress images of women and minors without their consent. Responding to global outrage, X initially said it would place this image-editing capability behind a paywall for subscribers only, but the company later amended its policy to add “technological measures” that would prevent this capability for all users. Last Wednesday, X added a geolocation block “in jurisdictions where such content is illegal,” blocking “the ability of all users in those locations to generate images of real people in bikinis, underwear, and similar attire.”

To understand these changes better, The Bulletin sat down with senior contributor Mike Cosper; editor at large Russell Moore; and Christine Emba, contributing writer for The New York Times, senior fellow at the American Enterprise Institute, and author of Rethinking Sex: A Provocation. Here are edited excerpts from their conversation in episode 243.

This news is so gross and off-putting. Why do Christians need to talk about this at all?

Russell Moore: Theology shows itself in real life rather directly here. Elon Musk has said he thinks it’s quite likely that all of reality is a simulation. If everybody is simply a simulation, then you treat them like machines. That’s what we have here: the treating of human beings as consumable material, as just so many pixels. Child sexual abuse material is being used here with the argument that it’s not really abusive if it’s not a real child, which is horrifying at every level. This reveals a very predatory view of kids. 

Second, you have the problem of deepfakes. Someone can post a picture of someone at their high school and say, “Picture her without clothes.” AI can do that very convincingly. Teenage girls are being bullied, humiliated, and intimidated by this. Congress is working on restricting the use of this now. There are lawmakers who are saying, “We need to work to outlaw it.” 

The thing about what X and Grok are doing right now that’s especially infuriating and gross is that they are coming in and saying they are going to geoblock this image generation only in areas where it’s illegal. It’s a Romans 2 sort of revelation that they can do this. They actually can control this, and they won’t, because it’s essential to what this entire movement is headed toward: deeply dehumanizing not just the people who are being victimized but also the users themselves.

Christine Emba: This is a disgusting development, but in some sense it was always the direction in which this technology was going to be taken and is meant to be taken. Unfortunately, the pornographic is very profitable for companies. We’ve seen OpenAI, now Grok, and other AI companies releasing “spicy” modes or allowing image generation, saying that they don’t want to treat their users like children. They allow or invite this sort of erotic role-play, and I think it’s disgusting.

This particular instance, the ability to nonconsensually shame women and minors, is horrifying. The fact that Elon Musk and his defenders think that this is okay, that we should be able to do this, that it’s not hurting anyone because they’re not real images—this betrays a level of callousness that is awful to me. It also reveals a severe misunderstanding of the human person, of how shame works, of how images persist in the mind, of how somebody can be harmed by the use of their most intimate self, even if it is a fake image. I am a woman who is a public figure and lives on the internet to some extent. The idea that something like that could be just created and shared around without your consent is really frightening to me. 

It’s unclear whether this will be blocked in the United States, but Congress did pass the Take It Down Act in 2025, which was explicitly written to prevent this sort of material from being shared. The legislation states that if somebody files a complaint with a company that takes part in the sharing or creation of this sort of material, that company is required to take down the material within 48 hours. This law is on the books, but obviously Grok has been creating these images, hundreds of thousands of them, over the past several weeks. For whatever reason, politicians in the US were too afraid of Elon Musk or too busy sitting on their hands to actually enforce the laws on the books.

There is a feeling in government and society right now of inevitability: Technology and AI are coming. Elon Musk and Sam Altman know what they’re doing. We just have to sit back and take it. There’s nothing to be done. These people are billionaires with so much power. They could primary somebody who challenges them in court or fund a campaign against a lawmaker who pressures them to change their product to make it more socially healthy. We’re just going to watch and let it happen and see how that plays out.

We’ve seen how this attitude of inevitability played out with social media and smartphones. It’s appalling that we’re going to sit back and just do it again with something perhaps even more dangerous and corrosive to minds. 

Mike Cosper: The Take It Down Act places the burden of moral and ethical responsibility on the victims of this kind of pornographic material rather than on the people who are posting it. You have to know it’s out there. You have to know where it is. You have to be able to report it, and the organizations that post and share this stuff are notoriously bad and slow at actually responding to those requests.

On December 31, the Grok X account posted an apology. It said,

I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM [child sexual abuse material]. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.

The use of the personal pronoun I is striking here. We talk about AI just being a tool. Is Silicon Valley trying to convince us it’s something more?

Moore: I’m not sure that it’s Silicon Valley trying to convince us. In many ways, it is more than what many people have expected. There are things going on that even the developers don’t yet know, and that’s part of the problem. This is such a new era. If you had said ten years ago that there was going to be a machine talking about regret in the first-person singular, it would’ve sounded like science fiction. But guess what? Here we are, in science fiction in a lot of ways.

It’s not just that parents, for instance, are trying to figure out how to deal with technology that’s way beyond what they know. Lawmakers are struggling too. When it comes to social media companies, we feel like we’re too late. With AI, we’re unsure because it’s too early and we don’t know where it’s going. That puts us in a really, really difficult place as a country.

Cosper: A common idea among Silicon Valley developers is “Move fast and break things.” Let’s move quickly and try things, and when they don’t work, we’ll iterate. We’ll solve the problems down the line. This is the world Musk comes from, so I tend to be skeptical of the notion that folks who are operating in that way are defined by any serious ethic when it comes to human dignity and respect for their neighbors. 

Emba: We tend to think of computers and programs as tools. But researchers have found that when you personalize a tool like a large language model, it stops feeling like consulting a dictionary: you begin to feel affection for it, to feel like you’re interacting with something real. It feels like a friend, and so you use it a lot more often. That’s why these companies chose to personalize these AI agents, why they talk to you as friends. This is also why we’re seeing people falling in love with their AI chatbots or being convinced in some cases by their AI that committing suicide is okay.

Companies know that this personalization leads to weird relationships and an inability to stop using [the chatbots]—all sorts of negative social contagions—but they’re not interested in the social good. They never have been. They just want more people using their product. The fact that legislators and individuals are not quite picking up on that yet is really alarming to me.

How do we talk to young people both about the dangers of these platforms and their AI-generated images and about humanity’s value in a way that makes sense to kids growing up in a digital world?

Emba: When we talk to younger adults about this, a realization is already emerging that maybe the online world and all these technologies have not been great for us. We’re seeing Gen Z and Gen Alpha pushing back a little bit on smartphones, on being online all the time, and noting the importance of the real world. I think that’s wonderful. We should continue to encourage this kind of thought, this idea that the real is what is out there in real life, not what you’re seeing on a screen.

We also need to talk more about the importance of personal creativity and the ability to use your own mind and imagination—to have your own thoughts that are not handed to you by a company that does not have your best interests at heart. I think kids can understand the importance of being able to think with their own brains and to develop that ability.

In a pluralistic society, we’re going to have to keep finding ways to talk about the dignity and worth of every human person. That humans are worth more than machines and should not be abused. That we have a responsibility to grow and support each other as humans made in the image of God. That we are supposed to be masters of technology, not let technology master us. That’s beginning to feel like a harder sell in an environment that suggests that if you aren’t good at using this, if you aren’t online, if you aren’t on this platform, you’re left out and you’re going to fall behind. We need to continue to talk about how the most real person is the person who lives in the world, in contact with others.

Moore: We’re really at a point where there is a genuine question of wisdom. What do we individually do? I don’t have the authority to bind anybody’s conscience on that except to say that is a question that we all ought to be asking. 

Emba: I’ve found that often when I am disgusted by a site and its behavior and I choose to leave it for some period of time, I’m pleasantly surprised at how much I don’t need to be there. My life is not impacted by not checking in on this website that’s designed to steal my attention several times a day. In fact, it’s better. If there are family members or friends I keep up with on these platforms, maybe I should call them. Maybe I should send them a note.

It’s very easy to say, I need to be here. I need to be doing this. Again, this feeling of I don’t want to be left behind. But we’re forcing ourselves into a collective-action problem by all committing to be in this place that we don’t want to be until someone else leaves!

If we are able to give ourselves the interior freedom to make these choices for ourselves, that could have a really important impact. It’s easy to say it’s just technology, but thinking carefully about what this product is asking for, who is behind it, and what their ideals are—and if we want to be the sort of person implementing those ideals in our lives—might also give us some hints as to what we should and should not be spending our time on.
