Like ancient Israel in the time of the judges, the artificial intelligence industry has no “king”—no regulatory body, no US federal laws, only piecemeal state-level oversight and patchy ethical principles that are largely ignored. Companies such as OpenAI suggest regulation is needed and then balk when actual regulations are proposed. Meanwhile, harms well-documented by tech companies themselves (including misinformation and manipulation, data bias, and damage to the environment) continue largely unabated.
While the AI landscape lacks the bloody tent-peg violence of the Book of Judges, it is marked by similar confusion, conflict, and lack of consensus (4:21; 21:25). AI companies and users are doing what is right in their own eyes. Engineers, lawmakers, lobbyists, critics, and consumers operate as if they were living in one house but in separate rooms with the doors shut.
Narratives promoting hype and fear clamor for our attention: One article describes AI as progress; the next sees it as devolution. Different parties paint the same proposed law as necessary or destructive. For instance, California lawmakers overwhelmingly voted for regulations that would have made tech companies legally responsible for rare, catastrophic harms caused by their large AI models and also required safety tests. OpenAI said the bill could slow innovation in Silicon Valley. Prominent voices in AI, including a top AI scientist at Meta, also opposed the bill, which was vetoed by Gov. Gavin Newsom.
While the current technology behind AI might be new, the overarching story is not. For centuries, humans have done what is right in their own eyes, just like in the time of the judges. Yet it isn’t all bad news. Even the darkness of Judges had faithful leaders like Deborah and Gideon, who sought to follow God. So too today, there are voices advocating for the good of humanity across the AI landscape.
How then should Christians think about AI? How do we respond to AI innovations in a time of no king, no consensus, and no clear path forward? First, we must avoid giving in to hype or fear in our response. Instead, we must look to what God continually called the Israelites to do; we must love the Lord and love our neighbors. Because we love God, we must put others before ourselves when we think about personal, collective, and societal uses of AI.
The sudden and largely unexplained emergence of AI into popular culture in 2023 produced an immediate knowledge vacuum. The technology appeared before anyone outside technical spheres had a chance to consider what it might mean. Even the creators of the software didn’t know what they had on their hands. Charlie Warzel at The Atlantic noted that “OpenAI didn’t expect ChatGPT to amount to much more than a passing curiosity among AI obsessives on Twitter.” As such, OpenAI did not give much of an explanation for ChatGPT’s purpose or exactly what it could do. If AI’s creators were not prepared for the AI boom, it’s safe to say no one was.
This vacuum of meaning was filled by a cornucopia of competing narratives that created our current moral, legal, and ethical chaos. Viewpoints varied from transhuman (“Humans will merge with AI”) and utopian (“AI will remove all work and usher in an era of leisure”) to dystopian (“Superintelligent AI could kill everyone”) and fearful (“AI will take your job”). They were hype-driven (“AI will supercharge the economy”), critical (“AI could increase inequality”), or dismissive (“AI is a fad”).
One of the most prominent narratives attached to AI is technological inevitability: the idea that because something can be done, it will be done. This theory has been used to make AI seem impossible to stop while waving away challenges such as workplace implementation, environmental concerns, potential job loss, data use, and copyright infringement. The idea of technological inevitability is often pitched in geopolitical contexts. In 2023, for example, the company Scale AI warned Congress that the US may fall behind China in AI development, “casting the AI race as a patriotic battle between ‘democratic values’ and an authoritarian regime.” More explicitly, prominent AI developer Jürgen Schmidhuber said, “You cannot stop it. Surely not on an international level,” because, he argued, different countries will have different goals for the technology. “So, of course, they are not going to participate in some sort of moratorium.”
Yet technology is not inevitable; we don’t have a mandate to pursue AI or any other tech advancement. This counter-narrative to technological inevitability challenges tech makers to justify their creations. Yet there is no overarching regulator or law requiring the justification of technologies despite their harms. Instead, each company does what is right in its own eyes.
Users, too, are doing what is right in their own eyes. AI has formative effects that, at their worst, can alienate us from our God-given activities, especially at work. For instance, excessive professional use of AI assistants may estrange us from the work God has called us to do, removing a key tool that God uses to shape and sanctify us into Christlikeness.
The idea that AI is inevitable may lead people to avoid questioning assumptions about AI products and their potential harms to customers, clients, and coworkers. People may focus on AI’s promises of workplace efficiency, but Christ followers must take into account inefficient but timeless Christian priorities such as mercy, goodness, patience, and perseverance when making decisions about AI.
When we consider AI’s impact on the workforce, we should think beyond our own jobs and consider our neighbors: Will AI’s text-generating capacities replace white-collar jobs? In some cases, concerns about losing a job to a bot are well-founded: 1.7 million factory workers throughout the world have been replaced by robots over the past 25 years. Large-scale white-collar replacement has not happened yet, but the pace of AI growth can cause anxiety among white-collar employees similar to what US blue-collar manufacturing workers have faced for years. That shared experience can help the two groups empathize with each other.
Our individual choices to use or reject AI tools contribute to large-scale trends such as these. Considering the effects on our neighbors might make this not a simple yes-or-no choice but a multifaceted decision requiring prayer, nuance, and careful discernment.
I have had to grapple with this balance of benefits and harms in my own AI use and applications. I work at Arizona State University in the technical communication program. My research lab created the Arizona Water Chatbot, which delivers information about water management and conservation to Arizonans.
With multiple desert regions, Arizona presents a complex and difficult water-management situation that includes declining amounts of water in the Colorado River and an ongoing megadrought. Information about these issues can be technical and difficult to understand, as each city has a different combination of sources supplying its water. Federal laws and local rules governing the water supply are continually renegotiated, and new forms of water reuse and recycling can also be complex. To further complicate matters, narratives about Arizona running out of water frequently collide with mandates for most urban areas to affirm that they have at least 100 years of water supply available.
While creating an AI bot to explain these issues may sound like an easy “yes” to promote human flourishing and good stewardship of creation, the reality is more nuanced. AI technology requires massive amounts of water to cool its data centers—up to 1.25 million gallons a day for just one huge center in Arizona. (The water bot has used approximately 35 gallons of water so far to answer the public’s questions.)
My team and I had to evaluate whether it was worth using water to share valuable information about conserving water. We believe the value of a bot that generates easily understandable information for the public on a crucial issue is worth the environmental cost our computing requires. But some of our neighbors believe Arizona’s water should not be used for AI at all. We had to weigh the potential benefits and threats to our neighbors and the environment carefully. Ultimately, we chose to move forward with the chatbot.
It was not an easy choice, because the environmental effects of AI use are significant and I believe God created the earth and calls us to take care of it (Gen. 2:15; Lev. 25:2–7). Even beyond water use, the energy costs are steep. An MIT article cited estimates from the International Energy Agency that electricity demand from data centers could double before the end of the decade. While “data centers account for 1% to 2% of overall global energy demand, similar to what experts estimate for the airline industry,” it added, “that figure is poised to skyrocket… potentially hitting 21% by 2030.”
The amount of energy needed to run these centers can stress power grids and affect energy availability for residents. With no regulation to guide the industry, those energy demands may well grow into a significant challenge. (Some AI companies are controversially trying to build their own green or nuclear energy plants.)
Yet as we weighed the environmental costs of our water chatbot project, we also considered the lived experience of those around us. What do our neighbors need? Is it worth spending short-term ecological resources to help people understand their changing environment? I think so.
The environment is not the only concern. When we all do what is right in our own eyes without looking out for others, it becomes easy to exploit our neighbors’ data.
AI companies often train their tools on mountains of data collected without permission. In fact, author Robin Sloan argued that “the only trove known to produce noteworthy capabilities is: the entire internet, or close enough. The whole browsable commons of human writing.” This type of data collection lives in an ethical gray area where there are few rules. People in the United States are allowed to automatically collect, or “scrape,” words and images published to the internet for educational and noncommercial uses.
However, authors, comedians, artists, and musicians have sued AI companies such as OpenAI and Meta, claiming that the companies violate copyright laws when their generative AI systems use scraped data to produce revenue. “The success and profitability of OpenAI are predicated on mass copyright infringement without a word of permission from or a nickel of compensation to copyright owners,” reads one lawsuit that includes author John Grisham as a plaintiff. AI companies have countered that their use is allowed under a copyright provision called “fair use,” among other arguments. There is as yet no clear legal guidance here. In the meantime, AI companies appear to be doing what is right in their own eyes.
Even if legal questions are answered in court, ethical questions remain. Loving our neighbors—and respecting our neighbors’ data—is a serious challenge for designers of AI tools when the livelihoods of millions of artists and creatives are involved. Seeking permission from content creators or paying them to use their work may be the most loving way forward but may also prove to be impossible at scale. If so, the right thing to do would be to stop using AI in its current form. AI companies would have to seek more ethically sourced ways forward or stop developing AI altogether.
For now, each person and company must weigh the potential harm AI could do to their neighbors against the potential good it could do for them. That’s what my team and I are trying to do with the chatbot. As someone who creates AI tools for the public good, I’m not making money off them. My team is not trying to build tools that take work away from people, nor are we doing this work for our own glory. We are trying to help people. Yet we are still using tools from companies that take data without permission.
Ultimately, I hold this work with open hands. If current AI data sourcing practices are deemed illegal, I will discontinue the bot and seek other ways to help people with technology.
Because there are few rules or norms for AI, some will think that an AI water bot is inherently unethical and should be shut down (or should never have been created). Others will think that shutting down a project seeking the public good over data concerns is an unnecessary overstep. Even when Christians disagree, we should do so in love, without seeking to win the argument for our own self-justification and without making a villain of the person with whom we disagree.
Followers of Christ must think about how their use of AI affects others. As we do this, we can lead the way in loving our neighbors, whether we’re building AI technology in Silicon Valley or choosing to not use AI to write a report at work. AI may not have a king, but we do.
Stephen Carradini is an associate professor of technical communication at Arizona State University. His work focuses on ethical and effective implementation of emerging technologies.