Recently, Google hit the pause button on a military artificial intelligence project amid thorny ethical questions raised by its own employees. In new drone and surveillance systems, human knowledge and action are increasingly augmented and may soon be sidelined altogether. Should we shirk our responsibility and pass authority to these machines? For Christians, the complex conversation about how AI should be developed as a weapon centers on a biblical understanding of human dignity and responsibility.

For Google employees, the protest began in April 2018 over the company’s involvement in an AI-based image recognition program for the Department of Defense. Protesting employees argued that Google should not be in the business of war, pointing to the company’s historic motto, “Don’t be evil.”

The program, referred to simply as “Project Maven,” is designed to identify enemy targets on the battlefield. The research would improve an AI system that processes the massive amount of video data captured every day by US military drones and reports potential targets for future military engagement back to military and civilian analysts. The New York Times reported that the Pentagon has spent billions of dollars in recent years to develop such systems, often partnering with leading technology firms.

Yet thousands of Google’s employees, including many senior engineers, signed a letter to CEO Sundar Pichai protesting the firm’s involvement in Project Maven. In June, Google announced that it would not renew the government contract for Project Maven. Employees rejoiced at the decision, but Amazon founder and CEO Jeff Bezos and others criticized the move, arguing that dropping the research will increase the likelihood of war and of civilian deaths.

Even with more precise computer-aided targeting, the Pentagon said earlier this year that “no one will ever know” how many innocent lives have been lost during the fight against the Islamic State in Iraq and Syria. These tools are not 100 percent accurate and do fail. But even as estimates of the number of innocent lives lost vary, many argue that casualties are lower than they would be without drones: fewer ground troops are needed, and the systems are more accurate than earlier targeting systems driven in large part by fallible humans.

But in July, 2,400 AI researchers signed a pledge refusing to participate in the development of fully autonomous weapon systems because of the massive moral and ethical implications of the technology.


Many people, including Christians, will understandably disagree on the best means and tools to be used in warfare. However, real human lives are at stake. So how might we, as a society, navigate these issues and apply wisdom to the use of artificial intelligence on the battlefield?

The dignity of all people

Christians believe that all people are created in the image of God and that our dignity rests solely on that reality (Gen. 1:26–27). This dignity extends even to our enemies, because they too are created in the image of God. Through our sin and rebellion against God, we ushered in an age of destruction, death, sickness, and brokenness, turning the natural order of things upside down. God’s creatures also turned on one another, committing murder and devising war.

Google’s employees protested their company’s involvement in Project Maven because many did not want to be party to the killing of other human beings. Soon after Google pulled out of the project, the company released a set of AI principles to guide its artificial intelligence work. The principles commit Google not to design or deploy AI in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Even though the company does not appeal to the Christian understanding of human dignity, its principles reflect the God-given dignity and worth with which all people are created. This is common grace.

While the Scriptures teach that the killing of other humans is not the way God designed our world to work, we are called to seek justice for wrongs committed, because taking the life of another image-bearer is such a serious matter. We do not seek to engage in war or bloodshed, but we are to seek justice for the oppressed and downtrodden (Isa. 1:17).

This approach to just war was first promoted by Augustine of Hippo, based on his understanding of Romans 13:4. Just war theory holds that the only just cause for war is the protection of peace and the punishment of the wicked.

So artificial intelligence can be used in ways that magnify our dignity, but it can also be used in ways that diminish it in the name of efficiency, profit, and even military victory. AI is neither good nor bad in itself; it is a God-given tool to be used with wisdom.

In line with just war theory, AI can strengthen the targeting systems of long-range missiles and drones, making them more accurate and helping prevent the accidental killing of the innocent.

Human in the loop

But just war theory also prohibits the use of “indiscriminate force” in war, meaning that weapons that cannot be controlled or contained should not be used. A nuclear bomb is an example of this type of weapon: it cannot precisely target enemies, and its blast affects everyone present, including innocent men, women, and children.

We pursue justice, which can include war, because our God is a just God. But the human role in the use of these AI weapons raises valid questions about who is responsible for them and who controls them.

The role that humans play in decision making for these weapon systems was formalized by military strategist and United States Air Force Colonel John Boyd in what he called the OODA loop. The loop has four stages: (1) observe, (2) orient, (3) decide, and (4) act. As the identification and engagement of potential targets becomes increasingly automated and augmented, humans will naturally become less and less involved in the decision-making loop because of the speed at which these life-altering decisions must be made.

At least for the time being, US military engagements will keep a human in the decision-making loop, per a directive from the Department of Defense in 2017 that aims to “minimize the probability and consequences of failures that could lead to unintended engagements or to loss of control of the system.” But that policy may not last if other countries and groups begin fielding more autonomous weapons, because we may not be able to defend against them adequately given the speed and accuracy that AI systems will achieve.
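For readers who want to see the distinction concretely, the sketch below illustrates an OODA-style loop in which the “decide” stage routes through a human analyst rather than firing automatically. It is a minimal, purely illustrative sketch: every function and name in it is hypothetical, invented for this article, and none of it is drawn from any real military system or directive.

```python
from dataclasses import dataclass
import random

# Purely illustrative sketch of an OODA-style loop with a human decision
# gate. Every name below is hypothetical, invented for this article; it
# does not reflect any real military system, directive, or API.

@dataclass
class Detection:
    label: str
    confidence: float

def observe() -> list[Detection]:
    """Observe: simulate raw detections arriving from a sensor feed."""
    return [Detection(f"object-{i}", random.random()) for i in range(3)]

def orient(detections: list[Detection]) -> list[Detection]:
    """Orient: keep only high-confidence candidates for human review."""
    return [d for d in detections if d.confidence > 0.9]

def human_decide(candidates: list[Detection]) -> list[Detection]:
    """Decide: a person, not the model, authorizes each action."""
    approved = []
    for c in candidates:
        answer = input(f"Analyst review of {c.label} "
                       f"(confidence {c.confidence:.2f}). Approve? [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(c)
    return approved

def act(approved: list[Detection]) -> None:
    """Act: only human-approved decisions move downstream."""
    for c in approved:
        print(f"Action authorized by a human analyst for {c.label}")

if __name__ == "__main__":
    # Observe -> orient -> (human) decide -> act.
    act(human_decide(orient(observe())))
```

The design point is the human_decide function: a fully autonomous system would replace it with the model’s own output, removing the person, and with the person the locus of moral responsibility, from the loop.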

When we shirk our responsibility to enact justice for wrongs committed and pass it to these automated systems, we reveal our true motivations: less concern for the human beings affected by our decisions and more desire for a quick end to combat.

Our brother’s keeper

Humans are uniquely positioned by God to be morally responsible for those facing injustice. But we also have the grave responsibility to care for and love even our enemies.

A major and often overlooked danger of AI-empowered weapons is their tendency to dehumanize our enemies. Soldiers may never come close to their targets in combat, which can lead them to stop thinking of these men and women as humans with families, livelihoods, hopes, and dreams. Our enemies become mere pieces of data on the virtual battlefield, blips on a screen rather than flesh-and-blood human beings. AI can desensitize us to the realities of war, in which real human lives are lost.


So how should we think about the use of automated weapon systems in today’s military? Dangerous and potentially dehumanizing as they are, these AI tools should be developed if we have any hope of defending the oppressed. We must seek to use them for good and not evil, to protect the innocent, and to fight for what is righteous. We should champion the dignity and worth of all people, including our enemies. But we must do so with eyes wide open, recognizing the tools’ limitations and the responsibility we hold for how justice is enacted.

Jason Thacker serves as the creative director and associate research fellow at The Ethics and Religious Liberty Commission, the public policy arm of the Southern Baptist Convention. He is the author of a forthcoming book with Zondervan on artificial intelligence and human dignity. He is married to Dorie and they have two sons.