As a Marine Corps intelligence officer in my previous life, my job was to help senior leaders answer the difficult question of “What should I do?” I was to collect millions of pieces of data, organize them into information, put them into context, and through analysis give them meaning so we could have knowledge of the situation at hand. The senior officers would take that knowledge and then, using wisdom born out of experience, decide on a course of action. These decisions could mean life or death for the men and women under them, and as officers, we regularly felt the weight of that responsibility.
Today, as a father, husband, and CEO, many of the decisions I make still carry great weight. As humans, we’re always in life stages that contain risk: the development of our children, the relationships of our marriages, the livelihoods of fellow employees. We all make decisions daily—hopefully with wisdom. In the era of artificial intelligence, it only makes sense to appropriately incorporate this powerful tool into our decision-making processes.
The process I followed as an intelligence officer of moving from data to wisdom is not too different from how many of us make decisions in our everyday lives. In fact, the Data, Information, Knowledge, Wisdom (DIKW) framework established by systems thinker Russell Ackoff more than three decades ago captures the steps many of us unconsciously go through when moving from basic data to wisdom. Within this framework, data becomes information when it’s organized and put into context; it becomes knowledge when the information is analyzed and given meaning; and it becomes wisdom when we understand how to apply it.
If we’re being honest, we’re drowning in data and inundated with information, yet we’re always looking for knowledge and wisdom. Thankfully, AI tools can help with a lot of this process. Their algorithmic and computational capabilities are perfectly suited for collecting terabytes of data and organizing and contextualizing it into information for decision-making—typically in a matter of seconds rather than the hours or days it would take a mortal.
While AI provides informational knowledge, it’s also a slave to the data and information available to it. For the complex decisions of life—the ones involving real risk, often relational—we need more than information.
Knowledge can be informational, experiential, or (ideally) both. The Bible’s definition of knowledge expands beyond that of merely organized data and contextualized information. Instead, the Christian view of knowledge is deeply rooted in embodied people experiencing life in relationship with others—whether with God or with other humans (John 13:35; 2 Cor. 4:5–7).
Scripture speaks about how God wants us to know him—certainly through informational knowledge but ultimately in a personal relationship. Moses knew God. David knew God. In a related but distinct sense, Adam knew Eve. The knowledge God wants us to have is intimate and experiential, not merely informational.
AI can help us with so much in life, but like any created thing, it also has its limits. Yet rather than confining AI tools to generating data and information, many are attempting to leverage such systems to generate experiential knowledge and even wisdom.
A recent essay in Harvard Business Review highlighted the top use for AI as “therapy/companionship.” Acknowledging the differences between therapy and companionship, author Marc Zao-Sanders grouped them into one category because both “fulfill a fundamental human need for emotional connection and support.” Human connection and being known by others is a core part of what it means to be human. While the article speaks to the advantages of applying AI in this way, the arguments are made on the grounds of efficiency: It’s available 24-7, it’s inexpensive or free, and it does not “judge.”
But efficiency is not effectiveness when it comes to relationships. AI cannot provide companionship for the same reasons that it is limited in providing experiential knowledge or wisdom: It is not an embodied person with perspectives and experiences from which to empathize and come alongside our lives. And therein lies the challenge. AI cannot know what it means to be human because AI isn’t human. And in the midst of Western culture’s loneliness epidemic, mimicking relationships with AI has already had tragic consequences.
Attempts to apply AI in this way are akin to going back into Plato’s cave in search of the shadows of self-gratifying relationships, rather than enjoying the essence of relationships. In so doing, we are privileging efficiency over effectiveness, the artificial over the real.
While AI is well suited for complicated problems, humans are complex; what we need is experiential knowledge and God-given wisdom. We could attempt to push AI beyond its limits, past processing complicated problems and providing informational knowledge and into the realms of experiential knowledge and companionship, but it would be unwise to do so.
Vineet Rajan is the CEO of Forte, the fastest-growing platform for mental fitness. He has graduate degrees from Stanford and Cambridge, is a Marine combat veteran, and is an adviser for AI and Faith.