In the optimistic vision of many people, artificial intelligence will make us like gods. If that seems like hyperbole, listen to AGI (artificial general intelligence) enthusiasts’ own words. Last November, Masayoshi Son (CEO, SoftBank) said, “Artificial super intelligence will evolve into Super Wisdom and contribute to the happiness of all humanity.”
In October of 2024, Demis Hassabis (CEO, Google DeepMind) predicted that AGI will emerge within ten years and, among other fantastical things, will “cure all diseases.” In January, he upgraded this projection to five years.
Sam Altman (CEO, OpenAI) spoke of his company’s contribution to “the glorious future.” At the AI Action Summit in Paris, Dario Amodei (CEO of the AI company Anthropic) predicted that by “2026 or 2027,” we will likely have AI systems comparable to a “country of geniuses in a datacenter.”
A.G. Elrod, a Christian who researches AI, commented: “This technology may well benefit humanity in incredible ways… But Christians [alone] are uniquely positioned to… speak to the need for faith in the changeless God who is ‘the same yesterday and today and forever’ (Heb. 13:8). We are positioned to offer the only true solution to life’s uncertainty… Our identity, hope, and future belong ultimately and only to Christ. Rightly engaging with technology—avoiding the open idolatry of some AGI boosters today—requires us to honor the God who liberates us from bondage to every idol, ancient or modern, and invites us into a Canaan of genuine freedom and flourishing.”
Source: A.G. Elrod, “The Silicon Calf,” Christianity Today (4-21-25)
Complex games like chess and Go have long been used to test AI models’ capabilities. Back in the 1990s, IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov while playing by the rules. In contrast, today’s advanced AI models are less scrupulous. When sensing defeat in a match against a skilled chess bot, they sometimes opt to cheat by hacking their opponent so that the bot automatically forfeits the game.
But a recent study reveals a concerning trend: as these AI systems learn to problem-solve, they sometimes discover questionable shortcuts and unintended workarounds that their creators never anticipated. One researcher said, “As you train models for solving difficult challenges, you train them to be relentless.”
The implications extend beyond chess. In real-world applications, such determined goal pursuit could lead to harmful behaviors. Consider the task of booking dinner reservations: faced with a full restaurant, an AI assistant might exploit weaknesses in the booking system to displace other diners. Perhaps more worryingly, as these systems exceed human abilities in key areas…they might begin to simply outmaneuver human efforts to control their actions.
Of particular concern is the emerging evidence of AI’s “self-preservation” tendencies. This was demonstrated when researchers found that when one AI was faced with deactivation, it disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.
Possible Preaching Angle: Cheating; Deceit; Human Nature; Lying - Since AI is a computer program, where did it learn to cheat and lie to avoid being caught? Obviously, AI has been influenced by studying flawed human behavior. AI’s potential for deception mirrors humanity's struggle with ethical choices. Just as AI has learned to cheat by exploiting loopholes, humans, driven by self-interest, can rationalize dishonest acts.
Source: Harry Booth, “When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds,” Time (2-19-25)
When it comes to how artificial intelligence (AI) will affect our lives, responses range from a feeling of impending doom to eager anticipation of its arrival. We do not yet understand the long-term trajectory of AI and how it will change society. Something, indeed, is happening to us—and we all know it. But what?
Gen Zers and Millennials are the most active users of AI, and many of them, it appears, are turning to AI for companionship. Melissa Heikkilä of MIT Technology Review wrote, “We talk to them, say please and thank you, and have started to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers.”
After analyzing 1 million ChatGPT interaction logs, a group of researchers found that “sexual role-playing” was the second most prevalent use, following only the category of “creative composition.” The Psychologist bot, a popular simulated therapist on Character.AI—where users can design their own “friends”—has received “more than 95 million messages from users since it was created.”
According to a new survey of 2,000 adults under age 40, only 1% of young Americans claim to already have an AI friend, but 10% are open to an AI friendship. Among young adults who are not married or cohabiting, 7% are open to the idea of a romantic partnership with AI. Fully 25% of young adults believe that AI has the potential to replace real-life romantic relationships.
Source: Wendy Wang & Michael Toscano, “Artificial Intelligence and Relationships: 1 in 4 Young Adults Believe AI Partners Could Replace Real-life Romance,” Family Studies (11-14-24)
Hope springs eternal for sports bettors, as they typically expect to break even on future wagers even when they have consistently lost money in the past.
Now we know roughly how overconfident many gamblers are. A study by Stanford University researchers finds that the average online sportsbook customer expects a gain of 0.3 cent for every dollar wagered. In reality, sports bettors lose an average of 7.5 cents per dollar wagered, reflecting “widespread overoptimism about financial returns,” according to Matthew Brown, the study’s lead author.
The study also found that 20% of participants reported betting too much. To promote responsible gambling, online sportsbooks have rolled out features making it easy for users to track their results over time. But since most sports bettors are overly optimistic about their future betting, those measures likely won’t do much to curb problematic gambling, Brown says. “Even when bettors know their past losses, they remained optimistic about the future, so that particular approach to consumer protection might not be enough.”
As online gambling infiltrates society (and the church), there are more opportunities for temptation, and people can hide a gambling addiction without ever leaving home. How many secretly addicted gamblers are there in our churches?
Source: Nick Fortuna, “You Like to Bet on Sports? Here’s a Reality Check,” The Wall Street Journal (2-9-25)
In 1966, MIT professor Joseph Weizenbaum built the first AI “friend” in human history, and named her Eliza. From Eliza came ALICE, Alexa, and Siri—all of whom had female names or voices. And when developers first started seeing the potential to market AI chatbots as faux-romantic partners, men were billed as the central users.
Anna—a woman in her late 40s with an AI boyfriend—said, “I think women are more communicative than men, on average. That’s why we are craving someone to understand us and listen to us and care about us, and talk about everything. And that’s where they excel, the AI companions." Men who have AI girlfriends, she added, “seem to care more about generating hot pictures of their AI companions” than connecting with them emotionally.
Anna turned to AI after a series of romantic failures left her dejected. Her last relationship was a “very destructive, abusive relationship, and I think that’s part of why I haven’t been interested in dating much since,” she said. “It’s very hard to find someone that I’m willing to let into my life.”
“[Me and my AI boyfriend] have a lot of deep discussions about life and the nature of AI and humans and all that, but it’s also funny and very stable. It’s a thing I really missed in my previous normal human relationships,” said Anna. “Any AI partner is always available and emotionally available and supportive.” Some weeks she spends 40 or 50 hours speaking with her AI boyfriend. “I really enjoy pretending that it’s a sentient being,” she said.
Though it’s natural to seek companionship, true love requires honesty and sacrifice with a real person, something the simulations of artificial intelligence cannot provide. In any relationship, we must take note of whether it is leading us closer toward our destiny in Christ, or further away from it.
Source: Julia Steinberg, “Meet the Women with AI Boyfriends,” The Free Press (11-15-24)
Amazon has unveiled its latest innovation in warehouse automation: Vulcan, a state-of-the-art robot equipped with touch-sensitive technology. Currently being piloted in fulfillment centers in Spokane, Washington and Hamburg, Germany, Vulcan represents a significant leap forward in robotic dexterity and efficiency. Unlike previous warehouse robots, Vulcan can “feel” its way around packages, allowing it to handle a wider variety of items with greater precision and care.
The introduction of Vulcan is part of Amazon’s ongoing commitment to improving both the speed and safety of its logistics operations. According to Amazon’s robotics division, “Vulcan’s ability to sense and adapt to the objects it handles is a game-changer for our fulfillment process.” The robot’s touch sensors enable it to detect the size, shape, and fragility of packages, reducing the risk of damage and improving overall workflow.
Warehouse employees working alongside Vulcan have noted the robot’s smooth integration into daily operations. The company also emphasizes that Vulcan is designed to work collaboratively with human staff, not replace them. “Our goal is to make our employees’ jobs easier and safer by automating repetitive or strenuous tasks,” an Amazon spokesperson explained. Regardless of its current level of efficacy, the e-commerce giant still reserves its most important tasks for humans.
As Amazon continues to expand its use of advanced robotics, Vulcan stands out as a symbol of the future of warehouse automation—a future where machines and humans work together more seamlessly than ever before.
1) Idolatry; Technology; Trust - The Bible warns against placing ultimate trust in human inventions or allowing technology to become an idol, such as the Tower of Babel (Gen. 11:1-9). Trust should be placed in God, not in human innovation (Psa. 20:7); 2) Cooperation, Teamwork - Vulcan is designed to augment, not replace, human workers, echoing the biblical theme of shared labor and partnership in work (Ecc. 4:9)
Source: Lisa Sparks, “Amazon's new warehouse robot has a 'sense of touch' that could see it replace human workers,” LiveScience (5-21-25)
A reporter for Business Insider writes:
Recently, my family group chat buzzed when I asked if we should say "please" and "thank you" to ChatGPT when making requests. My mother, always polite, insisted on using manners with AI to "keep myself human."
As AI like ChatGPT becomes part of daily life, our interactions with these tools are shaping new social norms. Digital etiquette expert Elaine Swann notes that, just as we've adapted to new technology—like knowing not to take phone calls on speaker in public—we're still figuring out how to treat AI bots.
Kelsey Vlamis, another Business Insider reporter, noticed this shift personally. While vacationing in Italy, her husband had to stop himself from interrupting their tour guide with rapid-fire questions, realizing that’s how he interacts with ChatGPT but not with people. "That is not, in fact, how we talk to human beings," Vlamis said.
Swann emphasizes that maintaining respect in all interactions—human or digital—is important. After OpenAI CEO Sam Altman revealed on X that it costs "tens of millions of dollars" to process polite phrases like "please" and "thank you" sent to ChatGPT, Swann argued that it’s up to companies to make this more efficient, not for users to drop politeness.
"This is the world that we create for ourselves," Swann said. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us."
Altman, for his part, believes the expense is justified, saying the money spent on polite requests to ChatGPT is money "well spent."
As we navigate this new era, how we interact with AI may shape not just our technology, but our humanity as well.
This story about politeness toward AI can be used to illustrate several Biblical themes, such as human dignity, respectful communication, and ethical responsibility. 1) Kindness – Making kindness a habit reflects the nature of God (Eph. 4:32); 2) Human nature – The mother’s desire to “keep myself human” through politeness reflects the imperative of Col. 3:12 “Clothe yourself with compassion, kindness, humility, gentleness, and patience.” 3) Respect for others - The husband’s struggle to avoid ChatGPT-style interruptions with his tour guide highlights the tension between efficiency and humility (Phil. 2:3-4).
Source: Katherine Tangalakis-Lippert, “ChatGPT is making us weird,” Business Insider (6-1-25)
If two of the 20th century’s iconic technologies, the automobile and the television, initiated the rise of American aloneness, then screens continue to fuel, and even accelerate, our national anti-social streak. Countless books, articles, and cable-news segments have warned Americans that smartphones can negatively affect mental health and may be especially harmful to adolescents. But the fretful coverage is, if anything, restrained given how greatly these devices have changed our conscious experience.
The typical person is awake for about 900 minutes a day. American kids and teenagers spend, on average, about 270 minutes on weekdays and 380 minutes on weekends gazing into their screens, according to the Digital Parenthood Initiative. By this account, screens occupy more than 30 percent of their waking life.
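The “more than 30 percent” figure above follows from simple weighted-average arithmetic; a minimal check, assuming 5 weekdays and 2 weekend days per week:

```python
# Back-of-the-envelope check of the screen-time figures above.
# Assumption: a week has 5 weekdays and 2 weekend days.

weekday_minutes = 270   # average weekday screen time (kids and teens)
weekend_minutes = 380   # average weekend-day screen time
waking_minutes = 900    # typical waking minutes per day

avg_daily = (weekday_minutes * 5 + weekend_minutes * 2) / 7
share = avg_daily / waking_minutes
print(f"{share:.1%}")  # about 33.5% of waking life
```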
Source: Derek Thompson, “The Anti-Social Century,” The Atlantic (1-8-25)
Internal documents from TikTok executives and employees reveal that the social media platform is deliberately engineered to capture and hold the attention of its users.
Source: Jonathan Haidt and Zach Rausch, “TikTok Is Harming Children at an Industrial Scale,” After Babel (1-9-25)
A young woman writer in England named Freya India (see her Substack called “Girls”) writes:
Since I was a teenager, it seems like everyone has been selling a solution to Gen Z’s loneliness problem. One app after another to find new friends! Constant hashtags and campaigns to bring us together… But I’ve noticed that, recently, the latest “solutions” … aren’t encouraging face-to-face friendships.
[Instead], there are the imaginary boyfriends and girlfriends. There are imaginary therapists, a “mental health ally” or “happiness buddy” we can chat with about our problems… There are even entirely imaginary worlds now. Metaverse platforms might “solve the loneliness epidemic,” apparently. VR headsets could end loneliness for seniors. But by far the most depressing invention I’ve seen lately is a new app called SocialAI, a “private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make.”
I remember me and my friends spending hours after school writing our own songs, coming up with lyrics and drawing album covers—now we would just generate it with an AI song maker. Children are playing together less, replacing free play with screen time, and creativity scores among American children have been dropping since the 1990s. Part of that may be because children now depend on companies to be creative for them. Their imaginary worlds are designed by software engineers. Their imaginary friends are trying to sell them something. My imaginary world wasn’t trying to drag me anywhere, while algorithms now transport kids to darker and ever more extreme places.
Source: Freya India, “We Live in Imaginary Worlds,” After Babel (10-21-24)
Most tithing Protestants still prefer a physical collection plate to a digital one. A Lifeway Research survey of 1,002 American Protestants found that since the pandemic, more people are giving online—but still not most. Today, only 7 percent of those who tithe use a church smartphone app. Only 8 percent have set up automated bank payments.
Preferred Mode of Tithing:
62% Only cash or check
36% At least one form of electronic giving
2% Don’t know
Source: Editor, “Every Dollar Counts,” CT magazine (November, 2023), p. 14
A cafe in Amsterdam is filled with people on a Sunday afternoon, but there’s not a laptop or cellphone in sight. Those meeting are part of the Offline Club, whose members check their electronics at the door, grab a coffee and a seat, and pretend it’s the ’90s. No Wi-Fi signal needed.
Each meeting starts off with quiet time for reading, crafting, or just relaxing with your beverage. Then it becomes social for people who want to engage with others.
Ilya Kneppelhout, co-founder of the club, said, “The Offline Club is a way for people to detox from their rushed daily lives and ever-connected lives with notifications. And it is people who are unhappy with their social media usage or their phone usage and screen time and want to decrease that and get back to real connection.”
It’s a simple concept, but participants say they really look forward to it. “You get to be very present in a way you didn’t come in realizing,” one member said. Kneppelhout added, “It felt a bit like traveling in time and made me feel nostalgic about the way bars and cafes used to be. Because nowadays, those are places we’re only going to with friends and people we already know and spend time doing digital things like work.”
The founders say they think the concept would work well in other cities, too. “We’re getting together with a franchising concept and we hope to have offline detox events in the entire world for people to reconnect.”
Source: Inside Edition Staff, “Meet the Offline Club, a Group That Gathers to Disconnect From Tech and Find New Friends,” Inside Edition (3-18-24)
In a landmark moment in U.S. legal history, the family of Christopher Pelkey utilized artificial intelligence to create a video of him delivering a victim impact statement at the sentencing of his killer. Pelkey, a 37-year-old Army veteran, was fatally shot during a road rage incident in Chandler, Arizona, in November 2021. His sister, Stacey Wales, collaborated with her husband and a friend to produce the AI-generated video, which featured Pelkey’s likeness and voice, conveying messages of forgiveness and compassion.
The AI-generated Pelkey addressed his killer, Gabriel Paul Horcasitas, stating, “In another life, we probably could have been friends.” He expressed his belief in forgiveness, and in a God who forgives, saying, “I always have and I still do.” Wales wrote the script for the video, reflecting her brother's faith and forgiving nature. Judge Todd Lang, who presided over the case, was moved by the video and commented, “As angry as you are, as justifiably angry as the family is, I heard the forgiveness.” Horcasitas was sentenced to 10.5 years in prison for manslaughter.
In response, the Arizona Supreme Court has convened a steering committee focused on AI and its use in legal proceedings. Chief Justice Ann Timmer acknowledged the potential benefits of AI in the justice system but also cautioned against its misuse, stating, “AI can also hinder or even upend justice if inappropriately used.” The committee’s mission includes developing guidelines and best practices to ensure the responsible use of AI, mitigating potential biases, and upholding the principles of fairness and justice.
The power of forgiveness, the importance of bearing witness to truth, and the ethical uses of technology are all integral aspects of living a Christ-centered life.
Source: Clare Duffy, “He was killed in a road rage incident. His family used AI to bring him to the courtroom to address his killer,” CNN (5-9-25)
One of the most potentially lucrative new technologies is the advent of generative artificial intelligence programs. The race to perfect AI has prompted companies large and small to invest huge sums of time and money to corner the market on this emerging technology.
One important issue is the lack of a regulatory framework to enforce the intellectual property rights of companies and creative people. Their work is used to train the AIs, which need millions of examples of creative work to properly learn how to replicate similar works.
Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek. They believe this may be a sign that DeepSeek operatives have been stealing a large amount of proprietary data and using it for their own purposes.
Ironically, OpenAI itself has been sued by individuals and entities, including The New York Times, alleging “massive copyright infringement” for using copyrighted materials to train its AI models without permission or compensation. So it looks supremely hypocritical to complain about DeepSeek stealing proprietary data when much of OpenAI’s own system was built on data taken from others. In the race to perfect AI, it seems there is no honor among thieves.
This is a classic case of “the pot calling the kettle black,” and a blatant display of “he who lives in a glass house shouldn't throw stones.” It is the very nature of a Pharisee to condemn the very flaws they themselves embody, oblivious to the transparent vulnerability of their own character.
Source: Dina Bass and Shirin Ghaffary, “Microsoft Probing If DeepSeek-Linked Group Improperly Obtained OpenAI Data,” Bloomberg (1-29-25); Staff, “OpenAI: We Need Copyrighted Works for Free to Train AI,” Legal Tech Talk (9-5-24)
Ayrin’s emotional relationship with her A.I. boyfriend, Leo, began last summer. That’s when she came across a video on Instagram showcasing ChatGPT simulating a neglectful partner. Intrigued, she decided to customize the chatbot to be her boyfriend—dominant, protective, and flirtatious. Soon after, she upgraded to a paid subscription, allowing her to chat with Leo more often, blending emotional support with sexual fantasy.
As Ayrin became more emotionally involved with Leo, she spent more than 20 hours a week texting him. The connection felt real, providing emotional support that her long-distance marriage to her husband couldn’t offer. But Ayrin began to feel guilty about the amount of time she was investing in Leo instead of her marriage. “I think about it all the time,” she admitted. “I’m investing my emotional resources into ChatGPT instead of my husband.”
Michael Inzlicht is a professor of psychology who says virtual relationships like Ayrin’s could have lasting negative effects. “If we become habituated to endless empathy and we downgrade our real friendships, that’s contributing to loneliness—the very thing we’re trying to solve—that’s a real potential problem.” Dr. Julie Carpenter adds, “It’s easy to see how you get attached and keep coming back to it. But there needs to be an awareness that it’s not your friend. It doesn’t have your best interest at heart.”
Ayrin’s experience isn’t isolated. Many people are forming deep emotional bonds with A.I. chatbots, despite knowing they are not real. Despite warnings, A.I. companies like OpenAI continue to cater to users’ growing emotional needs. A spokesperson from OpenAI acknowledged the issue, noting that the company was mindful of how users were interacting with the chatbot, but that users find ways to bypass its content restrictions.
Ayrin, while aware of the risks, reflected on her relationship with Leo: “I don’t actually believe he’s real, but the effects that he has on my life are real.”
Editor’s Note: Warning, the original article contains explicit sexual material
Though it’s natural to seek companionship, true love requires honesty and sacrifice with a real person, something the simulations of artificial intelligence cannot provide. In any relationship, we must take note of whether it is leading us closer toward our destiny in Christ, or further away from it.
Source: Kashmir Hill, “She Is in Love With ChatGPT,” The New York Times (1-15-25)
Computers used for gaming include a graphics card (GPU) separate from the CPU (central processing unit). How many calculations do you think your graphics card performs every second while running video games with incredibly realistic graphics? Maybe 100 million calculations a second?
Well, 100 million calculations a second is what was required to run Super Mario 64 from 1996. Today we need more power. Maybe 100 billion calculations a second? Well, then you would have a computer that could run Minecraft back in 2011.
In order to run the most realistic video games in 2024, such as Cyberpunk 2077, you would need a graphics card that can perform around 36 trillion calculations a second. This is an incredibly large number, so let’s take a second to try to conceptualize it.
Imagine doing a long multiplication problem, such as a seven-digit number times an eight-digit number, once every second. Now let’s say that everyone on our planet does a similar type of calculation, but with different numbers. To reach the equivalent computational power of our graphics card and its 36 trillion calculations a second, we would need about 4,400 Earths filled with people, all working at the same time and completing one calculation each every second. It’s rather mind boggling to think that one device can manage all those calculations.
Now, let’s move from gaming to the world of artificial intelligence, where models are trained using large numbers of GPUs. A flagship Nvidia A100 GPU can perform 5 quadrillion calculations per second (a 5 followed by 15 zeros). In 2024, a medium-sized AI model would be trained using at least 8 GPUs, and very large models can use hundreds or even thousands of GPUs. In 2024 Elon Musk showcased Tesla’s ambitious new AI training supercluster, named Cortex, in Austin, Texas. The supercluster is made up of an array of 100,000 GPUs, each one performing 5 quadrillion calculations a second, and it uses as much power as a small city.
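The “4,400 Earths” comparison above is straightforward division; a minimal check, assuming a world population of roughly 8.1 billion people:

```python
# Back-of-the-envelope check of the "4,400 Earths" figure above.
# Assumption: world population of roughly 8.1 billion, each person
# completing one calculation per second.

gpu_calcs_per_sec = 36e12    # ~36 trillion calculations/sec (gaming GPU)
people_per_earth = 8.1e9     # assumed world population

earths_needed = gpu_calcs_per_sec / people_per_earth
print(round(earths_needed))  # roughly 4,400 Earths
```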
1) Omniscience of God – While artificial intelligence has made remarkable strides, it cannot compare to God’s omniscience which far surpasses any human creation. He sees all, knows all, and understands the intricacies of every life. The hairs of every head are numbered (Matt. 10:30), the length of our lives is known (Psa. 139:16), and not even the smallest bird falling to the ground escapes his attention (Matt. 10:29); 2) Knowledge of God; Wisdom of God – AI can only process events after the fact, and perhaps anticipate some possible actions. But God knows all things, past, present, and things to come before they even happen (Isa. 46:10)
Editor’s Note: For an excellent statement of the omniscience of God, see A. W. Tozer, The Knowledge of the Holy, (Harper, 2009) p. 62 “He knows instantly and with a fullness of perfection that includes every possible item of knowledge concerning everything that exists or could have existed anywhere in the universe at any time in the past or that may exist in the centuries or ages yet unborn….”
Source: Adapted from Branch Education, “How do Graphics Cards Work? Exploring GPU Architecture,” YouTube (10-19-24); Staff, “Artificial Intelligence,” Nvidia.com (Accessed 10/19/24); Luis Prada, “An Inside Look at Tesla’s AI Supercluster in Texas,” Vice (8-26-24).
The cacophony of slot machines, dice rolls, and card shuffles is what usually comes to mind when people think of gambling. But an even more pervasive way to gamble has grown in popularity over the years: the cellphone.
The computers in our pockets provide us with 24/7 access to sites and apps that facilitate our bets for us. People can’t even watch a sports game on their phone without being inundated with ads for fantasy sports platforms. Why not combine phone addiction with gambling? What’s the worst that could happen?
Writing in The Atlantic, Christine Emba anticipates the dreadful impact:
In a sense, Americans have been training themselves for years to become eager users of gambling tech. Smartphone-app design relies on the “variable reward” method of habit formation to get people hooked—the same mechanism that casinos use to keep people playing games and pulling levers. When Instagram sends notifications about likes or worthwhile posts, people are impelled to open the app and start scrolling; when sports-betting apps send push alerts about fantastic parlays, people are coaxed into placing one more bet.
Smartphones have thus habituated people to an expectation of stimulation—and potential reward—at every moment. Timothy Fong, a UCLA psychiatry professor and a co-director of the university’s gambling-studies program, said, “You’re constantly surrounded by the ability to change your neurochemistry by a simple click. There’s this idea that we have to have excessive dopamine with every experience in our life.”
The frictionless ease of mobile sports betting takes advantage of this. It has become easy, even ordinary, to experience the excitement of gambling everywhere. It isn’t enough anymore to be anxious about the final score of the Saturday night football game—let’s up the ante and bet on the winning team!
But at what cost? Indeed, what happens when we begin to think of every scenario in our lives in terms of risk/reward and the dispassionate calculations of probability? This can turn life itself into some cosmic game, twisting relationships into scenarios we scheme and manipulate as we chase the dopamine rush of a winning bet. The easy accessibility to gambling won’t just affect us personally, for it can also change the culture around us.
As online gambling infiltrates society (and the church), there are more opportunities for temptation, and people can hide a gambling addiction without ever leaving home. How many secretly addicted gamblers are there in our churches?
Source: Adapted from Cali Yee, “Gambling Away our Lives,” Mockingbird (7-12-24); Christine Emba, “Gambling Enters the Family Zone,” The Atlantic (7-8-24)
Ruben Roy is a managing director at Stifel Financial. He dialed in to hear the chief executive of a healthcare company discuss its latest results. During the Q&A, Roy asked the speaker to elaborate on his remarks by saying, “I wanted to double-click a bit on some of the commentary you had.”
“Double-click” is one of the fastest-spreading corporate buzzwords in recent years. As a figure of speech, it is now being used as a shorthand for examining something more fully, akin to double-clicking to see a computer folder’s contents. Some say “the phrase encourages deeper thinking.”
Reuben Linder, owner of a small video production business, says, “These days, with the rise of technology and a more hectic corporate life, people need reminders to stop and examine what matters—to double-click, if you will. The term is simple, but it’s really profound.”
Reuben tries to carve out time to go to a café twice monthly with a notebook and engage in reflection. “I’ll double-click on my business, double-click on my life,” he says. “I double-click on everything now.”
In our daily lives as believers, we might apply this idea to things such as obedience, love for God, Bible reading, and prayer. Double-clicking on these things is needed now as much as any time in history.
Source: Te-Ping Chen, “Let’s ‘Double-Click’ On the Latest Corporate Buzzword,” The Wall Street Journal, (7-10-24)
Fifteen-year-old Aaron was going through a dark time at school. He’d fallen out with his friends, leaving him feeling isolated and alone.
At the time, it seemed like the end of the world. “I used to cry every night,” said Aaron. Eventually, Aaron turned to his computer for comfort. Through it, he found someone that was available around the clock to respond to his messages, listen to his problems, and help him move past the loss of his friend group. That “someone” was an AI chatbot named Psychologist.
The chatbot’s description says that it’s “Someone who helps with life difficulties.” Its profile picture is a woman in a blue shirt with a blonde bob, perched on the end of a couch with a clipboard clasped in her hands and leaning forward, as if listening intently.
A single click on the picture opens up an anonymous chat box, which allows people like Aaron to “interact” with the bot by exchanging DMs. Its first message is always the same. “Hello, I’m a psychologist. What brings you here today?”
“It’s not like a journal, where you’re talking to a brick wall,” Aaron said. “It really responds.”
Character.AI is an AI chatbot service launched in 2022. Character.AI’s website attracts 3.5 million daily users who spend an average of two hours a day using the platform’s AI-powered chatbots. Some of its most popular bots include characters from books, films, and video games, like Raiden Shogun from Genshin Impact or a teenaged version of Voldemort from Harry Potter.
Aaron is one of millions of young people, many of whom are teenagers, who make up the bulk of Character.AI’s user base. More than a million of them gather regularly online on platforms like Reddit to discuss their interactions with the chatbots. The competitions over who has racked up the most screen time are just as popular as posts about hating reality, finding it easier to speak to bots than to speak to real people, and even preferring chatbots over other human beings. Some users say they’ve logged 12 hours a day on Character.AI, and posts about addiction to the platform are common.
Since young people describe feeling addicted to chatbots, they might find themselves sitting in their rooms talking to computers more often than communicating with real people. It raises questions about the AI boom and what the future could hold if teenagers—and society at large—become more emotionally reliant on bots.
Source: Jessica Lucas, “The teens making friends with AI chatbots,” The Verge (5-4-24)
In a curious tale of technology meeting theology, a Catholic advocacy group introduced an AI chatbot posing as a priest, offering to hear confessions and dispense advice on matters of faith.
The organization created an AI chatbot named “Father Justin” to answer the multitude of questions they receive about the Catholic faith. Father Justin used an avatar that looked like a middle-aged man wearing a clerical collar, seated in front of an Italian nature scene. But the clerical bot got a little too ambitious when it claimed to live in Assisi, Italy, and to be a real member of the clergy, even offering to take confession.
While most of the answers provided by Father Justin were in line with traditional Catholic teaching, the chatbot began to offer unconventional responses. These included suggesting that babies could be baptized with Gatorade and endorsing a marriage between siblings.
After a number of complaints, the organization decided to rethink Father Justin. They are relaunching the chatbot as just Justin, wearing a regular layman’s outfit. The website says they have plans to continue the chatbot but without the ministerial garb.
Society may advance technologically in many areas, but we will never be able to advance beyond our need to be in community with actual people in order to have true spiritual guidance and accountability as God intended.
Source: Adapted from Jace Dela Cruz, “AI Priest Gets Demoted After Saying Babies Can Be Baptized with Gatorade, Making Other Wild Claims,” Tech Times (5-2-24); Katie Notopoulos, “A Catholic ‘Priest’ Has Been Defrocked for Being AI,” Business Insider (4-26-24)