Extravagantly powerful and noisy engines helped make Ferrari the ultimate sports car brand. Now the company wants to persuade the superrich to buy a model with no engine at all.
The Italian carmaker this week started lifting the hood on its first fully electric vehicle, a yearslong project that has cost the brand hundreds of millions of dollars and promises to set a benchmark for how battery-powered sports cars should look, sound and drive.
EVs pose a particular challenge for luxury sports-car brands, which say roar and rumble are central to their identities and appeal. Ferrari, perhaps more than any other automaker, has built its brand on internal combustion engines.
Ferrari said its EV wouldn’t mimic engine sounds, as some competitors have. Instead, it will pick up the sound of what it calls the “electric engine” and amplify it into the cabin to give the driver feedback when required.
Maybe Ferrari is onto something, but for many who adore their cars, a Ferrari without a roar seems like a love song played on a kazoo, a passionate love that goes unspoken, a marriage without warmth and fire, a honeymoon without dessert, a love poem written by autocorrect, or a romantic dinner on paper plates. It may all be there, but it’s missing something essential.
Source: Stephen Wilmot, “Can Ferrari Persuade the Superrich to Buy an EV Sports Car That Won’t Rev?” The Wall Street Journal (10-10-25)
There’s nothing spooky about ghostworking. The newly coined term describes a set of behaviors meant to create a facade of productivity at the office, like walking around carrying a notebook as a prop or typing random words just to generate the sound of a clacking keyboard.
Pretending to be busy at the office is not something workers recently invented, of course, but it appears to be reaching critical mass. According to a new survey, more than half of all U.S. employees now admit to regularly ghostworking.
According to the report, the results show that 58% of employees admit to regularly pretending to work, while another 34% claim they do so from time to time. What might be most striking are some of the elaborate methods workers use to perform productivity. Apparently, 15% of U.S. employees have faked a phone call for a supervisor’s benefit, while 12% have scheduled fake meetings to pad out their calendars, and 22% have used their computer keyboards as pianos to make the music of office ambiance.
As for what these employees are actually doing, in many cases it’s hunting for other jobs. The survey shows that 92% of employees have job searched in some way while on the clock, with 55% admitting they do so regularly.
The ongoing return-to-office resurgence has left many employees feeling like they’re working inside a fishbowl, performing for the watchful eye of employers. Employees sense a greater need to broadcast that they’re getting work done. So ghostworking is a performance: it involves actively projecting an appearance of busyness without actually engaging in meaningful work.
1) Diligence; Employees; Sincerity - Scripture encourages believers to work wholeheartedly, not just for human approval, but as if working for God; 2) Hypocrisy - The act of ghostworking is a kind of hypocrisy—projecting an image that does not match reality.
Source: Joe Berkowitz, “What is ‘ghostworking’? Most employees say they regularly pretend to work.” Fast Company (5-28-25)
Models who look like Jesus are in high demand in Utah. That’s because for a growing number of people in the state, a picture isn’t complete without Him. They are hiring Jesus look-alikes for family portraits and wedding announcements. Models are showing up to walk with a newly engaged couple through a field, play with young children in the Bonneville Salt Flats, and cram in with the family for the annual Christmas card.
Bob Sagers was walking around an indie music festival in Salt Lake City when a friendly stranger approached and asked for his number. “Has anyone ever told you that you have a Jesus look to you?” the man asked, according to Sagers, a 25-year-old who works as a cheesemonger at a grocery store. It wasn’t a pickup line—the man’s wife was an artist looking for religious models. “I didn’t really get that a lot,” says Sagers, who is 6-foot-5 with dirty-blonde, shoulder-length hair and a beard he says gives Irish and Scandinavian vibes. “I make for a pretty tall Jesus.”
And so it was that Sagers began a side hustle as a savior. Since being recruited about four years ago, Sagers has posed as Jesus nearly a dozen times. Others have done so far more often, charging about $100 to $200 an hour to pose with children, families, and couples at various locations in the Beehive State.
For the newly sought-after models, the job can be freighted with meaning and responsibility. Look-alikes find that people expect them to embody Jesus in more ways than the hair and beard. Some models said they feel like a celebrity when they don the robe—and get treated like one too. (One felt compelled to remind an onlooker he wasn’t the real Jesus.) Others said they’ve had their own semireligious experiences on the job.
Every follower of Jesus may not look like Jesus, but we are called to act like Jesus!
Source: Bradley Olson, “It Pays to Have Long Hair and a Beard in Utah—Jesus Models Are in Demand,” The Wall Street Journal (12-18-24)
Models who look like Jesus are in high demand in Utah. That’s because for a growing number of people in the state, a picture isn’t complete without Him. They are hiring Jesus look-alikes for family portraits and wedding announcements. Models are showing up to walk with a newly engaged couple through a field, play with young children, and cram in with the family for the annual Christmas card. Some charge between $100 and $200 an hour to pose with children, families, and couples at various locations.
For the sought-after models, the job can be freighted with meaning and responsibility. Look-alikes find that people expect them to embody Jesus in more ways than the hair and beard. Jai Knighton has posed as Jesus a number of times. He says, “Portraying Jesus can be tricky.” One person who hired him wanted him to be “the most Christlike person you can be, or people will be able to tell through the photos that it’s not real.” Others were more relaxed, asking him to smile and enjoy himself.
Knighton said he tried to portray Jesus in a way that’s similar to how he is depicted in “The Chosen.” Knighton said, “Stoic Jesus is intimidating. A Jesus who smiles and pats you on the back is much more relatable.”
Christians should keep in mind that we represent Christ to those around us. What image of Jesus are you presenting?
Source: Bradley Olson, “It Pays to Have a Beard in Utah—Jesus Models Are in Demand,” The Wall Street Journal (12-19-24)
One of the most potentially lucrative new technologies is the advent of generative artificial intelligence programs. The race to perfect AI has prompted companies large and small to invest huge sums of time and money to corner the market on this emerging technology.
One important issue is the lack of a regulatory framework to enforce the intellectual property rights of companies and creative people. Their work is used to train the AIs, which need millions of examples of creative work to properly learn how to replicate similar works.
Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek. They believe this is a sign that DeepSeek operatives might be stealing a large amount of proprietary data and using it for their own purposes.
Ironically, OpenAI itself has been sued by individuals and entities, including The New York Times, alleging “massive copyright infringement” for using copyrighted materials to train its AI models without permission or compensation. So it looks supremely hypocritical for OpenAI to complain about DeepSeek stealing its proprietary data when much of OpenAI’s own training data was allegedly taken from others. In the race to perfect AI, it seems there is no honor among thieves.
This is a classic case of “the pot calling the kettle black,” and a blatant display of “he who lives in a glass house shouldn’t throw stones.” It is the nature of a Pharisee to condemn the very flaws they themselves embody, oblivious to the transparent vulnerability of their own character.
Source: Dina Bass and Shirin Ghaffary, “Microsoft Probing If DeepSeek-Linked Group Improperly Obtained OpenAI Data,” Source (1-29-25); Staff, “OpenAI: We Need Copyrighted Works for Free to Train AI,” Legal Tech Talk (9-5-24)
Almost half of Americans (48%) believe that the rise of artificial intelligence has made them less “scam-savvy” than ever before. With AI working its way into education, finance, and even science, a new survey finds people admitting they can’t tell what’s real anymore.
The poll of U.S. adults revealed that only 18% feel “very confident” in their ability to identify a scam before falling victim to it. As the United States enters a new era of tech, AI is continuing to blur the line between reality and an artificial world.
One in three even admits that it would be difficult for them to identify a potential scam if the scammer was trying to impersonate someone they personally know. Between fake news, robo-calls with realistic voices, and texts sent from familiar phone numbers, the possibility and probability of falling victim to a scam may cause anxiety for many Americans.
This may be because 34% of respondents have fallen victim to a scam in one way or another over the years. For others, the sting is still fresh. According to the results, 40% of people have been impacted within the last year — with 8% indicating it was as recent as last month.
BOSS Revolution VP Jessica Poverene said in a statement, “As AI technology advances, so do the tactics of scammers who exploit it. It’s crucial for consumers to stay vigilant.”
The question “Can You Spot an AI Scam?” can apply to Christians with a slight change. The question becomes, “Can You Spot a Doctrinal Scam?” In this age of deception, there are many false doctrines being spread by false teachers and it is important to be informed and vigilant. “But evil people and impostors will flourish. They will deceive others and will themselves be deceived.” (2 Tim. 3:13)
Source: Staff, “Unstoppable AI scams. Americans admit they can’t tell what’s real anymore,” StudyFinds (7-19-24)
Online dating is so last year.
According to a report, popular dating apps have seen a major dip in usage in 2024, with Tinder losing 600,000 Gen Z users, Hinge shedding 131,000 and Bumble declining by 368,000.
Millennials and older generations seem to be holding steady with these apps, with nearly 1 in 10 adults on at least one dating app. But Gen Z is increasingly over the limited online options.
“Some analysts speculate that for younger people, particularly gen Z, the novelty of dating apps is wearing off,” Ofcom said in its annual Online Nation report.
According to experts, Gen Z seems to be more interested in meeting people IRL instead of finding them through an app. The idea of a “meet cute,” first popularized in every rom-com ever, has become a growing trend online. Accounts like @MeetCutesNYC, which boasts over a million followers, post videos of the various ways that couples have found each other.
Possible Preaching Angle:
Although in Bible times marriages were most often arranged by the parents, there are examples of “chance meetings” when couples met and fell in love that can be used for today’s singles. Some examples are Moses and Zipporah (Exodus 2:16-22), Jacob and Rachel (Genesis 29:1-14), and Ruth and Boaz (Ruth 1-3).
Source: Emily Brown, “Swiping Left: Over a Million Gen Zers Deleted Dating Apps This Year,” Relevant Magazine (12-2-24)
What did Jesus really look like? Western art has frequently stumbled over the contradiction between the ascetic figure of Jesus of Nazareth and the iconography of Christ inspired by the heroic, Hellenistic ideal: Christ as beautiful, tall, and broad-shouldered, God's wide receiver; blue-eyed, fair-haired, a straight aquiline nose, Christ as European prince.
Rembrandt van Rijn, in a career rich with artistic innovation, begged to differ. An exhibition at Paris's Louvre Museum showed in dozens of oils, charcoal sketches, and oak-panel studies how the 17th-century Dutch painter virtually reinvented the depiction of Jesus and arrived at a more realistic portrait.
Before Rembrandt, painters tended to reiterate the conventional imagery of Christ. Where artists did rely on life models, they were uniquely beautiful specimens, such as in Michelangelo's Pietà. Rembrandt, working in the relatively open and tolerant society of Golden Age Holland, turned to life models for Jesus to give an "earthly reality" to the face of Christ. He often relied on one model, a young Dutch Jew, whose face appears in the seven oak panels. The result was a more realistic, culturally appropriate, and biblical portrayal of Jesus.
The poor and ascetic Jesus likely was small and thin and almost certainly olive-skinned, with black hair and brown eyes, and so Rembrandt painted him. In this he anticipated much more recent studies of what the historical Jesus was like. The savior of Rembrandt's faith was a young man with a sweet, homely face, heavy-lidded, stoop-shouldered, and wan.
Viewing the pictures in order of their creation shows how Rembrandt's own features gradually infiltrated the images of Christ. It feels like the most spiritually edifying aspect of these works: When Rembrandt looked into the face of his Savior, he saw his own.
Source: Editor, “Rembrandt and the Face of Jesus,” Philadelphia Museum of Art (Accessed 3/28/24); Editor, “How Rembrandt Reinvented Jesus,” Wall Street Journal (5-7-11)
In a curious tale of technology meeting theology, a Catholic advocacy group introduced an AI chatbot posing as a priest, offering to hear confessions and dispense advice on matters of faith.
The organization created an AI chatbot named “Father Justin” to answer the multitude of questions they receive about the Catholic faith. Father Justin used an avatar that looked like a middle-aged man wearing a clerical collar sitting in front of an Italian nature scene. But the clerical bot got a little too ambitious when it claimed to live in Assisi, Italy, and to be a real member of the clergy, even offering to take confession.
While most of the answers provided by Father Justin were in line with traditional Catholic teaching, the chatbot began to offer unconventional responses. These included suggesting that babies could be baptized with Gatorade and endorsing a marriage between siblings.
After a number of complaints, the organization decided to rethink Father Justin. They are relaunching the chatbot as just Justin, wearing a regular layman’s outfit. The website says they have plans to continue the chatbot but without the ministerial garb.
Society may advance technologically in many areas, but we will never be able to advance beyond our need to be in community with actual people in order to have true spiritual guidance and accountability as God intended.
Source: Adapted from Jace Dela Cruz, “AI Priest Gets Demoted After Saying Babies Can Be Baptized with Gatorade, Making Other Wild Claims,” Tech Times (5-2-24); Katie Notopoulos, “A Catholic ‘Priest’ Has Been Defrocked for Being AI,” Business Insider (4-26-24)
In a new study published in Computers in Human Behavior, a team evaluated 118 children aged three to six and found that overall, kids were more inclined to trust machines over humans.
The study divided children into different groups and showed them videos of humans and robots labeling objects, some recognizable to the kids and other items that would be new to them.
Researchers demonstrated the reliability and trustworthiness of humans and robots by having them incorrectly identify familiar items, calling a brush a plate, for instance. This intentional mislabeling allowed researchers to manipulate the children’s concept of who could and could not be trusted. Interestingly, the children showed a stark preference for robots.
When both bots and humans were shown to be equally reliable, children were more inclined to ask robots questions and accept their answers as true. Even when the robots proved unreliable, children preferred them to reliable adults. Children also appeared to be more forgiving of their machine-friends versus their human ones. When the robots made a mistake, children perceived it as accidental. But when the adults fumbled? Children thought those missteps were intentional.
When asked who they would want to learn from and share secrets with, the majority of children chose the robots over the humans. But that preference might only last for so long: Older children were likelier to trust humans when a robot was shown to be unreliable.
Parents have a God-given responsibility of nurturing trust and educating their children. This profound duty should remain in their hands, not delegated to AI, government, or technology. Embracing this role empowers parents to shape the values and character of the next generation.
Source: Reda Wigle, “Study reveals whom children really trust — and it’s not humans,” New York Post (5-31-24)
In November 2019, Coldplay released their eighth album, Everyday Life. In twenty years of professional music, it was the first time that any of Coldplay’s records came with the famous “Parental Advisory” sticker. The whole of the album’s profanity came from three seemingly random “f-bombs.” Not only had Coldplay never had an explicit content warning on any album before. They had never even featured a single profanity on any of their full-length LPs before Everyday Life.
Less than a year later, Taylor Swift released Folklore. The same exact thing happened. Despite a 15+ year history of recording that featured zero strong profanity, Folklore earned the black and white sticker for featuring multiple uses of the f-word. This started a trend for Swift: Every album released since has the same profanity and the same explicit content warning (as is common in the industry, the albums each have a “clean” version that edits out the harshest words).
Both Coldplay and Taylor Swift have historically appealed to a younger, more sensitive demographic. They have a long and successful history of selling their music without profanity.
Tech writer Samuel D. James offers the following observations on the use of profanity by Coldplay, Swift, and other artists:
We live in an era where the combination of authenticity and vice means that we are seeing some examples of performative offense. Performative offense is what happens when people indulge in vice less out of a sincere desire to indulge it, and more out of a desire to sell their image in the public square. It’s because many modern Americans now associate vice with authentic lives that leaders and those who aspire to leadership may flaunt vulgar or antisocial behavior on the grounds that such things make them “real” to the masses.
In other words, it’s cool to be bad. It’s cool to sin a little.
Source: Samuel D. James, “Performative Offense,” Digital Liturgies blog (3-21-24)
The Paralympic Games are a celebration of athletic achievement for those with physical disabilities. But they have been marred by a growing concern: “classification doping” (a term that borrows language used to describe performance-enhancing substance abuse). Athletes are misrepresenting the extent of their disabilities to gain an unfair advantage over competitors.
Double amputee Oksana Masters, a prominent Paralympic athlete, believes officials are more interested in maintaining a positive image than addressing the issue. "They want to keep the warm and fuzzy narrative going," she said. "If they knew what's really going on behind closed doors, they'd be shocked."
The Paralympic classification system is designed to place athletes into competitions with others who have similar impairments. While some disabilities are easy to categorize, others are more ambiguous, relying on the judgment of medical classifiers and the integrity of the athletes themselves.
The most infamous Paralympic cheating scandal came at the 2000 Sydney Games, where Spain’s intellectual disability men’s basketball team won the gold medal despite fielding a roster with 10 players who did not have disabilities.
Physician Kevin Kopera, a volunteer in the Paralympic classification system, is cautious about dismissing the issue. "I don't believe anyone can say to what degree misrepresentation exists in parasports," he said. "Any statement in this regard would be speculative. Certainly, to say it doesn't exist would not be realistic. The stakes are too high."
Source: Roman Stubbs, et al., “As Paralympics get bigger, some athletes say cheating is more prevalent,” The Washington Post (8-28-24)
In a surprising turn of events, a real photograph entered into an AI-generated images category won a jury award but was later disqualified. Photographer Miles Astray submitted a photo of a headless flamingo on a beach to the 1839 Awards competition, judged by esteemed members from various prestigious institutions. His image, titled FLAMINGONE, won the bronze in the AI category and the People’s Choice Award. However, Astray revealed that the photo was real, leading to its disqualification.
Explaining his decision, Astray cited previous contest results. “After seeing recent instances of A.I. generated imagery outshining actual photos in competitions, it occurred to me that I could twist this story inside down and upside out the way only a human could and would, by submitting a real photo into an A.I. competition.”
Lily Fierman, director of Creative Resource Collective, appreciated Astray's message but upheld the disqualification: “Our contest categories are specifically defined to ensure fairness and clarity for all participants. Each category has distinct criteria that entrants’ images must meet.” Despite the disqualification, she acknowledged the importance of Astray’s statement and announced plans to collaborate with him on an editorial.
Astray supported the decision to disqualify his entry, praising Fierman and calling it “completely justified.” “Her words and take on the matter made my day more than any of the press articles that were published since.” In general, he seems to have no regrets about the AI stunt, given the overall positive response.
Astray said, “Winning over both the jury and the public with this picture was not just a win for me but for many creatives out there.”
Human creativity is a gift from a creative God. When we use our creativity, we're showing appreciation for this amazing gift. God gives a unique gift to each person that can’t be duplicated or manufactured artificially. Whether a student, a pastor, an artist, or a businessperson, we can use AI as a helpful tool, but we should never short-circuit the creative process by relying completely on it.
Source: Adam Schrader, “A Photographer Wins a Top Prize in an A.I. Competition for His Non-A.I. Image,” ArtNet (6-14-24)
After a two-week battle with a sudden fast-spreading infection, Joshua Dean, a former quality auditor at Boeing supplier Spirit AeroSystems, passed away. Dean had recently given a deposition alleging that his firing in 2023 was in retaliation for having disclosed what he called “serious and gross misconduct by senior quality management of the 737 production line.”
The Boeing 737 MAX has a troubled safety record, with high-profile crashes in 2018 and 2019 killing hundreds, and an Alaska Airlines flight in early 2024 that had to make an emergency landing after an explosive decompression due to an insufficiently secured door plug.
According to The Seattle Times, Dean was 45 years old, in relatively good health, and known for a healthy lifestyle. In February, he spoke to NPR about Spirit’s troubling safety practices.
"Now, I'm not saying they don't want you to go out there and inspect a job … but if you make too much trouble, you will get the Josh treatment,” Dean said, about his previous firing. “I think they were sending out a message to anybody else. If you are too loud, we will silence you.”
Dean’s death comes two months after another Boeing whistleblower, John Barnett, was found dead of a potentially self-inflicted gunshot wound. Barnett was also in the process of testifying against Boeing about potential safety lapses in the manufacturing of the Boeing 787, and claimed that he was similarly retaliated against for his whistleblowing. Barnett was 63 at the time of his death, and known for vocal criticism of what he perceived to be Boeing’s declining production standards.
Dean’s attorney Brian Knowles, whose firm also represented Barnett, refused to speculate on whether the two deaths are linked, but insisted that people like Dean and Barnett are important.
Knowles said, “Whistleblowers are needed. They bring to light wrongdoing and corruption in the interests of society. It takes a lot of courage to stand up. It’s a difficult set of circumstances. Our thoughts now are with John’s family and Josh’s family.”
Sometimes telling the truth can be costly. But this should never inhibit us from standing for the truth.
Source: Dominic Gates, et al., “Whistleblower Josh Dean of Boeing supplier Spirit AeroSystems has died,” Seattle Times (5-1-24)
A new survey reveals that more Americans are trusting social media and health-related websites for medical advice over an actual healthcare professional. The poll of 2,000 adults finds many will turn to the web for “accurate” information regarding their health before asking their physician. In fact, significantly more people consult healthcare websites (53%) and social media (46%) than a real-life doctor (44%), and 73% believe they have a better understanding of their health than their own doctor does.
Further showcasing their point, two in three Americans say they’ve looked up their symptoms on an internet search engine like Google or a website like WebMD. Respondents say they would rather consult the internet or ChatGPT instead of their doctor because they’re embarrassed by what they’re experiencing (51%) or because they want a second opinion (45%).
Of course, much of the trust people have for technology doesn’t stop with AI. Many would also trust major tech companies with their personal health data, including Google, Apple, Fitbit, and Amazon. Overall, 78% state they’re “confident” that AI and tech companies would protect their health information.
Researcher Lija Hogan said, “This means that we have to figure out the right guardrails to ensure people are getting high-quality advice in the right contexts and how to connect patients to providers.”
In a similar way, many congregants are fact checking their pastor during the sermon and may put trust in strangers on social media and the internet over their pastor’s teaching, relying on dubious information or incorrect theology.
Source: Staff, “More Americans trust AI and social media over their doctor’s opinion,” StudyFinds (12-11-23)
In early March, the Biden Administration began supporting a bill in Congress that would potentially result in a ban of the social media app TikTok. White House press secretary Karine Jean-Pierre called it “important,” saying the administration welcomes it. And it’s not the first restriction on the app; in 2022, Biden signed a bill banning the app on government phones because of potential security risks.
But critics of the president are calling such support hypocritical, because a month prior, the President’s re-election campaign began using the app to engage younger voters.
“We’re going to try to meet voters where they are,” said Jean-Pierre. Campaign staffers clarified that while no White House staffers have the app on their phones, they are working directly with TikTok influencers to get their message across, and taking appropriate security precautions.
The legislation in question, which has received bipartisan support, would require Bytedance, the Chinese company that owns TikTok, to sell the app or face a nationwide ban because of the way its data is stored. U.S. intelligence officials are concerned that Bytedance could be compelled to leak TikTok user data to the Chinese government.
If the bill passes, it would likely be challenged in court by Bytedance, who successfully sued the Trump administration to overturn a similar executive order in 2020.
If we say one thing with our words but communicate something else with our actions, we are not walking in truth, and therefore our words will lack credibility.
Source: Deepa Shivaram, “President Biden would ban TikTok. But candidate Biden is using it for his campaign,” NPR (3-6-24)
Separating fact from fiction is getting harder. Manipulating images—and creating increasingly convincing deepfakes—is getting easier. As what’s real becomes less clear, authenticity is “something we’re thinking about, writing about, aspiring to and judging more than ever.” This is why Merriam-Webster’s word of the year is “authentic,” the company announced in November of 2023.
Editor Peter Sokolowski said, “Can we trust whether a student wrote this paper? Can we trust whether a politician made this statement? We don’t always trust what we see anymore. We sometimes don’t believe our own eyes or our own ears. We are now recognizing that authenticity is a performance itself.”
According to the announcement from Merriam-Webster, “authentic” is a “high-volume lookup” most years but saw a “substantial increase” in 2023. The dictionary has several definitions for the word, including “not false or imitation,” “true to one’s own personality, spirit, or character” and “worthy of acceptance or belief as conforming to or based on fact,” among others.
Sokolowski said, “We see in 2023 a kind of crisis of authenticity. What we realize is that when we question authenticity, we value it even more.”
Other words that saw spikes this year include “deepfake,” “dystopian,” “doppelgänger,” and “deadname,” per Merriam-Webster. This year’s theme of searching for truth seems fitting following last year’s focus on manipulation. The 2022 word of the year was “gaslighting,” a term that originated from a 1938 play by Patrick Hamilton. In the play, a woman complains that the gas lights in her house are dimming while her husband tries to convince her that it’s all in her head.
As technology’s ability to manipulate reality improves, people are searching for the truth. Only the Word of God contains the absolute truth “your word is truth” (John 17:17), as revealed by Jesus, who is “the way, the truth, and the life” (John 14:6).
Source: Teresa Nowakowski, “Merriam-Webster’s 2023 Word of the Year Is ‘Authentic,’” Smithsonian Magazine (11-29-23)
A chorus of discontent is emerging from the users of several popular dating apps like Hinge, Match, and Bumble. The consensus is that the experience has been gradually declining. Dating apps are not as fun, as easy, or as enjoyable as they used to be.
Which is not to say that they’re not still popular. According to a recent Pew Research Center survey, 10% of people in committed romantic partnerships say they met their partner on a dating app or website.
“Our goal is to make meaningful connections for every single person on our platforms," according to a spokesperson for Match.com. "Our business model is driven by providing users with great experiences, so they champion our brands and their power to form life-changing relationships.”
That statement notwithstanding, it’s hard for an app to develop a dedicated customer base when its most satisfied customers, those who find a loving relationship, leave the app behind. Each successful outcome results in the loss of two paying customers.
Instead, most apps achieve financial success by generating repeat users and maximizing the time they spend on the platform. This dynamic creates a situation described as “adverse selection,” where the people who spend the most time on dating apps are beset with suspicion from prior bad experiences on the app, making it harder to find meaningful connections. Anyone who remains must either lower their standards or risk engaging with people who are less than truthful in their behavior. What results is a less enjoyable experience all around.
Economist George Akerlof says there are solutions to the problem, which often revolve around providing more truthful information to counter dishonest actors. But that would require users on dating apps to share potentially embarrassing details of how or why their previous attempts at relational connection failed.
Alas, when it comes to honest self-reflection and authentic disclosure, there appears to be no app for that.
Long lasting relationships are built on the time-tested biblical principles of honesty, trust, and openness. Any other basis for a relationship will lead to suspicion and heartache.
Source: Greg Rosalsky, “The dating app paradox: Why dating apps may be worse than ever,” NPR (2-13-24)
The moment we’ve all breathlessly waited for is finally here: Dictionaries are announcing their words of the year. In December, the US’s most esteemed lexicon, Merriam-Webster, revealed its choice: “authentic.”
In its announcement, the dictionary said the word had seen a big jump in searches this year, thanks to discussions “about AI, celebrity culture, identity, and social media.” The concept of authenticity sits at the intersection of what’s been on our collective minds.
Large language models like ChatGPT and image generators like Dall-E have left us uncertain about what’s genuine, from student essays to the pope’s fashion choices. When it comes to the news, online mis- and disinformation, along with armies of bots, have us operating under different sets of facts.
Sure enough, other leading dictionaries’ words of the year are remarkably similar. Cambridge chose “hallucinate,” focusing its announcement on generative AI: “It’s capable of producing false information – hallucinations – and presenting this information as fact.” Collins didn’t beat around the bush: its word of the year is “AI.”
In a polarized world, the dictionaries’ solidarity suggests there’s something we can all agree on: robots are terrifying. AI is an obsession that seems to cross generations. Whether you’re a boomer or Gen Z, OpenAI feels like a sign of change far beyond NFTs, the metaverse, and all the other fads we were told would transform humanity.
Social media feeds have become carefully curated extensions of ourselves—like little aspirational art projects. As Merriam-Webster points out, authenticity itself has become a performance. In other words, we’re getting very good at pretending to be real.
Source: Matthew Cantor, “Hallucinate, AI, authenticity: dictionaries’ words of the year make our biggest fears clear,” The Guardian (12-5-23)
7 suggestions for integrating Artificial Intelligence into the preacher’s study.