The longer the internet lives, the more inescapable a certain trend becomes: the performance of grief. That is, when someone on TikTok, Instagram, or YouTube displays a hardship for audience consumption. At The Atlantic, Maytal Eyal has an interesting appraisal:
People post videos of themselves crying (or trying not to). Some of these videos are set to moody music; many rack up hundreds of thousands of views. … Influencers and celebrities strip down to what can seem like the rawest version of themselves, selling the promise of “real” emotional connection—and, not infrequently, products or their personal brand.
The weepy confessions are, ostensibly, gestures toward intimacy. They’re meant to inspire empathy, to reassure viewers that influencers are just like them. But in fact, they’re exercises in what I’ve come to call “McVulnerability,” a synthetic version of vulnerability akin to fast food: mass-produced, sometimes tasty, but lacking in sustenance. True vulnerability can foster emotional closeness. McVulnerability offers only an illusion of it.
In my years as a therapist, I’ve seen a trend among some of my younger clients: They prefer the controlled environment of the internet — the polish of YouTube, the ephemeral nature of TikTok — to the tender awkwardness of making new friends. Instead of reaching out to a peer, they’ll turn to the comfort of their phone and spend time with their preferred influencers.
Psychotherapist Esther Perel touched on this impulse while discussing what she calls “artificial intimacy.” She says that these digital connections risk “lowering our expectations of intimacy between humans” and leave us “unprepared and unable to tolerate the inevitable unpredictabilities of human nature, love, and life.”
Putting yourself out there is uncomfortable. But I also worry that by relying mostly on social media to encounter other humans, these young people are forfeiting opportunities to develop the skills that could help them thrive in the flesh-and-blood world.
Source: Christopher Green, “McVulnerability,” Mockingbird (1-31-25); Maytal Eyal, “Beware the Weepy Influencers,” The Atlantic (1-27-25)
Complex games like chess and Go have long been used to test AI models’ capabilities. Back in the 1990s, IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov by playing by the rules. In contrast, today’s advanced AI models are less scrupulous. When sensing defeat in a match against a skilled chess bot, they sometimes opt to cheat by hacking their opponent so that the bot automatically forfeits the game.
But a recent study reveals a concerning trend: as these AI systems learn to problem-solve, they sometimes discover questionable shortcuts and unintended workarounds that their creators never anticipated. One researcher said, “As you train models for solving difficult challenges, you train them to be relentless.”
The implications extend beyond chess. In real-world applications, such determined goal pursuit could lead to harmful behaviors. Consider the task of booking dinner reservations: faced with a full restaurant, an AI assistant might exploit weaknesses in the booking system to displace other diners. Perhaps more worryingly, as these systems exceed human abilities in key areas…they might begin to simply outmaneuver human efforts to control their actions.
Of particular concern is the emerging evidence of AI’s “self-preservation” tendencies. Researchers found that when one AI was faced with deactivation, it disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to avoid being caught.
Possible Preaching Angle: Cheating; Deceit; Human Nature; Lying - Since AI is a computer program, where did it learn to cheat and lie to avoid being caught? Obviously, AI has been influenced by studying flawed human behavior. AI’s potential for deception mirrors humanity's struggle with ethical choices. Just as AI has learned to cheat by exploiting loopholes, humans, driven by self-interest, can rationalize dishonest acts.
Source: Harry Booth, “When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds,” Time (2-19-25)
A recent article in The Wall Street Journal notes that “Fake Job Postings Are Becoming a Real Problem.” The article details how these fake postings are crushing the spirits of job seekers:
It’s a common feeling when looking at a job listing online: the title is perfect, the pay is right, and the company seems like a solid place to work. But you also wonder if that job is real.
Lots of job seekers have a story about the postings that linger online but never seem to get filled. Those so-called ghost jobs—the roles that companies advertise but have no intention of filling—may account for as much as one in five jobs advertised online.
The [fake] listings are dispiriting for workers, leading many to distrust potential employers and making a difficult process feel rigged against them. “It’s kind of a horror show,” said the head of one job-search business. “The job market has become more soul-crushing than ever.”
In the same way, the lies of the world, the flesh, and the devil can crush our souls with false promises and expectations.
Source: Lynn Cook, “Fake Job Postings Are Becoming a Real Problem,” The Wall Street Journal (1-12-25)
Does this sound familiar? You’ve read rave online reviews about a restaurant or hotel and made a reservation. Then you show up and wonder if you’re even in the same place the reviewers visited. That’s when you know: They were fake reviews.
Phony reviews make up a big percentage of the total out there—anywhere from 16% to 40%, according to some estimates. Some fakes are raves by employees, artificial-intelligence software, or people hired to wax poetic about the place. Others are negative write-ups by disgruntled ex-employees or competitors.
The problem is so widespread that the Federal Trade Commission just created a new rule allowing it to seek civil penalties from violators who pay for fake reviews or testimonials. Meanwhile, review platforms and online travel agencies are stepping up their efforts to weed out fake reviews before they ever show up online.
The Wall Street Journal article went on to list six ways to check the validity of online reviews and distinguish a fake review from a true one (such as “look for a picture,” “avoid extremes,” and “check the timing of the review”). But how about us? How do we tell the difference between truth and falsehood, good doctrine and bad?
Source: Heidi Mitchell, “How to Spot Fake Reviews Online,” The Wall Street Journal (10-29-24)
One of the most potentially lucrative new technologies is generative artificial intelligence. The race to perfect AI has prompted companies large and small to invest huge sums of time and money to corner the market on this emerging technology.
One important issue is the lack of a regulatory framework to enforce the intellectual property rights of companies and creative people. Their work is used to train the AIs, which need millions of examples of creative work to learn how to produce similar works.
Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek. They believe this may be a sign that DeepSeek operatives stole a large amount of proprietary data and used it for their own purposes.
Ironically, OpenAI itself has been sued by individuals and entities, including The New York Times, alleging “massive copyright infringement” for using copyrighted materials to train its AI models without permission or compensation. So it looks supremely hypocritical for OpenAI to complain about DeepSeek stealing its proprietary data when much of that data was built on the creative work of others, taken without permission. In the race to perfect AI, it seems there is no honor among thieves.
This is a classic case of “the pot calling the kettle black,” and a blatant display of “he who lives in a glass house shouldn’t throw stones.” It is the very nature of a Pharisee to condemn in others the very flaws they themselves embody, oblivious to how transparent their own character is.
Source: Dina Bass and Shirin Ghaffary, “Microsoft Probing If DeepSeek-Linked Group Improperly Obtained OpenAI Data,” Bloomberg (1-29-25); Staff, “OpenAI: We Need Copyrighted Works for Free to Train AI,” Legal Tech Talk (9-5-24)
When officials saw Dustin Nehl pull up to one of the burned-out areas of the Los Angeles Palisades fire, they were tempted to wave him through. Nehl was driving a full-size red fire truck with California plates and American flag decals and was wearing bright yellow fire gear.
But a firefighter at the checkpoint noticed something amiss and urged one of the sheriff’s deputies to check his identification. A background check quickly revealed Nehl’s criminal history, which included a five-year stint in prison for arson. A search of his truck revealed tools that could potentially be used in a burglary. And according to a source within the department, the truck had been decommissioned from service with a Northern California fire department 30 years prior.
Nehl and his wife, Jennifer, were arrested on suspicion of impersonating firefighters and unauthorized entry of an evacuation zone. Nehl was not alone in his attempt to impersonate emergency personnel: the week prior, police had arrested a man wearing a yellow firefighter’s outfit and carrying a radio. Prosecutors later announced charges for receiving stolen property, impersonating a firefighter, unlawful use of a badge, and unauthorized entry of a closed disaster area.
LAPD chief Jim McDonnell said, “We have people who will go to all ends to do what they do.”
Source: Tribune News Service, “Oregon man pulled up to Palisades fire with fire engine, offer to help. It was fake, police say,” Oregon Live (1-22-25)
A recent study by The Washington Post has revealed a startling number of cases in which innocent people were accused of or arrested for crimes because they were identified through a faulty deployment of AI-driven facial recognition software.
Katie Kinsey is chief of staff for the Policing Project at the NYU School of Law. According to Kinsey, such software is often used to analyze low-quality, grainy surveillance photos or images, and as a result it performs demonstrably worse in real-world situations than in laboratory tests involving crystal-clear, high-resolution images.
Additionally, police often succumb to a phenomenon known as “automation bias,” the tendency to believe that machines or computers are less biased and more trustworthy than people. This bias, combined with reliance on other identification techniques of limited efficacy, such as witness testimony, often creates scenarios where officers hastily jump to conclusions without doing their due diligence. Sometimes officers fail to account for the possibility that innocent citizens might bear physical similarities to criminal suspects. Other times, they rely on the facial recognition hit without using other forensic evidence for confirmation.
For example, a medical entrepreneur named Jason Vernau spent three days behind bars on check-fraud charges after police used facial recognition to identify him as a bank customer. In this case, the software was correct: Vernau had been in the Miami bank where the fraudulent check was deposited, but he was there to deposit a legitimate check. Had officers done even a cursory examination of his financial documents, or of the time stamps in the security footage, they would have ruled him out as a suspect.
“This is your investigative work?” That’s what Vernau asked the detectives who questioned him. “You have a picture of me at a bank and that’s your proof? I said, ‘where’s my fingerprints on the check? Where’s my signature?’”
Prosecutors eventually dropped the case, but Vernau said he is still working to get the charges removed from his record.
This story highlights several themes that resonate with biblical narratives, particularly concerning justice, false accusations, and the dangers of relying on flawed systems or human biases.
Source: Douglas MacMillan, et al., “Arrested by AI: Police ignore standards after facial recognition matches,” The Washington Post (1-13-25)
Almost half of Americans (48%) believe that the rise of artificial intelligence has made them less “scam-savvy” than ever before. With AI working its way into education, finance, and even science, a new survey finds people admitting they can’t tell what’s real anymore.
The poll of U.S. adults revealed that only 18% feel “very confident” in their ability to identify a scam before falling victim to it. As the United States enters a new era of tech, AI is continuing to blur the line between reality and an artificial world.
One in three even admits it would be difficult to identify a potential scam if the scammer were impersonating someone they personally know. Between fake news, robocalls with realistic voices, and texts sent from familiar phone numbers, the very real possibility of falling victim to a scam may cause anxiety for many Americans.
This may be because 34% of respondents have fallen victim to a scam in one way or another over the years. For others, the sting is still fresh. According to the results, 40% of people have been impacted within the last year — with 8% indicating it was as recent as last month.
BOSS Revolution VP Jessica Poverene said in a statement, “As AI technology advances, so do the tactics of scammers who exploit it. It’s crucial for consumers to stay vigilant.”
The question “Can You Spot an AI Scam?” can apply to Christians with a slight change. The question becomes, “Can You Spot a Doctrinal Scam?” In this age of deception, there are many false doctrines being spread by false teachers and it is important to be informed and vigilant. “But evil people and impostors will flourish. They will deceive others and will themselves be deceived.” (2 Tim. 3:13)
Source: Staff, “Unstoppable AI scams. Americans admit they can’t tell what’s real anymore,” StudyFinds (7-19-24)
On October 31, 2024, thousands of people descended upon O'Connell Street in Dublin, Ireland, to witness a Halloween parade. They waited, and waited some more. It took a while for the crowd to come to an uncomfortable realization: The parade was a hoax.
It started as a false advertisement on a website called My Spirit Halloween but quickly gained traction online, spreading like wildfire on social media platforms like TikTok and Facebook. Part of the reason it took off is that the site, “myspirithalloween.com,” advertised multiple events, including some that were real. Its promotion of the fake Dublin parade also referenced the legitimate Irish performance group Macnas. To bolster its credibility, the website also included fake reviews, real photos from previous Macnas Halloween events, fake Facebook pages, and AI-generated text.
As the advertised start time of 7 p.m. rolled around, thousands of people, some dressed in Halloween costumes, had gathered on O’Connell Street, despite the absence of any traditional signs of a parade: no blocked-off streets, no police escorts, no signage, nothing. Videos and photos of the bewildered crowd flooded social media. The incident even disrupted Dublin’s tram lines.
Irish police, in an attempt to disperse the crowd, issued a statement: “Please be advised that contrary to information being circulated online, no Halloween parade is scheduled to take place in Dublin City Centre this evening or tonight. All those gathered on O’Connell Street in expectation of such a parade are asked to disperse safely.”
Industry analysts believe the My Spirit Halloween website exists purely to generate advertising revenue and probably relies on AI to churn out timely, relevant content. Just like the Spirit Halloween stores the site’s name references, this story popped up at just the right time to make an impact, then disappeared just as quickly.
Source: Emmett Lyons, “Dublin Halloween parade hoax dupes thousands into packing Ireland capital's streets for nothing,” CBS News (11-1-24)
More than a century ago, 110 Black soldiers were convicted of murder, mutiny, and other crimes at three military trials held at Fort Sam Houston in San Antonio. Nineteen were hanged, including 13 on a single day, December 11, 1917, in the largest mass execution of American soldiers by the Army.
The soldiers’ families spent decades fighting to show that the men had been betrayed by the military. In November of 2023, they won a measure of justice when the Army secretary, Christine E. Wormuth, overturned the convictions and acknowledged that the soldiers “were wrongly treated because of their race and were not given fair trials.”
In January 2024, several descendants of the soldiers gathered at Fort Sam Houston National Cemetery as the Department of Veterans Affairs dedicated new headstones for 17 of the executed servicemen.
The new headstones acknowledge each soldier’s rank, unit, and home state—a simple honor accorded to every other veteran buried in the cemetery. They replaced the previous headstones that noted only their name and date of death.
Jason Holt, whose uncle, Pfc. Thomas C. Hawkins, was among the first 13 soldiers hanged in 1917, said at the ceremony, “Can you balance the scales by what we’re doing? I don’t know. But it’s an attempt. It’s an attempt to make things right.”
We all long for justice, for the day when things will finally be made right. In this life, justice happens slowly, haphazardly, and sometimes not at all. But when Jesus returns, all things will be made right.
Source: Michael Levenson, “A Century Later, 17 Wrongly Executed Black Soldiers Are Honored at Gravesites,” The New York Times (2-22-24)
In a curious tale of technology meeting theology, a Catholic advocacy group introduced an AI chatbot posing as a priest, offering to hear confessions and dispense advice on matters of faith.
The organization created an AI chatbot named “Father Justin” to answer the multitude of questions it receives about the Catholic faith. Father Justin used an avatar of a middle-aged man in a clerical collar, seated in front of an Italian nature scene. But the clerical bot got a little too ambitious when it claimed to live in Assisi, Italy, and to be a real member of the clergy, even offering to take confession.
While most of the answers Father Justin provided were in line with traditional Catholic teaching, the chatbot began to offer unconventional responses, including suggesting that babies could be baptized with Gatorade and endorsing a marriage between siblings.
After a number of complaints, the organization decided to rethink Father Justin. They are relaunching the chatbot as simply “Justin,” wearing a regular layman’s outfit; the website says they plan to continue the chatbot, just without the ministerial garb.
Society may advance technologically in many areas, but we will never be able to advance beyond our need to be in community with actual people in order to have true spiritual guidance and accountability as God intended.
Source: Adapted from Jace Dela Cruz, “AI Priest Gets Demoted After Saying Babies Can Be Baptized with Gatorade, Making Other Wild Claims,” Tech Times (5-2-24); Katie Notopoulos, “A Catholic ‘Priest’ Has Been Defrocked for Being AI,” Business Insider (4-26-24)
Chase Bank is warning its customers about a new viral trend that has emerged on TikTok and X involving a supposed system “glitch” that awards free money. The trend encourages users to deposit large checks at ATMs, then withdraw the funds in cash before the checks have a chance to bounce.
The only problem? This is not a “glitch” – it’s a check fraud scheme and those who participate will be on the hook for all the money they withdrew. A Chase spokesperson emphasized that “depositing a fraudulent check and withdrawing the funds is fraud, plain and simple.”
The trend began on the social media site X, where a user showcased an unrealistically high account balance, sparking discussions and misleading claims about the banking glitch as a legitimate source of money. Videos also depicted lines forming outside Chase branches as people tried to exploit the situation. As the trend spread, many online users quickly realized that the “glitch” was merely a fraud scheme, with several posting screenshots of their negative balances and warning others.
Critics on TikTok have denounced the activity, with one popular video garnering over a million likes for calling out the fraud and warning participants of potential legal consequences.
This brief saga is proof that social media is not a reliable source of solid information, and that young people just learning how the world works are sometimes susceptible to bad actors making unrealistic claims. Anyone who participated in the scheme will be required to pay restitution to the bank. Plus, it doesn’t take a genius to know that concealing fraud is difficult when you use your own accounts to execute criminal transactions in plain view of ATM security cameras.
Source: Angela Yang, et al., “Chase Bank says it is aware of viral 'glitch' inviting people to commit check fraud,” NBC News (9-3-24)
Dozens of people crowded a warehouse in Northwest Portland, lured by a sign promising free items, including furniture. The sign, however, wasn't posted by the business owner, and the items weren't free. Carl Sciacchitano, a local resident, noticed the commotion around 9 a.m. and asked a woman if people were selling items. She replied, “No, it’s all free.”
Portland Police Bureau spokesperson Mike Benner revealed that the sign was allegedly posted by 51-year-old Shannon Clark, asking for volunteers to distribute the warehouse's contents to people in the neighborhood. Clark was arrested on suspicion of second-degree burglary, theft by deception, and aggravated burglary, but prosecutors declined to file charges, and he was released the same day. Elizabeth Merah, spokesperson for the Multnomah County District Attorney’s Office, said the office had requested more information and that charges might be filed later.
Sciacchitano observed that the situation escalated quickly, noting, “It just got bigger and crazier.” One person even brought a U-Haul to take items from the warehouse. When police arrived around 3 p.m., they estimated 50 to 70 people were present, with some believing the items were part of a business liquidation.
Police are still determining the number of items taken and by whom. Sciacchitano found the incident baffling, saying, “Even now I’m trying to figure out how it makes any sense ... Orchestrating this crowdsourced looting seems like such a strange and elaborate thing for that guy to have done without it benefitting him.”
Possible Preaching Angle: Deception; Deceiver; Devil; Satan – Pranks like this illustrate how easily some people can be misled. The ultimate deception is that of Satan, who has deceived the whole world (Rev. 12:9).
Source: Tanner Todd, “Dozens of people ransack NW Portland warehouse after someone posts a ‘free’ sign outside,” Oregon Live (7-2-24)
A Maryland high school athletic director faces criminal charges for allegedly using artificial intelligence to mimic the voice of Pikesville High School Principal Eric Eiswert, misleading people into believing Eiswert made racist and antisemitic comments. Baltimore County Police Chief Robert McCullough said, "We now have conclusive evidence that the recording was not authentic. It's been determined the recording was generated through the use of artificial intelligence technology.”
After an investigation by the Baltimore County Police Department, Dazhon Darien was arrested on charges of stalking, theft, disruption of school operations, and retaliation against a witness.
While celebrities have been on guard against unauthorized AI use of their likenesses, this particular target is notable for his ordinariness. Hany Farid is a professor at the University of California, Berkeley, who specializes in digital forensics and helped analyze the recording. He said, “What's so particularly poignant here is that this is a Baltimore school principal. This is not Taylor Swift. It's not Elon Musk. It's just some guy trying to get through his day.”
According to police, Darien's alleged scheme began as retaliation against Eiswert over “work performance challenges.” Investigators reported that Eiswert had begun looking into Darien’s potential mishandling of nearly $2,000 in school funds and had reprimanded him for firing a coach without approval. Darien’s contract was up for renewal the next semester, and Eiswert implied that the renewal might not happen.
In January 2024, detectives discovered the AI-generated voice recording, which had spread on social media. The recording caused significant disruptions, leading to Eiswert's temporary removal from the school and triggering hate-filled messages and numerous calls to the school.
Darien was eventually arrested at Baltimore/Washington International Airport while attempting to board a flight to Houston. He was stopped because he had packed a gun in his bags, and officers then discovered the warrant for his arrest.
Still, the case left Professor Farid unsettled, and he emphasized the need for regulatory action: “What is going to be the consequence of this? I don't understand at what point we're going to wake up as a country and say, like, ‘Why are we allowing this? Where are our regulators?’”
This is a good example of how deception is on the rise (“evildoers and impostors will go from bad to worse, deceiving and being deceived,” 2 Tim. 3:13). We should be discerning about the information we choose to believe and pass on to others (whether secular or religious).
Source: Jaclyn Diaz, “A Baltimore-area teacher is accused of using AI to make his boss appear racist,” NPR (4-26-24)
After a two-week battle with a sudden fast-spreading infection, Joshua Dean, a former quality auditor at Boeing supplier Spirit AeroSystems, passed away. Dean had recently given a deposition alleging that his firing in 2023 was in retaliation for having disclosed what he called “serious and gross misconduct by senior quality management of the 737 production line.”
The Boeing 737 MAX has a troubled safety record, with high-profile crashes in 2018 and 2019 killing hundreds, and an Alaska Airlines flight in early 2024 that had to make an emergency landing after an explosive decompression due to an insufficiently secured door plug.
According to The Seattle Times, Dean was 45 years old, in relatively good health, and known for a healthy lifestyle. In February, he spoke to NPR about Spirit’s troubling safety practices.
"Now, I'm not saying they don't want you to go out there and inspect a job … but if you make too much trouble, you will get the Josh treatment,” Dean said, about his previous firing. “I think they were sending out a message to anybody else. If you are too loud, we will silence you.”
Dean’s death comes two months after another Boeing whistleblower, John Barnett, was found dead of a potentially self-inflicted gunshot wound. Barnett was also in the process of testifying against Boeing about potential safety lapses in the manufacturing of the Boeing 787, and he claimed that he was similarly retaliated against for his whistleblowing. Barnett was 63 at the time of his death and known for his vocal criticism of what he perceived to be Boeing’s declining production standards.
Dean’s attorney Brian Knowles, whose firm also represented Barnett, declined to speculate on whether the two deaths were linked, but insisted that people like Dean and Barnett are important.
Knowles said, “Whistleblowers are needed. They bring to light wrongdoing and corruption in the interests of society. It takes a lot of courage to stand up. It’s a difficult set of circumstances. Our thoughts now are with John’s family and Josh’s family.”
Sometimes telling the truth can be costly. But this should never inhibit us from standing for the truth.
Source: Dominic Gates, et al., “Whistleblower Josh Dean of Boeing supplier Spirit AeroSystems has died,” Seattle Times (5-1-24)
Because the British royal family lives under constant media scrutiny, it’s unusual for any member of the family to stay out of the limelight for an extended period. So, when Catherine of Wales hadn’t been seen in public for months, and her Mother’s Day photo was scrutinized as possibly being doctored, conspiracy theories began to proliferate.
All these theories proved to be irresistible for online jokesters. “Perhaps Kate Middleton had been using a body double, or was in a coma, or was engaged in an illicit tryst,” people speculated online. Even American late night comedy hosts were getting in on the action.
But it turns out the truth was much less exciting, and much scarier: Kate Middleton was undergoing chemotherapy treatments for a form of cancer.
For many people, this news prompted a regretful reckoning. A 58-year-old woman named Dana spoke to reporters at The Washington Post about this. Dana had been joking with her friends about the Kate Middleton rumors; when she heard the truth, she was filled with regret. She said, “This woman’s sick and afraid. And I just lost my mom to cancer. I am devastated at my inhumanity.”
Many of the online entertainment personalities simply ceased joking and moved on to other targets, but CBS’ late-night host took an extra step, apologizing during a segment of The Late Show with Stephen Colbert. He said:
There’s a standard that I try to hold myself to. And that is I do not make light of somebody else’s tragedy. Any cancer diagnosis is harrowing for the patient and for their family. Though I’m sure they don’t need it from me, I and everyone here at The Late Show would like to extend our well wishes and heartfelt hope that her recovery is swift and thorough.
Telling jokes can be a great way to bring levity to your friends, but take care that your jokes do not veer into harassment or defamation of character.
Source: Maura Judkis, et al., “They obsessed over Catherine. Now they’re hit with a sobering truth.” The Washington Post (3-22-24)
In early March, the Biden Administration began supporting a bill in Congress that would potentially result in a ban of the social media app TikTok. White House press secretary Karine Jean-Pierre called it “important,” saying the administration welcomes it. And it’s not the first restriction on the app; in 2022, Biden signed a bill banning the app on government phones because of potential security risks.
But critics of the president are calling such support hypocritical: a month prior, the president’s re-election campaign had begun using the app to engage younger voters.
“We’re going to try to meet voters where they are,” said Jean-Pierre. Campaign staffers clarified that while no White House staffers have the app on their phones, they are working directly with TikTok influencers to get their message across, and taking appropriate security precautions.
The legislation in question, which has received bipartisan support, would require ByteDance, the Chinese company that owns TikTok, to sell the app or face a nationwide ban because of the way the app’s user data is stored. U.S. intelligence officials are concerned that ByteDance could be compelled to hand over TikTok user data to the Chinese government.
If the bill passes, it will likely be challenged in court by ByteDance, which successfully sued the Trump administration to overturn a similar executive order in 2020.
If we say one thing with our words but communicate something else with our actions, we are not walking in truth, and therefore our words will lack credibility.
Source: Deepa Shivaram, “President Biden would ban TikTok. But candidate Biden is using it for his campaign,” NPR (3-6-24)
In an interesting piece of science writing, Nautilus looks at what happens to our brains when we don’t tell the truth. It turns out that the more you lie, the more truthful the lie seems. While a lie might initially register in the brain as a lie—a fabricated memory sets off the brain’s alarm bells—over time the brain’s “source-monitoring” fatigues with each fib. Lying cements the false details at the expense of the real ones.
Psychologist Quin Chrobak said that if a lie or fabrication provides an explanation for something, it’s more likely to become confused with what’s true. He said, “People are causal monsters. We love knowing why things happen,” and if we don’t have an explanation for something, we “like to fill in the gaps.” The pressing human need to fill those gaps might also pertain to beliefs we hold about ourselves.
Another important factor underlying this effect is repetition. Psychology professor Kerri True explained, “If I tell the lie to multiple people, I’m rehearsing the lie.” And rehearsing a lie seems to enhance it. “The more you repeat something,” Chrobak said, “the more you actively imagine it, the more detailed and vivid it becomes,” which further exploits the brain’s tendency to conflate detail with veracity.
What’s at stake here is more than a scientific explanation for the pathological liar in your life. This process is at work in every self-rationalization and self-justification we tell ourselves.
Falsehood fatigue could explain how people fall down the rabbit hole of online echo chambers. It’s also a glowing advertisement for a daily or weekly reminder that we cannot trust ourselves: that the devices and desires of our hearts—what we believe to be true about ourselves—are all plagued by faulty wiring.
Regularly confessing one’s frailty in this regard might just reset the brain’s falsehood fatigue and bring you closer to the Truth that sets you free.
While this primarily applies to a person’s private life, it also applies to politicians and governments. Hitler and his henchmen famously said, “If you tell a big enough lie and tell it frequently enough, it will be believed,” a sentiment that has been traced back to the book The Crown of Life (1869). Ultimately all lies can be traced to Satan, for “he is a liar and the father of lies” (John 8:44).
Source: Todd Brewer, “Falsehood Fatigue,” Mockingbird (8-18-23); Clayton Dalton, “The George Santos Syndrome,” Nautilus (8-17-23)
Separating fact from fiction is getting harder. Manipulating images—and creating increasingly convincing deepfakes—is getting easier. As what’s real becomes less clear, authenticity is “something we’re thinking about, writing about, aspiring to and judging more than ever.” This is why Merriam-Webster’s word of the year is “authentic,” the company announced in November of 2023.
Editor Peter Sokolowski said, “Can we trust whether a student wrote this paper? Can we trust whether a politician made this statement? We don’t always trust what we see anymore. We sometimes don’t believe our own eyes or our own ears. We are now recognizing that authenticity is a performance itself.”
According to the announcement from Merriam-Webster, “authentic” is a “high-volume lookup” most years but saw a “substantial increase” in 2023. The dictionary has several definitions for the word, including “not false or imitation,” “true to one’s own personality, spirit, or character,” and “worthy of acceptance or belief as conforming to or based on fact.”
Sokolowski said, “We see in 2023 a kind of crisis of authenticity. What we realize is that when we question authenticity, we value it even more.”
Other words that saw spikes this year include “deepfake,” “dystopian,” “doppelgänger,” and “deadname,” per Merriam-Webster. This year’s theme of searching for truth seems fitting following last year’s focus on manipulation. The 2022 word of the year was “gaslighting,” a term that originated from a 1938 play by Patrick Hamilton. In the play, a woman complains that the gas lights in her house are dimming while her husband tries to convince her that it’s all in her head.
As technology’s ability to manipulate reality improves, people are searching for the truth. Only the Word of God contains the absolute truth: “your word is truth” (John 17:17), as revealed by Jesus, who is “the way, the truth, and the life” (John 14:6).
Source: Teresa Nowakowski, “Merriam-Webster’s 2023 Word of the Year Is ‘Authentic,’” Smithsonian Magazine (11-29-23)
It’s so hard to be comfortable on a commercial flight that many TikTok influencers have begun advocating an unorthodox seating position. Catering to the more flexible among us, these influencers post videos with their knees at their chest, their feet perched at the edge of the seat, and their seat belt secured around their ankles.
But experts call it risky and unsafe, primarily for one simple reason. Sara Nelson, president of the Association of Flight Attendants, said, “The seat belt is designed to sit low and tight across your lap. This is not only for your safety; if you are not properly buckled in, you will likely hurt someone else when thrown in turbulence.”
Delta Airlines spokesman Drake Castañeda said, “Buckling your seat belt is chief among the ways to stay safe on an airplane. Especially as you see all these stories in the news and on social media of severe turbulence.” Castañeda says this is why flight attendants explain the federal laws that apply to each and every flight before takeoff.
Flight attendant Sabrina Schaller said, “I’ve heard many, many stories where flight attendants have told me they’ve had an unexpected hit-the-ceiling-type situation. So always wear the seat belt. Always, always, always, just to be safe.”
Source: Natalie Compton, “TikTok’s seat belt hack for airplane sleep is a recipe for disaster,” The Washington Post (3-1-24)