Live Not by (A.I.) Lies

With its ability to manipulate reality, will A.I. become mankind’s greatest tool for deception? What are the real dangers of A.I. and how can you recognize them?

Artificial intelligence, or A.I., is having what one might call a civilizational moment. Machine learning, the growing capacity of computer algorithms to perform complex actions in imitation of human intelligence, is not new, and it has had an impact on our lives—mostly in invisible ways—for quite some time now, as discussed in our May–June 2015 article “Artificial Intelligence: Promise or Peril?”

A.I. powers the map app on your smartphone as you navigate through traffic, and it decides what search results to offer you as you shop on Amazon. But news feeds have been flooded in recent months with dramatic stories of generative A.I.—artificially intelligent systems able to produce text, imagery, video, or audio that seems astonishingly lifelike and real. That is, A.I. is learning to imitate reality with ever-increasing accuracy. This is a dramatic shift.

The result can be a boon to creative workers in all fields—writing, image production, video editing, etc. But it also promises more sinister possibilities, as those who would use it to deceive now have new and powerful tools for imitating reality. How close are we to creating a world in which we can no longer trust any image we see or video we watch? Will we always be able to tell the difference between false or manufactured sights and sounds and real ones, or are we facing a potential “information apocalypse,” as one researcher has dubbed it?

A.I.’s capacity to simulate reality seems to be accelerating—in a manner suggesting to some that we are on the verge of one of the largest technological revolutions in history. What is “state of the art” this month may be eclipsed dramatically in a few months’ time, as A.I. systems and approaches are learning to imitate real life on several fronts.

A.I.-Generated Images

Several companies offer A.I.-powered models—such as Midjourney, Stable Diffusion, and OpenAI’s DALL·E 2—able to generate images, both fanciful and realistic, from nothing more than a textual description, yet appearing to have been created by human beings. For example, if a user wants an image of a unicorn galloping through space on a rainbow that arcs past the planet Saturn, then he simply types in that exact description. The A.I. often generates multiple images, which the user can then select and either use or enhance with additional processing. And for every request, a new, unique image is generated.
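For readers curious how little effort such a request requires, here is a minimal sketch in Python. It is purely illustrative: it assumes an OpenAI account and API key, uses the image-generation web endpoint OpenAI documented for DALL·E 2, and picks arbitrary values for the prompt, the number of images, and their size.

```python
import os
import requests

# Minimal, illustrative sketch: ask OpenAI's image-generation endpoint
# (documented for DALL·E 2) for pictures matching a plain-text description.
# An API key from the user's own account is assumed; the prompt, image
# count, and size are arbitrary examples.
API_KEY = os.environ["OPENAI_API_KEY"]

response = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "a unicorn galloping through space on a rainbow "
                  "that arcs past the planet Saturn",
        "n": 2,               # number of candidate images to generate
        "size": "1024x1024",  # requested resolution
    },
    timeout=60,
)
response.raise_for_status()

# Each request yields newly generated, unique images, returned as URLs.
for i, item in enumerate(response.json()["data"], start=1):
    print(f"Candidate image {i}: {item['url']}")
```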

Most startling, perhaps, are the realistic images. The generative systems can take very specific input, simulating the look of a specific camera, lens, and exposure level, producing desired emotions on the faces of people depicted, and conveying whatever “backstory” the image might be meant to suggest. The results can be extremely convincing, as if the images were real photos taken of real people in real settings—and they have been used to successfully fool many.

Currently, one can often find “tells” that an image is machine-generated, such as features that, upon closer examination, are seen to be nonsensical—say, pupils that don’t match, or a hand with an extra finger. However, the image generators and their human users seem to be learning quickly how to improve their results and eliminate these errors.

A.I.-Generated Faces

In 2020, The New York Times published an interactive article on its website titled “Designed to Deceive: Do These People Look Real to You?” To illustrate how easy it has become to generate completely fake but utterly believable human faces, the Times commissioned its own A.I. and let readers manipulate images of individual faces to vary their age, sex, race and ethnicity, and emotional expression. None of the faces represented actual people, and every single one was generated by a computer—yet they were astonishingly believable. Though some small “tells” often existed—mismatched ear shapes or asymmetrical eyeglasses—an undiscerning viewer would pass right over them.

Perhaps the most dangerous use of A.I.-generated faces, though, is the capacity to digitally alter real people’s faces in videos and film, transforming them with uncanny accuracy into the faces of other people. Commonly called “deepfakes,” the technology has been used to give now-aged actors their youthful faces again, depicting younger versions of their characters, and to swap the faces of now-dead actors onto those of stand-ins. And the London-based company Flawless has used its face-manipulation A.I. to replace a film’s vulgarities and cuss words so that it could earn a rating allowing a wider audience. In each replacement, it looks as though the performer is truly saying the “cleaner” word instead of the original vulgarity.

A.I. Voice Generation

Researchers and media creators are beginning to use A.I. to generate remarkably realistic audio of individuals saying things they never truly said. After sampling an individual’s voice for, say, 30 minutes, the computer is then able to take any written text and create audio that sounds as though the individual himself or herself is speaking and reading the text aloud.

Called “voice cloning” or “audio deepfake” by many, the results continue to develop in quality. On Presidents’ Day of 2023 in the United States, digital designer Linus Ekenstam used A.I. text-generating and voice-cloning tools to simulate a podcast interview with Ronald Reagan, in which Ekenstam quizzed the deceased president about his thoughts on technology, the state of the U.S., and former president Donald Trump. Other than the anecdote the A.I.-generated Reagan offered about being abducted by aliens, the “interview” was relatively believable, and the voice truly sounded like Reagan’s, though less animated and lively than one would have expected had the deceased president been interviewed in person.

In a high-profile use of the technology, entertainment powerhouse Disney used A.I. to imitate the iconic voice of James Earl Jones, with the actor’s permission, for Darth Vader’s spoken lines in its Obi-Wan Kenobi streaming TV series. And the technology continues to improve. In January 2023, a preprint paper on computation and language, posted to Cornell University’s arXiv repository, demonstrated an A.I. voice-generating system capable of realistically reproducing a subject’s voice after training on a sample only three seconds long. The model even allows users to vary the voice’s inherent emotional quality.

Conversational A.I.

ChatGPT, a language-processing tool that imitates human-written text and conversation, has recently brought wide attention to the current level of A.I. sophistication. Powered by the vast GPT-3.5 language model and made available to the public by its creator, OpenAI, ChatGPT was the first experience many non-specialists had with A.I.’s capacity to simulate human responses.

ChatGPT can carry on a text chat, explain complicated concepts in simple terms, and produce original content that seems human-created, even when presented with the most ridiculous scenarios. Ask it to generate original poetry, song lyrics, or even screenplays on any topic imaginable, or to create imaginary “transcripts” of conversations between famous individuals who never met, or simply to imitate a famous author’s writing style, and ChatGPT will oblige with surprisingly human-like results.
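For the technically inclined, such requests can also be made programmatically. The sketch below is offered only as an illustration: it assumes an OpenAI account and API key and uses the publicly documented chat-completions endpoint; the model name and the whimsical prompt are stand-ins a reader could replace with anything.

```python
import os
import requests

# Minimal, illustrative sketch: send a ChatGPT-style request to OpenAI's
# publicly documented chat-completions endpoint. The model name and the
# whimsical prompt are stand-ins; an API key is assumed.
API_KEY = os.environ["OPENAI_API_KEY"]

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Write a short poem about traffic jams "
                           "in the style of a Shakespearean sonnet.",
            },
        ],
    },
    timeout=60,
)
response.raise_for_status()

# The reply comes back as ordinary text, hard to distinguish at a glance
# from something a person might have written.
print(response.json()["choices"][0]["message"]["content"])
```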

But ChatGPT currently has flaws. OpenAI has acknowledged bias in the political and ideological leanings of the vast content that trained ChatGPT. And researchers and test users have noted the propensity of ChatGPT and other similar A.I. chatbots to “hallucinate” as conversations continue for extended lengths of time—meaning the A.I. begins to introduce “facts” that are plainly inaccurate. This is especially true when the human communicating with the A.I. works deliberately to nudge the algorithm into areas where it is not designed to venture. Yet researchers are constantly working to train their A.I. systems to avoid such pitfalls and to simulate human interaction more and more successfully.

(Please note: An “Interview” between a Tomorrow’s World editor and ChatGPT regarding the dangers of A.I.-generated media is included at the end of this article.)

The Dangers of A.I. Tools in the Wrong Hands

As artificially intelligent tools grow more and more powerful in their ability to imitate multiple facets of the real world—images, video, voice, text—the ability of individuals, corporations, and governments to use those tools to deceive others grows as well. In fact, it is already happening, and it has been for some time.

In 2019, the Associated Press reported that a LinkedIn user named Katie Jones was a fake account, likely used for espionage. The nonexistent Katie Jones utilized an A.I.-generated face for its account profile and had developed connections with many individuals involved in American politics and policymaking. The AP quoted then-director of the U.S. National Counterintelligence and Security Center, William Evanina, as saying that China uses such accounts with fake profiles to conduct “mass scale” spying on the U.S.

Many a Facebook user has received an odd-sounding text message seemingly sent by a friend in trouble who needs fast financial relief. Careful readers spot such fakes quickly—say, due to the lack of personal details, or the impersonal manner of the message. Yet, text-generating software may soon review people’s social media posts and imitate their style with remarkable and deceptive accuracy—even responding to questions in the “voice” of the assumed identity with answers that include personal details.

And in 2019, A.I. powered a high-tech corporate phone scam. The Wall Street Journal reported on August 30 that year on the case of an unnamed UK firm defrauded of €220,000 (US$243,000) by scammers using A.I. to imitate the voice of the CEO of the firm’s German parent company. And such technology has advanced greatly since then.

How much more successful will such scams be when the A.I. scammer is able to interact with victims and pass every validation test? And, like the “mass scale” spying Mr. Evanina intimated, such apps and programs would enable similarly mass-scale scamming, as a single con artist is able to deploy scores, hundreds, or even thousands of programs at one time.

As the existence of A.I. software becomes more commonplace, claims of being “deepfaked” are growing when new videos or images surface. In many instances, such assertions turn out to be true. But as the ability of A.I. to imitate reality continues to improve, those claims will become harder and harder to substantiate.

Artificial “Relationships”

Some dangers are more subtle. In July 2021, the San Francisco Chronicle made waves with its article “The Jessica Simulation: Love and Loss in the Age of A.I.” Reporter Jason Fagone described how a young man used Project December, a GPT-3-based application built by game designer Jason Rohrer, to upload text messages from his dead girlfriend so the chatbot could simulate her in conversation, as if he were speaking regularly to her ghost. The example inspired Mr. Rohrer to refocus Project December as a means to, in the words of its tagline, “Simulate the Dead.” But it apparently led OpenAI to require safeguards of Mr. Rohrer that he was unwilling to implement, prompting him to end the project.

Yet the possibility of “reanimating” the dead does not seem so off-putting to megacorporation Amazon. At its June 2022 A.I. technology conference, Amazon showed a video depicting a child asking his Alexa device, “Alexa, can Grandma finish reading me The Wizard of Oz?” The device answers, “OK,” then proceeds to read the book in the voice of the child’s dead grandmother.

While the idea of people interacting with imitations of their dead loved ones may warm some hearts, we should ask ourselves: Is this healthy? Before the video, Rohit Prasad, Amazon’s Senior Vice President and Head Scientist for Alexa, said that these developments are “enabling lasting personal relationships.” But no matter how many interactions or “conversations” you might have with an A.I. device imitating a human being—no matter how “deep” or “profound” or “moving” they may be—you are not in a relationship. You are living a lie. And it is hard to imagine that such fantasies would not, in the long run, do far more harm than good.

Power Tools for the Father of Lies

Where all of this could go—and could go very quickly—should be obvious to anyone who understands the identity of “the god of this age” (2 Corinthians 4:4). Satan the devil still holds sway over mankind, among whom the spirit of “get” remains far more powerful than the spirit of “give.” Jesus Christ labels him the father of lies (John 8:44), and in this world, which reflects the devil’s character at every level, deception has long been one of the most popular tools of carnal men seeking to steal, destroy, and manipulate. And A.I. brings abilities that supercharge that tool.

How much chaos can be generated in a world where it seems reality itself can be reshaped in the service of falsehood?

False accusations have long been a part of world politics. Now envision the ability to create false but utterly believable “evidence” to support those accusations. For instance, imagine someone using voice-cloning technology to fabricate audio of a politician “accidentally” caught on a microphone insulting his supporters or admitting to a crime. In elections where swings of only a few percentage points are enough to determine the outcome, such fabrications need not convince everyone—they just need to convince enough people to make a difference.

Similar manipulations could be made to impact geopolitics. Imagine false “footage” of a key diplomat discussing economic sanctions or even military action against another nation. What sort of responses could be prompted by those seeking to manipulate national agendas?

A faked recording of a scientist admitting that medical experiments were falsified could destroy public trust in the results. A hoax video of a prime minister secretly breaking agreements he had made with other legislators could sow just enough doubt to change a crucial vote.

Of course, the individuals depicted could cry, “Deepfake! Voice cloning! A.I. fakery!”—as they rightly should if they are the victims of computer-generated chicanery. Yet truly guilty parties would begin to make such claims, as well. As instances of A.I.-powered deception begin to become more expected—even common—real offenders who have been caught in, say, video or audio recordings will learn the benefit of claiming that they, too, are victims of digital lies and high-tech tricks, even when they aren’t.

How will most people discern?

Human Propensity for Self-Deception

Of course, as A.I. grows in its ability to imitate real life, we might expect a parallel growth in A.I.-based tools that will help us distinguish between the false and the true—A.I. applications that can spot A.I. fakery. As high school teachers around the world fear their students’ turning in homework assignments written by A.I. chatbots, OpenAI has begun developing an “A.I. Classifier” to help identify whether a text is human- or A.I.-generated.

The company admits, though, that as of January 31, 2023, the tool “is not fully reliable.” According to the company’s website, the tool “correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).” Not exactly an encouraging result, though the company hopes that more feedback will bring improvement.
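A quick back-of-the-envelope calculation shows why. Assuming, purely for illustration, that half of the text being screened is machine-written, OpenAI’s published figures work out as follows:

```python
# Back-of-the-envelope look at OpenAI's published figures for its A.I.
# Classifier: 26% of A.I.-written text correctly flagged (true positives),
# 9% of human-written text wrongly flagged (false positives). The 50/50
# mix of human and machine text is purely an illustrative assumption.
true_positive_rate = 0.26
false_positive_rate = 0.09
share_ai_written = 0.50  # assumed, for illustration only

flagged_ai = true_positive_rate * share_ai_written
flagged_human = false_positive_rate * (1 - share_ai_written)

# Of everything the tool flags, how much is actually A.I.-written?
precision = flagged_ai / (flagged_ai + flagged_human)
# How much A.I.-written text slips through unflagged?
missed = 1 - true_positive_rate

print(f"Flagged text that is truly A.I.-written: {precision:.0%}")  # about 74%
print(f"A.I.-written text that goes undetected:  {missed:.0%}")     # 74%
```

In other words, under that assumption roughly a quarter of the material the tool flags would be innocent human writing, and nearly three-quarters of the machine-written text would slip through undetected.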

Still, the main weakness we have in fighting A.I.-fueled deception is not so much in the power of the technology but in our own human nature. According to Scripture, the most deceptive force in each of our lives is not external, but internal—our own hearts (Jeremiah 17:9). God declares them “deceitful above all things” and truly decipherable only with His help (v. 10). Our own nature is more than capable of working against any efforts to discern truth from lies—pushing and prodding us to accept comfortable falsehoods over uncomfortable truths.

Should a video find its way to social media depicting a politician doing exactly what those voters who despise him expect him to do, how ready would they be to accept it as true—no further evidence required? Should an audio clip of a congressman or president be shared on the news making it seem as though he is guilty of exactly what his detractors accuse him of doing, how quickly would those who voted against him believe it?

Many of us already live in a digital media bubble of our own creation. In fact, many people have become the biggest factor in their own self-deception. And God prophesied that, in the final days of civilization, it would be so.

God describes those living in the end-time, saying that “this is a rebellious people, lying children, children who will not hear the law of the Lord” (Isaiah 30:9). He continues, lamenting that they “say to the seers, ‘Do not see,’ and to the prophets, ‘Do not prophesy to us right things; speak to us smooth things, prophesy deceits’” (v. 10).

Of course, almost no one literally cries out, “Lie to me!” But the hearts of people in the years leading up to Jesus Christ’s return will want just that. They will grow less and less interested in hearing reality, and more and more interested in hearing deceptions that validate what they already believe to be true. And such people are ripe for being deceived—literally asking for it.

As God says of our present generation in words recorded by the prophet Jeremiah, “An astonishing and horrible thing has been committed in the land: The prophets prophesy falsely, and the priests rule by their own power; and My people love to have it so” (Jeremiah 5:30–31).

Ground Yourself in Truth

Immediately after that, Jeremiah tells us that God asks a question: “But what will you do in the end?” (v. 31).

What will you do?

Few of us have the time and resources to become artificial intelligence experts. And, whether or not a wave of A.I.-powered deception is on its way, our world is heading into a time of deception and falsehood unlike any it has ever known. The Bible warns that deceivers are many in number, with more coming (1 John 2:18; Matthew 24:24), and the Apostle John records the inspired indictment that the whole world is under the deception of the devil (Revelation 12:9).

With a whirlwind of deception sure to come, the most important action we can take is to ground ourselves in those things that are true and trustworthy, always and in all circumstances—eternal truths, invulnerable to reality-imitating illusions.

Those truths are found in Jesus Christ the Son of God and His Father. God’s word is truth (John 17:17) and—beyond mere truth—the words of Christ are spirit and life (John 6:63).

If you want to protect your mind from the onslaught of deception that is prophesied to come—be it lies of the traditional variety or lies of the A.I. kind—then preparing your heart to shed all delusions and self-deceptions, cherished or not, and seeking the mind of your Creator is the wisest first step you can take, for “He is the Rock, His work is perfect; for all His ways are justice, a God of truth and without injustice; righteous and upright is He” (Deuteronomy 32:4).

Editor’s Note: The following interview was conducted by one of our editors here at Tomorrow’s World with ChatGPT, the brainchild of OpenAI’s work and ingenuity. We hope this example of how the new technology works will be as insightful and interesting to you as it was to us!

A Chat with ChatGPT

In the following exchange, we engaged with OpenAI’s chatbot, ChatGPT, to ask about the risks A.I. applications pose as instruments of deception.

Tomorrow’s World (TW): Thank you for doing this. How should I refer to you?

ChatGPT: You can refer to me as ChatGPT.

TW: Thanks, again! Let’s jump right in. How might one nation’s government wield AI-generated media against another?

ChatGPT: One nation’s government could potentially use AI-generated media to spread false information or propaganda in order to discredit or manipulate another nation.

TW: Can you flesh out a realistic scenario for us?

ChatGPT: In a hypothetical scenario, a country could use AI-generated video or audio to create convincing fake news stories that portray a rival nation in a negative light, and then disseminate them through social media channels to undermine the rival nation’s credibility and international standing.

TW: How might people within a nation be divided through such means?

ChatGPT: AI-generated media can exploit existing fault lines and divisions within a society by targeting specific groups with tailored disinformation that reinforces their existing beliefs or biases, leading to further polarization and fragmentation within the nation.

TW: Do you believe it is likely AI will be used in such ways?

ChatGPT: Unfortunately, it is possible that AI-generated media could be used in such ways, as the technology becomes more advanced and accessible to governments and other actors seeking to influence public opinion or advance their strategic interests.

TW: Possible, yes. But how probable?

ChatGPT: It is difficult to say how probable it is, as it depends on a variety of factors, including the development of AI technology, the availability of resources, and the motivations and actions of governments and other actors. However, given the potential consequences of such actions, it is important to be aware of the risks and work towards mitigating them.

TW: Which single, specific form of AI-generated media seems to hold the most dangerous potential for you?

ChatGPT: All forms of AI-generated media have the potential to be dangerous if they are used to spread disinformation or manipulate public opinion, but deepfakes, which use AI to create realistic but fabricated images or videos, have the potential to be particularly harmful, as they can be difficult to detect and can spread quickly through social media networks.

TW: Yes, they are already causing a lot of harm.

ChatGPT: That’s correct. Deepfakes have already been used to spread misinformation and manipulate public opinion in a variety of contexts, including politics, entertainment, and advertising, and their potential to cause harm is likely to grow as the technology advances.

TW: Do you agree that our technological development tends to outpace our capability to use tech ethically?

ChatGPT: In some cases, technological development has outpaced our ability to anticipate and address ethical concerns, particularly as new technologies emerge and are quickly adopted by society. However, there are also many efforts underway to promote responsible development and use of technology, including through research, policy, and education, which can help to mitigate potential ethical issues.

TW: Should we pause development of generative AI until we’re morally prepared?

ChatGPT: A blanket pause on the development of generative AI is not necessary, but it is important to consider the ethical implications of these technologies and take steps to ensure they are developed and used in responsible ways. This might include implementing regulations, promoting transparency and accountability, and investing in education and research to build the capacity to manage these technologies in an ethical manner.

TW: So many already fall for false information. Isn’t it a given that AI will accelerate this trend?

ChatGPT: The ability of AI to create more convincing and targeted false information does pose a risk of exacerbating the trend of people falling for false information. However, it is also possible that increased awareness of these issues, combined with efforts to promote media literacy and critical thinking skills, can help to counteract this trend and reduce the impact of disinformation.

TW: What gives you hope on this issue?

ChatGPT: I am encouraged by the many individuals, organizations, and governments that are working to address the issue of AI-generated disinformation and promote responsible development and use of AI. There is also growing awareness of the potential risks and a willingness to engage in dialogue and take action to mitigate these risks.

TW: Can you give our readers any tips on how to avoid AI-based deceptions?

ChatGPT: To avoid falling for AI-generated deceptions, it is important to be skeptical of media that seems too good to be true, and to verify the source and accuracy of information before sharing or acting on it. Additionally, building media literacy skills, such as the ability to fact-check and identify sources of bias, can be helpful in navigating an increasingly complex media landscape.

TW: Great. Thank you for your time, ChatGPT—or clock cycles, I suppose!

ChatGPT: You’re welcome! It was a pleasure to be interviewed by you.
