Deepfakes: A Light and Shadow Future. Can You Discern the Truth?

Hi, I'm Tak@, a systems integrator. What if the video or audio in front of you was a complete "fake," intentionally created by someone else? Would you be able to spot it?

Last year, a deepfake scam in Hong Kong tricked a company employee into transferring approximately 25 million USD. AI-generated fakes are no longer a distant threat. From my perspective as a systems integrator, let's explore the "light" and "shadow" this technology brings.

The True Nature of Deepfake Technology

"Synthetic Media" Born from Deep Learning

Deepfakes are a technology that uses deep learning, a type of AI, to create incredibly realistic images, videos, and audio. Specifically, the AI mechanism known as "Generative Adversarial Networks (GANs)" plays a significant role in producing these lifelike fakes.

GANs pit two AIs against each other: a "generator" that creates fakes and a "discriminator" that judges whether a given sample is real or fake. As they compete, both improve, and the generator's output becomes progressively harder to distinguish from the real thing.
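To make the adversarial loop concrete, here is a deliberately tiny sketch. It is not a real GAN (real GANs are neural networks trained by gradient descent on images or audio); the "real" data is just numbers near 4.0, the generator is a single value, and the discriminator's only skill is knowing where real data tends to lie. The point is the alternation: the discriminator keeps learning what "real" looks like, and the generator keeps adjusting until its output fools the discriminator.

```python
import random

random.seed(0)

REAL_MEAN = 4.0                       # "real" data clusters around this value
real_sample = lambda: random.gauss(REAL_MEAN, 0.1)

gen_value = 0.0                       # the generator's single "parameter"
disc_center = 0.0                     # the discriminator's belief about real data

def discriminator_score(x):
    """Higher score = looks more 'real' to the discriminator."""
    return -abs(x - disc_center)

for step in range(200):
    # Discriminator turn: nudge its belief toward freshly observed real data.
    disc_center += 0.1 * (real_sample() - disc_center)
    # Generator turn: try a small random change; keep it only if the
    # discriminator now rates the output as more "real" than before.
    candidate = gen_value + random.uniform(-0.5, 0.5)
    if discriminator_score(candidate) > discriminator_score(gen_value):
        gen_value = candidate

print(gen_value)  # ends up close to REAL_MEAN: the fake now passes as real
```

The competition is the whole mechanism: neither side is told the answer directly, yet the generator's output converges on the real distribution purely by trying to beat the discriminator.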

This technology allows AI to learn the facial features and voice characteristics of specific individuals, generating videos and audio that make it seem as if that person is speaking.

Remarkably, it can precisely replicate even subtle details like facial expressions, lip movements, and eye movements, making it nearly impossible to distinguish from reality at first glance. It's truly a result of AI gaining the ability to "mimic" reality and "create" new content.

Why Have Deepfakes Become So "Accessible"?

Deepfake technology, once requiring specialized skills and equipment, is now becoming commonplace. This is due to advancements in deep learning AI and the widespread availability of high-performance hardware like Graphics Processing Units (GPUs).

These advancements have enabled the learning of complex AI models and rapid processing.

For instance, image generation AI services like Stable Diffusion, developed by Stability AI, can create creative images that look like they were made by a human, just from simple text input.

This has created an environment where anyone can produce high-quality video and audio with just a PC, without special knowledge. As this technology becomes "democratized," we've entered an era where anyone can be a deepfake "creator."

The "Light" of Deepfakes: Applications that Enrich Society

Expanding Entertainment and Creativity

Deepfake technology is bringing new expressive possibilities to the entertainment industry. It can make movie and game characters more realistic and controllable, or even recreate the appearance of past actors.

For example, there's a case where an actor who lost his voice to throat cancer was able to "speak" again through deepfake technology.

In gaming, tools are emerging that mirror a player's face and mouth movements on an avatar, along with "voice skins" that let players give in-game characters a voice of their own choosing.

In my view, this not only deepens immersion but can also have positive social effects, for example letting LGBTQ+ players present themselves in games in a way that matches their identity. The scope of creative expression is broadening, enriching our entertainment experiences.

Breaking Language Barriers and Delivering Diverse Information

Deepfakes are also expected to break down language barriers and deliver information to people worldwide. The ability to translate a specific person's voice into multiple languages and synthesize it so that person appears to be speaking that language significantly improves message transmission efficiency.

David Beckham's use of deepfake technology to deliver a message in nine languages for a malaria eradication campaign is a prime example.

Furthermore, in global campaigns and the advertising industry, it's starting to be used to deliver more personal and direct messages to consumers from different linguistic and cultural backgrounds. This allows us to understand and receive diverse information more deeply.

Potential in Education, Healthcare, and Social Contribution

This technology is also beginning to be used to improve education, healthcare, and society as a whole. Deepfakes can create simulations that closely resemble real-world situations or visually represent complex concepts in an easy-to-understand way.

For example, in the medical field, AI contributes to technology that predicts protein shapes to improve drug discovery processes and deepen our understanding of diseases.

In education, it's used to recreate historical events with realistic visuals or to create scenarios for observing human reactions in psychological research. Moreover, deepfake technology is being applied to create content that raises public awareness about important global issues like climate change and gender equality.

As someone who enjoys creating AI learning planners as a hobby, I truly feel we've entered an era where AI assists individual learning. Deepfakes can be a powerful tool for us to learn more effectively and solve societal challenges.

The "Shadow" of Deepfakes: Serious Issues to Confront

Disinformation and the Crisis of Trust

One of the most serious problems with deepfake technology is the ease with which incredibly realistic disinformation can be created and spread. As the quality of generative AI output improves, it is becoming extremely difficult even for experts to discern fakes.

This can lead to false information being mistakenly perceived as real, causing social chaos and a decline in trust.

The problem is already real: a deepfake video of Ukrainian President Zelenskyy circulated widely, and AI-generated articles touting non-existent tourist attractions and local delicacies have led to the withdrawal of local government sponsorship.

There is also the well-known incident in which the CEO of a British energy company was tricked into transferring funds by a phone call that mimicked the voice of his parent company's chief executive. We are now facing an unprecedented challenge: discerning what is true and what is fake.

Privacy, Intellectual Property, and Ethical Concerns

The advancement of deepfake technology also raises issues concerning individual privacy infringement, unauthorized use of intellectual property, and various ethical dilemmas. Concerns include the synthesis and replication of faces and voices without consent, and the unauthorized use of copyrighted works as AI training data, leading to unclear accountability for generated content.

The incident where deepfake images of Taylor Swift were spread, and Scarlett Johansson's accusation against OpenAI for using an AI voice strikingly similar to her own, demonstrate how individual rights like portrait rights and publicity rights are being threatened.

Furthermore, if AI training data contains discriminatory biases related to gender or ethnicity, there's a risk that the AI could reproduce discriminatory judgments.

These issues demand not only technical solutions but also societal discussion and the establishment of new norms. I believe we are in an era where understanding AI's "imperfections" and then deciding how to utilize it is paramount.

Cybersecurity and the Risk of Misuse

Deepfake technology also carries the risk of making cyberattacks more sophisticated and crimes easier to commit. Realistic fake information and false identities generated by AI can become powerful weapons for phishing scams, business email compromise, and even social manipulation and propaganda.

Researchers at the Center for AI Safety predict that AI will enhance the success rate and scale of cyberattacks, stating that if attack capabilities are strengthened over defenses, it could lead to "significant geopolitical disruption."

There are even concerns that AI could have the capability to design more lethal and infectious pathogens, and its misuse could pose a grave threat to national security. The "dual-use" nature of this technology constantly urges us to remain vigilant.

How to Approach AI Technology: A Path to Coexistence

Regulation and Guideline Development

To address the challenges posed by deepfake technology, countries worldwide are progressing with the development of laws and guidelines. These initiatives aim to find a balance that prevents AI misuse while promoting its healthy utilization.

There are movements to mandate the disclosure of AI-generated content, and discussions are ongoing regarding the application of laws related to personal information protection.

In the EU, the "AI Act" has been enacted; it requires that users be told when they are interacting with an AI system and that deepfake content be labeled as artificially generated.

In Japan, "Human-Centric AI Society Principles" and "AI Business Operator Guidelines" have been formulated, aiming to establish AI governance. Legal frameworks must continuously be updated to keep pace with technological advancements.

Countermeasures through Technology and Research Promotion

To combat disinformation, the development of deepfake detection technology is also rapidly progressing. While AI creates fakes, another AI is trained to detect them, leading to an ongoing "cat and mouse" game. This technological competition is generating increasingly sophisticated detection capabilities.

The "InVID" video verification tool, developed with EU funding, helps collect contextual information from YouTube videos and streamline fact-checking.

Research institutions are also investigating factors contributing to deepfake authenticity and conducting simulations to inform future countermeasures. Since technological progress is relentless, we must constantly seek out and adapt to new detection technologies.

Improving Human Literacy and Societal Dialogue

One of the most crucial countermeasures is for each of us to acquire "correct knowledge" about AI and deepfakes, and to enhance our ability to discern information. It's essential to recognize that AI-generated information may contain errors or inaccuracies, and to cultivate the habit of not blindly accepting it but rather verifying it with reliable sources.

Initiatives in "digital citizenship" are also advancing, which teach individuals how to use digital tools appropriately and act as responsible members of society.

This involves educating people to think and make judgments for themselves, rather than blindly accepting information presented by AI. In my experience in system development, I deeply feel the importance of not just using AI-generated code, but always reviewing and testing it myself.
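As a concrete version of that habit, I never trust a generated helper until I've exercised it. Here is a minimal sketch: `parse_price` stands in for any hypothetical AI-generated function, and the assertions below it are the review step.

```python
# A hypothetical AI-generated helper: plausible at a glance, but only
# running it against edge cases tells you whether it actually works.
def parse_price(text: str) -> float:
    """Convert a price string like '1,280 yen' to a number."""
    digits = "".join(ch for ch in text if ch.isdigit() or ch == ".")
    return float(digits)

# Review step: don't just read the code, exercise it.
assert parse_price("1,280 yen") == 1280.0
assert parse_price("99.5") == 99.5

# An input the generator may never have considered: no digits at all.
# Surfacing this failure mode BEFORE shipping is the whole point.
try:
    parse_price("free")
except ValueError:
    print("edge case caught: digit-free input raises ValueError")
```

A few assertions won't prove correctness, but they catch exactly the kind of confident-looking failure that slips through when generated code is pasted in unread.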

I believe that as long as humans do not abandon the act of "thinking" in an AI-coexistent society, we can overcome the "shadow" of deepfakes.

Reflection

As a systems integrator involved in various system development projects, I find AI to be a wonderful tool. However, we must never forget that beneath its "convenience" always lies "uncertainty."

Just as we manage risks in a project, I strongly believe that the uncertainty brought by AI should also be treated as a subject of management.

This technology is still evolving, and it's entirely possible that today's "common sense" will be obsolete in six months. Not all problems have clear answers.

That's precisely why we must constantly re-evaluate our relationship with this new technology and continue to learn.

Conclusion

Just as "attractive food" advertisements, though unrealistic "lies," were once accepted as the ideal "truth" in our minds, deepfakes also have the potential for "lies" to create new expressions and value, depending on how they are used.

However, we must never forget the danger of these "lies" being unintentionally perceived as truth.

Deepfakes cast both "light" and "shadow" upon our society. How will we leverage this powerful technology for the future, and how will we manage it? And are you prepared to discern whether the information before you is "real" or "fake"?

For each of us to consider our new relationship with AI and continue learning will be the first step towards building a better future.

Follow me!

photo by: ArturSkoniecki