Cracking Down on Deepfakes – Latest Strategies Revealed


Ramping Up the Fight Against Deepfakes

Deepfake scams, ranging from fake videos of celebrities to phony political robocalls, are becoming increasingly common. With what some are calling the "generative elections" approaching, both companies and governments are stepping up their efforts to combat this AI-driven deception.

Major Steps Towards Safeguards

In the past week, significant strides have been made in the battle against deepfakes:

  • The Biden administration unveiled a new consortium focused on AI safety, with major companies like OpenAI, Adobe, and Google among its partners. One of its key objectives is to establish guidelines for watermarking synthetic content.
  • Meta announced plans to identify and label all AI-generated content across its platforms, including Facebook, Instagram, and Threads. Collaboration with industry partners on common technical standards is underway.
  • Google joined the Coalition for Content Provenance and Authenticity (C2PA), an industry effort to label AI-created media, aligning with other tech giants such as Adobe and Microsoft (a simplified sketch of the provenance idea follows this list).
  • The FCC took a decisive step by banning the use of AI-generated voices in robocalls.
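
None of these announcements spell out the underlying mechanics, but the core idea behind content provenance is simple: the tool that creates or edits a piece of media signs a hash of the file along with a short claim about how it was made, and anyone downstream can check that the claim still matches the file. The sketch below is a deliberately simplified, hypothetical illustration using an Ed25519 signature from the Python cryptography package; it is not the actual C2PA manifest format, which is a richer, standardized structure, and the tool name in the claim is made up.

```python
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A creator's signing key (in practice this would chain to a trusted certificate).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw bytes of an image or video..."  # placeholder content

# Provenance claim: what the asset is and how it was made (hypothetical fields).
claim = {
    "sha256": hashlib.sha256(media_bytes).hexdigest(),
    "generator": "example-ai-image-tool",
    "ai_generated": True,
}
claim_bytes = json.dumps(claim, sort_keys=True).encode()
signature = private_key.sign(claim_bytes)

def verify(media: bytes, claim_bytes: bytes, signature: bytes) -> bool:
    """Recompute the media hash, check it against the claim, then check the signature."""
    parsed = json.loads(claim_bytes)
    if hashlib.sha256(media).hexdigest() != parsed["sha256"]:
        return False  # media was altered after signing
    try:
        public_key.verify(signature, claim_bytes)
        return True
    except InvalidSignature:
        return False  # claim was altered or signed by someone else

print(verify(media_bytes, claim_bytes, signature))            # True
print(verify(media_bytes + b"edit", claim_bytes, signature))  # False
```

In rough terms, the shared technical standards the companies are negotiating cover where such signed claims live inside a file and which signing authorities to trust, so that one platform can read labels attached by another.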

The Challenge of Watermarking

Despite these efforts, determining whether a piece of media is AI-generated remains a daunting task. Watermarking is often touted as a solution, but it quickly turns into a cat-and-mouse game with malicious actors: marks can be stripped, forged, or simply lost when content is re-encoded, as the sketch below illustrates. Some experts warn that this could be an uphill battle.
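
None of the announcements above name a specific watermarking technique, so the toy example below is purely illustrative: it hides a bit pattern in the least significant bits of an image array, then shows how a routine re-quantization, of the kind any re-encode or screenshot introduces, destroys the mark. The image and bit pattern are random stand-ins, not any vendor's actual scheme.

```python
import numpy as np

def embed_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a flat bit pattern in the least significant bit of the first pixels."""
    flat = image.flatten()  # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs with the mark
    return flat.reshape(image.shape)

def extract_lsb_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the least significant bits."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in "photo"
watermark = rng.integers(0, 2, size=256, dtype=np.uint8)      # 256-bit mark

marked = embed_lsb_watermark(image, watermark)
print("intact copy    :", np.array_equal(extract_lsb_watermark(marked, watermark.size), watermark))

# Simulate a lossy re-encode or screenshot: coarse re-quantization of pixel values.
requantized = ((marked.astype(np.int32) // 4) * 4).astype(np.uint8)
recovered = extract_lsb_watermark(requantized, watermark.size)
print("after re-encode:",
      np.array_equal(recovered, watermark),
      f"({(recovered == watermark).mean():.0%} of bits match, i.e. chance level)")
```

More robust schemes typically embed marks in frequency or latent space precisely to survive this kind of transformation, but each more resilient mark tends to invite a more aggressive removal attack, which is why experts describe the situation as an arms race.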

Expert Insights

Karen Panetta, a professor of electrical and computer engineering at Tufts University, emphasized the need for a universal benchmark for watermarking. Efforts are underway, she noted, but the technology is still in its early stages and lacks a standardized process.

Panetta also highlighted the value of drawing on expertise from fields like cybersecurity to strengthen deepfake safeguards, and said the work needs to move quickly with a major presidential election approaching.

Looking Ahead

As outlined in Biden’s AI executive order, the coming months will be crucial for establishing standards around deepfake prevention. Panetta stressed the need for swift action to counter the advancements of malicious actors in this space.

“The bad actors are moving. You see it out there. And it’s bringing it to the attention of the public,” Panetta warned, underscoring the urgency of the situation.

