Elon Musk recently shared a manipulated video featuring a deepfake voiceover of Vice President Kamala Harris, which has sparked significant controversy. The altered video depicts Harris making false statements, such as claiming she was selected for her diversity rather than her qualifications.
X, the platform formerly known as Twitter, has not flagged the video as deceptive, despite its policy against sharing synthetic or misleading media.
The video, initially posted by @MrReaganUSA as a parody, quickly gained traction, accumulating over 119 million views by early Sunday afternoon. The altered content includes fabricated comments about Harris’s relationship with President Joe Biden, further misleading viewers.
Musk endorsed the video with a simple “This is amazing” and a laughing emoji; neither his post nor the video carries any warning or label indicating the content’s misleading nature.
X’s policy allows certain types of altered media, such as memes and satire, to remain unlabeled, provided they do not cause significant confusion about their authenticity.
However, deepfake technology presents unique challenges, especially when used in a manner that could deceive or manipulate public opinion.
The concern about deepfakes influencing voter perceptions has been increasing, with tech companies and organizations taking steps to address the issue. Earlier this year, 20 technology companies, including X, committed to combating the deceptive use of AI in the 2024 elections to ensure the integrity of the electoral process.
Despite these commitments, the lack of intervention in this case raises questions about how effectively current policies manage manipulated media. As deepfakes and other synthetic media become more prevalent, the need grows for robust measures to prevent misinformation and protect public trust in media.