A coalition of 20 prominent technology companies made a joint pledge on Friday to tackle the spread of AI-driven misinformation during this year’s elections.
Their particular focus is deepfakes: deceptive AI-generated audio, video, and imagery that impersonate key figures in democratic processes or spread false information about how to vote.
Microsoft, Meta, Google, Amazon, IBM, Adobe, and chip designer Arm are among the signatories of this commitment. The pact also includes emerging artificial intelligence firms like OpenAI, Anthropic, and Stability AI, alongside social media giants such as Snap, TikTok, and X.
With elections taking place this year in more than 40 nations and affecting over four billion people, tech platforms are bracing for the challenge.
The proliferation of AI-generated content raises significant concerns about election-related misinformation. Clarity, a machine learning firm, reports a staggering 900% year-over-year increase in deepfake creation.
Election misinformation has plagued democratic processes since the 2016 presidential campaign when Russian actors exploited social platforms to spread false narratives. Today, lawmakers are increasingly alarmed by the swift advancement of AI technology.
Josh Becker, a Democratic state senator in California, voices apprehension about the potential misuse of AI in influencing voters. While he welcomes industry initiatives, he stresses the need for clear legislative standards.
Despite these efforts, detection and watermarking technologies for deepfakes have not kept pace with the threat. For now, companies are focusing on establishing technical standards and detection mechanisms.
Combating this multifaceted problem remains a daunting task. Services that detect AI-generated text exhibit biases, and identifying manipulated images and videos is still difficult. Even protective measures such as invisible watermarks can be circumvented, and some audio and video generators omit such signals entirely.
News of the coalition follows OpenAI’s announcement of Sora, a new AI model for generating videos. Sora, akin to OpenAI’s DALL-E for images, crafts high-definition video clips based on user inputs or existing images.
Participating companies commit to eight overarching principles, including evaluating model risks, detecting and addressing misleading content on their platforms, and ensuring transparency in these processes. However, adherence to these commitments is contingent upon their relevance to each company’s services.
Kent Walker, Google’s president of global affairs, underscores the significance of safeguarding democracy through secure elections, saying the coalition reflects the industry’s dedication to combating AI-generated misinformation that undermines trust.
IBM’s chief privacy and trust officer, Christina Montgomery, emphasizes the urgency of collaborative action in safeguarding individuals and societies from the heightened risks posed by AI-generated deceptive content during this pivotal election year.