Microsoft Addresses AI Content Threats and Regulatory Needs in New Report

Microsoft has released a new report highlighting the significant challenges and opportunities in protecting people from the dangers of AI-generated content. The 50-page white paper details efforts to curb abusive AI content, such as election misinformation and deepfakes, and addresses related harms like financial scams and explicit content.

The report also provides insights into people’s exposure to AI misuse and their ability to identify synthetic content. It includes policy suggestions for lawmakers crafting regulations around AI-generated text, images, video, and audio.

The report's release coincides with growing concern about AI-generated content during the election season. It also comes on the same day the U.S. Senate approved the Kids Online Safety Act (KOSA), which would create new regulations for social networks, gaming platforms, and streaming services, including rules on content shown to minors.


In a blog post, Microsoft vice chair and president Brad Smith urged lawmakers to expand efforts to promote content authenticity, detect abusive deepfakes, and provide public tools to understand AI harms.

Recent incidents, such as AI-generated videos of President Joe Biden and Vice President Kamala Harris, underscore the urgency of addressing AI misinformation. When X owner Elon Musk shared a deepfake of Harris, it raised questions about whether the post violated the platform's own policies.

Derek Leben, a business ethics professor, emphasized the need for clear thresholds for acceptable AI content that take factors such as intent and context into account. Microsoft's push for regulation and public awareness aims to balance corporate and government responsibilities in combating AI misinformation.

Experts argue that watermarking AI content alone is insufficient to prevent AI-generated misinformation. Rahul Sood of voice-security firm Pindrop highlighted the difficulty of detecting "partial deepfakes," which splice synthetic audio into real recordings.

Although real-time detection technology exists, no mandate requires platforms to deploy it. This regulatory gap complicates efforts to identify and mitigate AI-generated threats.

Companies like Trend Micro are developing tools to detect deepfakes in real time. A recent Trend Micro study found that 36% of people surveyed had experienced a scam, while 60% said they could identify one.

Jon Clay, Trend Micro’s VP of threat intelligence, stressed that AI misinformation, particularly deepfakes, will be a major challenge in the coming years. These efforts reflect the ongoing battle to differentiate real from fake content in the age of advanced AI.

Sajda Parveen
Sajda Parveen is a market expert with over six years of experience in the field, and she shares her expertise with readers. You can reach her at [email protected]