The U.S. Senate Commerce Committee convened on Thursday to deliberate on the complex issues at the intersection of artificial intelligence (AI) and privacy.
The discussion highlighted varied concerns, with some lawmakers emphasizing the potential for AI to exacerbate risks such as online surveillance, scams, hyper-targeted advertising, and discriminatory practices.
Conversely, others warned that stringent regulations might inadvertently benefit large tech corporations while imposing burdensome requirements on smaller businesses.
Senator Maria Cantwell (D-Wash.) expressed concern that AI could intensify risks for consumers, particularly through social media and digital advertising. She drew a parallel to the data-driven rise of online ads, warning that tech companies might similarly exploit sensitive data to train AI models, with adverse outcomes for consumers.
Cantwell cited an example from her state where a restaurant allegedly allocated reservations based on patrons’ income levels, thus denying access to less affluent individuals. She emphasized the need for robust privacy laws to prevent misuse of personal data for such discriminatory practices.
Cantwell, along with other lawmakers, is pushing for federal transparency standards to safeguard intellectual property and mitigate risks associated with AI-generated content. In collaboration with Senators Marsha Blackburn (R-Tenn.) and Martin Heinrich (D-N.M.), she introduced bipartisan legislation known as the COPIED Act.
This Act aims to protect publishers, artists, and others from the misuse of AI while also addressing the dangers of AI-generated misinformation. The legislation proposes the development of transparency standards for AI models, detection and watermarking of synthetic content, and new cybersecurity measures to prevent tampering with content provenance data.
The COPIED Act would also bar AI companies from using protected content without permission, grant individuals and companies the right to sue violators, and empower the Federal Trade Commission and state attorneys general to enforce these provisions.
Senator Blackburn underscored the importance of such legislation, alongside other proposals such as the No Fakes Act, in shielding individuals from the harms of AI deepfakes. She asked who owns a person's virtual identity, stressing the urgency of these regulatory measures.
The COPIED Act has garnered support from major organizations including the News/Media Alliance, the National Newspaper Association, the National Association of Broadcasters, SAG-AFTRA, the Nashville Songwriters Association International, and the Recording Academy.
The bill targets large tech entities with substantial annual revenue and significant user bases, including social media companies, search engines, and content platforms. These entities would be required to adhere to the proposed transparency and cybersecurity standards to prevent the misuse of AI-generated content.
Ryan Calo, a law professor and co-founder of the UW Tech Policy Lab, testified that companies have already begun leveraging customer data to set differential prices. He cited reports that Amazon and Uber charged customers higher prices based on their profiles.
Calo advocated for data minimization laws to curb such exploitative practices, emphasizing the potential harms of AI in consumer markets. Other experts, including Udbhav Tiwari of Mozilla and Amba Kak of the AI Now Institute, echoed the need to build privacy protections into AI models and highlighted subtler risks of AI misuse, such as inferences drawn from a person's tone of voice.
Even as witnesses made the case for regulation, some lawmakers and witnesses cautioned that AI rules could weigh most heavily on small businesses. Morgan Reed, president of ACT | The App Association, argued that a single, unified U.S. privacy law would simplify compliance for small businesses, which currently navigate a patchwork of state privacy laws.
He noted that small businesses have rapidly adopted AI tools to boost productivity and should have their experiences considered in policymaking. Senator Ted Cruz (R-Texas) echoed the call for focused rules that protect privacy without stifling technological innovation, advocating targeted legislation such as the Take It Down Act to address specific harms like AI-generated explicit deepfakes.