Elon Musk’s social media company, X, filed a lawsuit against the state of Minnesota on Wednesday, challenging a newly enacted law that prohibits the use of AI-generated deepfakes to influence elections. The platform argues the statute infringes on free speech protections.
In its complaint filed in federal court, X contends that the law replaces the platform’s discretion over content with that of the government, and exposes companies to potential criminal liability if they fail to comply.
The legal action comes as governments and watchdogs around the world increase scrutiny of artificial intelligence in the electoral arena. With deepfake technology becoming more advanced and accessible, concerns are mounting about its potential to spread misinformation. Just last month, a fake audio recording featuring a prominent European leader circulated widely, underscoring the global implications of unregulated AI in political communication.
“This regime will inevitably suppress broad categories of vital political speech and commentary,” the lawsuit states. Musk, a vocal advocate of unrestrained speech, dismantled Twitter’s content moderation policies after acquiring the platform in 2022, and rebranded it as X the following year. Minnesota Attorney General Keith Ellison, the defendant in the suit, has not yet issued a response.
Legal experts say the case could have far-reaching consequences for how states attempt to regulate AI in the political sphere. A decision in favor of X might encourage other tech companies to pursue similar legal challenges nationwide. Alternatively, a victory for Minnesota could embolden more states to implement tighter controls on AI-generated political content.
Minnesota’s statute targets the use of AI-created images, videos, and audio that appear authentic but are designed to sway voters. According to consumer advocacy group Public Citizen, at least 22 states have passed laws curbing deepfakes in elections, amid growing fears that such tools could be used to deceive the electorate.
The proliferation of generative AI has made it easier than ever to create realistic fake media, prompting some states to push for mandatory disclaimers, while others, like Minnesota, have adopted full bans in election contexts. The ongoing debate reflects a broader struggle to balance technological innovation, civil liberties, and electoral integrity.
In its lawsuit, X asked the court to declare the Minnesota law unconstitutional under both the U.S. and Minnesota constitutions, claiming it is overly broad and vague. The company also invoked Section 230, the federal statute shielding online platforms from liability for user-generated content. X is seeking a permanent injunction to block enforcement of the law.
This is not the first legal challenge the law has faced. In January, Republican state legislator Mary Franson and social media commentator Christopher Kohls also contested the statute. However, U.S. District Judge Laura Provinzino denied their request for a preliminary injunction, though she did not rule on the case’s merits. That decision is currently under appeal.
As the lawsuit progresses, digital rights groups and election advocates are closely monitoring developments. Organizations such as the Electronic Frontier Foundation have voiced concerns that sweeping restrictions could chill legitimate political discourse. On the other hand, supporters of the law argue that unchecked use of AI deepfakes threatens to undermine public confidence in democratic processes.
The case underscores the difficult balancing act lawmakers face at the intersection of emerging technology, free expression, and electoral integrity.