Voters in New Hampshire were targeted by robocalls featuring an artificially generated voice impersonating President Biden.
The messages urged recipients not to vote in the upcoming primary, falsely suggesting that saving their votes for November would be more impactful.
The state attorney general’s office labeled these calls as an unlawful attempt to disrupt the primary and suppress voters, emphasizing that voters should disregard the content entirely.
The calls were also falsified to appear as though they came from a Democratic committee official.
The incident adds to growing concerns about deceptive AI-generated audio, commonly known as deepfakes, in the political landscape, and about the proliferation of AI tools that can clone both the voice and speaking style of virtually anyone.
Experts worry that such tactics will proliferate during the election season, pointing to instances like the Republican National Committee’s use of AI-generated imagery in an ad last year.
In response to the growing misuse of artificial intelligence in political contexts, state lawmakers are racing to draft bills regulating AI-generated content.
The news comes shortly after OpenAI announced a ban on using ChatGPT for political campaigning, a move aimed at preventing the spread of misleading information and preserving the integrity of elections.
Recent incidents, including deepfake videos circulated during foreign elections, have underscored the need for legislative action.
The New Hampshire attorney general’s office initiated an investigation into the robocall accusations following a complaint from Kathleen Sullivan, a former chairwoman of the state Democratic Party.
Sullivan emphasized the urgency of addressing such tactics to prevent their escalation in the future.
Advocacy groups such as Public Citizen are urging policymakers to enact protections promptly to avert chaos in upcoming elections.