Political consultant Steven Kramer faces a $6 million fine and multiple criminal charges for sending AI-generated robocalls mimicking President Joe Biden’s voice. The Federal Communications Commission (FCC) proposed the fine, the agency’s first enforcement action involving generative AI technology. Lingo Telecom, the company that transmitted the calls, faces a separate $2 million fine.
Kramer admitted to sending the misleading robocalls to New Hampshire voters ahead of the state’s presidential primary. The calls falsely suggested that voting in the primary would prevent voters from casting ballots in the general election. Kramer now faces 13 felony counts of voter suppression and 13 misdemeanor counts of impersonating a candidate, which the New Hampshire attorney general’s office will prosecute.
Reactions and statements
New Hampshire Attorney General John Formella emphasized the state’s commitment to protecting election integrity. Former state Democratic Party chair Kathy Sullivan, whose phone number was falsely used in the calls, supported the decisive actions taken by authorities.
Lingo Telecom, while cooperating with investigations, disagreed with the FCC’s actions, claiming the rules are being applied retroactively. The company insists it complied with all regulations and was not involved in producing the calls.
Kramer’s perspective
Kramer defended his actions, stating he intended to highlight the potential dangers of AI in politics, not influence the election outcome. Despite facing severe penalties, including potential jail time, Kramer expressed readiness to contest the charges in court.
In response to the incident, the FCC has introduced measures to combat AI misuse in political communications. The proposed rules would require political advertisers to disclose AI-generated content in TV and radio ads, aiming to increase transparency and protect voters from misleading information.
FCC Chairwoman Jessica Rosenworcel highlighted the risks posed by AI-generated voices in robocalls, stressing the need for robust regulations to prevent deception and protect the public.
This case underscores the growing concern over AI’s role in political communications and the need for stringent regulations to safeguard election integrity.