The Federal Communications Commission has a plan to make advertisers tell audiences when they use content generated by artificial intelligence.

Jessica Rosenworcel, the FCC chair, said the effort to create new disclosure requirements, announced via press release Thursday, is “a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections.”

“There’s too much potential for AI to manipulate voices and images in political advertising to do nothing,” Rosenworcel said. “If a candidate or issue campaign used AI to create an ad, the public has a right to know.”


The effort comes as AI technology is increasingly used in media, including by some to create deepfakes putting false statements in the mouths of political leaders. The FCC pointed to an AI-generated robocall that used President Joe Biden’s voice to tell New Hampshire primary voters not to vote, as well as altered images of Republican candidate Donald Trump circulated by Ron DeSantis’ campaign.

While not prohibiting such content, the FCC plans to implement new requirements that would force TV and radio ads to tell audiences when AI has been used. Many states have already enacted AI regulations of their own, and the FCC guidelines aim to “bring uniformity and stability to this patchwork of state laws, seeking to bring greater transparency in our elections,” according to the press release.