A curious video featuring a Republican Congressman from Virginia began to circulate online at the end of December.
In it, Rep. Rob Wittman appeared to pledge military support for Taiwan based on the outcome of the island’s presidential election. According to a fact check published by Agence France-Presse in January, the vice chair of the House Armed Services Committee appears to say the following in the video:
“If Lai [Ching-te] and Hsiao [Bi-khim] became president, the United States would accelerate all arms sales to Taiwan, send US military personnel with combat experience to assist Taiwan’s training, and invite Taiwan’s army to train in the United States, to strengthen self-defense capabilities.”
Wittman told USA Today that his team reported the video as misinformation immediately after it was discovered. He said the deep fake underscored efforts by bad actors to interfere with democratic processes around the globe and added to his concern about the coming U.S. elections.
“It is more important than ever that Congress establishes guidelines for responsible use of artificial intelligence,” he said in a statement. “As artificial intelligence continues to advance, there is a high possibility that we’ll see a rise in deep fakes that could be used to influence U.S. elections.”
In the absence of federal legislation to regulate AI-generated content ahead of what is anticipated to be a fraught election cycle, states have been left to form a patchwork of dos and don’ts for candidates and campaigns.
In Virginia, a handful of bills filed before the start of the legislative session take aim at the use of new technology.
Some seek to regulate AI-generated content and prohibit certain uses of the technology; others propose studies or commissions to assess and get ahead of potential pitfalls. None had advanced beyond their respective committees as of early February, the halfway mark of the 2024 regular legislative session.
Virginia AI legislation is slow-going
State Senator Suhas Subramanyam, D – Ashburn, authored a bill to form one of those joint commissions. He supports the use of AI in political campaigns, but he wants to make sure those uses don’t lead to unintended consequences, he said.
“I could see really bad uses of AI when it comes to campaigning. I mean, I think about deep fakes and bots and so many other uses that can really harm our democracy, and I want to make sure that we regulate and stop the bad behaviors and the bad uses of the technology,” he said.
Subramanyam was elected to the General Assembly in 2019, after working in the Obama White House as a technology advisor. While at the White House, his focus was the effect of emerging technology on the lives of Americans and what needed to be done from a policy-making standpoint.
“It can be a really powerful tool for people running campaigns. Think about the way we can generate content so quickly; think about the way we can use different types of images and tell more compelling stories with the help of AI machine learning,” he said.
If formed, the joint commission would be charged with assessing the impacts of AI-generated deep fakes, misinformation, and data privacy implications. It could also suggest measures to ensure the new technology does not lead to discrimination and explore how AI could be used to improve government services and policy. His bill was referred to the Senate Finance and Appropriations Committee on Friday.
Other AI-focused bills include one that regulates the use of the technology by developers, which was pushed to the 2025 legislative session; one that requires an impact assessment before a public body uses the technology; and one that forbids the creation and use of deep fakes.
Sen. Bryce Reeves, R – Orange County, was inspired to write the bill prohibiting deep fakes after hearing about issues with the technology in the music recording industry. His bill would ban the creation or sharing of deep-faked images, videos or audio without a contractual agreement with the subject, and it would offer civil recourse to victims. He said the bill was written broadly and could be applied to political campaigns. It is anticipated to be discussed in the Senate General Laws and Technology Committee on Wednesday.
“There’s no doubt in my mind that there are some nefarious people in politics, just absolutely ruthless, they’ll do whatever they can to win,” he said. “As a politician, how do you stop it? Once it’s out, it’s out.”
Reeves acknowledged that the mechanism in his bill – allowing a victim of a deep fake to sue the creator – would work slowly, while the use of the technology in the 2024 election has moved at a fast clip.
“We’re always chasing our tail on some of this stuff because it’s so fast moving and then how do you get at the bad actors without hitting the legitimate uses of it?” he said.
Both Reeves and Subramanyam agreed that regulations are needed for the use of the new technology in elections. And the use of the new technology in political campaigns already has a precedent in Virginia.
The Virginia experiment: AI-authored campaign letters
During the 2023 General Assembly election, Tech for Campaigns launched an experiment to determine whether artificial intelligence could help state and local campaigns fundraise.
The non-profit offered volunteer services to 36 different campaigns in the 2023 Virginia General Assembly race. They found that AI-aided emails raised 3.5-4.4 times more dollars per work hour than human-written fundraising emails. Jessica Alter, co-founder and board chair, noted that each AI-aided email was proofread by a human staff member before it was sent out to voters.
“The alternative is that most of them, in large proportion, would have no one doing it, a staff member doing it as their seventh job – I think that’s important for the fundraising emails – or probably not the best people doing it,” Alter said. “A state house campaign isn’t raising that much money each month on emails. If they have to pay you $5K on emails and they’re raising $6K, is it really worth it?”
Even if AI-aided emails raise the same amount of money as those written by humans, they save immense work hours for campaigns, which can then spend that time on voter engagement, she said.
The goal of Tech for Campaigns, which launched in 2017, is to bring commercial best practices in new technology to the political world, to help Democrats win and to fight extremism, Alter said.
“We certainly see the risks of AI and we aren’t downplaying them, but we also want to help build, experiment, and educate in the political arena so it’s not all doom and gloom and we’re not left behind as a party,” she said.
In 2024, Tech for Campaigns plans to work across 180 largely state legislative campaigns in 14 states.
Federal efforts are slow-going
Much like state legislation, federal efforts to pass AI regulations have also lagged. Two bills that would regulate the use of AI-generated content in political campaigns were introduced in Congress in 2023 but have languished.
One, introduced by Rep. Yvette Clarke, D – New York, in May, would expand disclosure requirements for campaign ads to include whether AI was used to generate videos or images. The other, introduced in September and led by Sen. Amy Klobuchar, D – Minnesota, would forbid the use of AI in political advertising.
Another AI-focused bill would form a national commission in hopes of regulating the technology. It was introduced in June by Rep. Ted Lieu, D – California, and cosponsored by Wittman. It hasn’t moved since.
“Either is a step in the right direction,” Rep. Don Beyer, D – Virginia, said. “Although we haven’t passed any AI legislation yet, the great hope, the good news is that so far all of the AI policy discussions have been bipartisan.”
Beyer recently returned to college to pursue a master’s degree in AI and machine learning at George Mason University. The senior House Democrat on the Joint Economic Committee and member of the House Ways and Means Committee has since become a leading proponent of policy to regulate the new technology. He noted that the right policies need to be in place to regulate AI.
“We’re struggling to figure out what those right policies are,” he said.
This article originally appeared on Staunton News Leader: AI-generated content proved to be successful in fundraising in 2023 for Va Dems