In recent years, US election deniers have bombarded local election officials with public records requests — commonly called FOIA requests after the federal Freedom of Information Act, though most are filed under state public records laws — in pursuit of supposed instances of fraud. The volume has taken a toll on election workers across the country. Tammy Patrick, CEO of the National Association of Election Officials, says officials have had to dedicate extensive hours to satisfying records requests instead of focusing on their primary election duties.
The situation became so dire in Washington state after the 2020 presidential election that the legislature intervened, redirecting requests related to the state's voter registration database to the Secretary of State's office to relieve local election workers who were struggling to keep up. Democratic state senator Patty Kuderer emphasized the cost of processing these requests, particularly for smaller counties with limited staff.
Experts now worry that election deniers will use generative AI to flood election workers with even more requests. Chatbots such as OpenAI's ChatGPT and Microsoft's Copilot can automatically draft records requests, complete with references to state laws. Zeve Sanderson, director of New York University's Center for Social Media and Politics, warned that a barrage of AI-generated requests could overwhelm local election officials and hinder their ability to keep elections running smoothly.
Unforeseen Consequences
Records requests generated at scale with AI could disrupt critical election processes and divert essential resources from election administration. Sanderson noted that some requests are deliberately designed to consume time and effort, pulling attention from election-related responsibilities. And because AI-generated requests can be easily tailored to specific areas of interest, such as alleged voter fraud, they raise serious concerns about the efficiency and integrity of election offices.
Companies like Meta, OpenAI, and Microsoft, which develop generative AI models, bear responsibility for tools that can be turned to malicious ends. That their systems can facilitate mass records requests underscores the need for robust guardrails and ethical consideration in how such technology is built and deployed. WIRED's experiment with AI-generated FOIA requests is a stark reminder of the unintended consequences that can follow when advanced AI tools reach sensitive domains like elections unchecked.
As generative AI becomes more widespread, governments, tech companies, and regulators will need to collaborate on safeguards against its use to undermine electoral processes. Election workers must be protected from the disruptive effects of excessive, AI-fueled records requests. Through dialogue and proactive measures, stakeholders can mitigate these risks while upholding the integrity and resilience of democratic institutions.