OpenAI, the artificial intelligence laboratory, issued a blog post on Monday to address concerns surrounding potential election interference with its technology.
With over a third of the global population heading to the polls this year, concerns have grown that AI could be used to compromise election integrity, particularly OpenAI’s ChatGPT and DALL-E.
The Microsoft-backed company’s ChatGPT, known for its human-like writing capabilities, and DALL-E, an image generator capable of producing realistic deepfakes, have sparked worries about the manipulation of information during elections.
Even OpenAI’s CEO, Sam Altman, expressed concerns in Congressional testimony in May, highlighting the potential for generative AI to facilitate interactive disinformation.
To tackle these concerns, OpenAI disclosed its collaboration with the National Association of Secretaries of State in the United States, a partnership especially relevant to the upcoming presidential election.
ChatGPT will guide users to CanIVote.org for election-related queries, reinforcing the company’s commitment to responsible AI use.
OpenAI is also working to improve transparency around AI-generated images from DALL-E. It plans to add a “cr” icon to images, following the standard of the Coalition for Content Provenance and Authenticity.
The company aims to develop mechanisms to identify DALL-E-generated content even after modifications.
In the blog post, OpenAI reiterated its strict policies against potential abuses of its technology, such as creating chatbots impersonating real individuals or discouraging voting.
DALL-E is explicitly restricted from generating images of real people, including political candidates. Despite these measures, OpenAI faces challenges in enforcing its policies consistently.
A case highlighted by Reuters revealed that while attempts to create images of Donald Trump and Joe Biden were blocked due to content policy concerns, images of several other U.S. politicians, including former Vice President Mike Pence, were successfully generated.