OpenAI Will Limit Artificial Intelligence During Election Campaigns

The year 2024 will be a year of elections in every corner of the planet, from the USA to Europe, India, and Russia. Some 80% of the world’s population will be called to vote, and these will be the first elections held since the advent of generative artificial intelligence.

And this is precisely the context in which there are concerns that chatbots could reveal their negative side, influencing elections through “mass misinformation.”

For this reason, OpenAI, the company behind ChatGPT, will not allow its programs to be used to generate text and images for election purposes, and will introduce tools to fight misinformation, prevent abuse, and ensure transparency.

“We want to make sure our technology is not used in a way that could undermine this process,” OpenAI explains in a memo. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.

“We regularly refine our Usage Policies for ChatGPT and the API as we learn more about how people use or attempt to abuse our technology.

“Until we know more, we don’t allow people to build applications for political campaigning and lobbying,” says OpenAI, which is also experimenting with a “provenance classifier” for images created with its DALL-E software. In the US, when ChatGPT is asked questions about the election, users are redirected to the voting information site CanIVote.org.