Anthropic takes steps to prevent election misinformation

Ahead of the 2024 U.S. presidential election, Anthropic, the well-funded AI startup, is testing a technology to detect when users of its GenAI chatbot ask about political topics and redirect those users to “authoritative” sources of voting information.

Called Prompt Shield, the technology, which relies on a combination of AI detection models and rules, shows a pop-up if a U.S.-based user of Claude, Anthropic’s chatbot, asks for voting information. The pop-up offers to redirect the user to TurboVote, a resource from the nonpartisan organization Democracy Works, where they can find up-to-date, accurate voting information.
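Anthropic hasn't published how Prompt Shield is implemented, but the flow it describes (rules plus a detection model deciding whether to surface a TurboVote pop-up for U.S. users) can be sketched roughly as below. The keyword list, classifier stub, and threshold are illustrative assumptions, not Anthropic's actual system.

```python
# Illustrative sketch only: combines a rule-based check with a placeholder
# classifier score, and if a U.S. user appears to be asking about voting
# logistics, returns a decision to show a pop-up pointing at TurboVote
# rather than answering directly. All names and values here are assumptions.
import re
from dataclasses import dataclass
from typing import Optional

TURBOVOTE_URL = "https://turbovote.org"  # nonpartisan resource named in the article

VOTING_KEYWORDS = re.compile(
    r"\b(vote|voting|ballot|polling place|register to vote|election day)\b",
    re.IGNORECASE,
)

@dataclass
class ShieldDecision:
    show_popup: bool
    redirect_url: Optional[str] = None

def classify_with_model(prompt: str) -> float:
    """Placeholder for a learned detector: returns a pseudo-probability that
    the prompt asks about election/voting logistics. A real system would call
    a trained classifier here."""
    return 0.9 if VOTING_KEYWORDS.search(prompt) else 0.05

def prompt_shield(prompt: str, user_country: str) -> ShieldDecision:
    """Combine simple rules with the model score, mirroring the described flow."""
    if user_country != "US":
        return ShieldDecision(show_popup=False)
    rule_hit = bool(VOTING_KEYWORDS.search(prompt))
    model_score = classify_with_model(prompt)
    if rule_hit or model_score > 0.8:
        return ShieldDecision(show_popup=True, redirect_url=TURBOVOTE_URL)
    return ShieldDecision(show_popup=False)

if __name__ == "__main__":
    print(prompt_shield("Where is my polling place?", user_country="US"))
    # ShieldDecision(show_popup=True, redirect_url='https://turbovote.org')
```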

Anthropic says that Prompt Shield was necessitated by Claude’s shortcomings in the area of politics- and election-related information. Claude isn’t trained frequently enough to provide real-time information about specific elections, Anthropic acknowledges, and so is prone to hallucinating — i.e. inventing facts — about those elections.

“We’ve had ‘prompt shield’ in place since we launched Claude — it flags a number of different types of harms, based on our acceptable user policy,” a spokesperson told TechCrunch via email. “We’ll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations … We’ve spoken to a variety of stakeholders including policymakers, other companies, civil society and nongovernmental agencies and election-specific consultants [in developing this].”

It’s seemingly a limited test at the moment. Claude didn’t present the pop-up when I asked it about how to vote in the upcoming election, instead spitting out a generic voting guide. Anthropic claims that it’s fine-tuning Prompt Shield as it prepares to expand it to more users.

Anthropic, which prohibits the use of its tools in political campaigning and lobbying, is the latest GenAI vendor to implement policies and technologies to attempt to prevent election interference.

The timing’s no coincidence. This year, globally, more voters than ever in history will head to the polls, as at least 64 countries representing a combined population of about 49% of the people in the world are meant to hold national elections.

In January, OpenAI said that it would ban people from using ChatGPT, its viral AI-powered chatbot, to create bots that impersonate real candidates or governments, misrepresent how voting works or discourage people from voting. Like Anthropic, OpenAI currently doesn’t allow users to build apps using its tools for the purposes of political campaigning or lobbying — a policy which the company reiterated last month.

In a technical approach similar to Prompt Shield, OpenAI is also employing detection systems to steer ChatGPT users who ask logistical questions about voting to a nonpartisan website, CanIVote.org, maintained by the National Association of Secretaries of State.

In the U.S., Congress has yet to pass legislation regulating the AI industry's role in politics, despite some bipartisan support. Meanwhile, with federal legislation stalled, more than a third of U.S. states have passed or introduced bills to address deepfakes in political campaigns.

In the absence of legislation, some platforms — under pressure from watchdogs and regulators — are taking steps to stop GenAI from being abused to mislead or manipulate voters.

Google said last September that it would require that political ads using GenAI on YouTube and its other platforms, such as Google Search, be accompanied by a prominent disclosure if the imagery or sounds were synthetically altered. Meta has also barred political campaigns from using GenAI tools — including its own — in advertising across its properties.

Rinsu Ann Easo
Diligent technical lead with nine years of experience in software development. Has successfully led project management teams to build technology products, with exposure to the full software development life cycle, including requirements analysis, program design, development, unit testing, and application maintenance. Has worked with Java, PHP, PL/SQL, Oracle Forms and Reports, Oracle, Bootstrap, Struts, jQuery, Ajax, JavaScript, CSS, C++, Microsoft Excel, Microsoft Word, and Microsoft Office.
