AI voice fraud is on the rise. Here’s how to stay safe, according to security experts


  • According to security experts, AI voice-cloning scams are becoming more common
  • Voice-enabled AI models can be used to imitate loved ones
  • Experts recommend agreeing on a safe phrase with friends and family

The next spam call you receive may not come from a real person, and your ear won’t be able to tell the difference. Scammers are using voice-enabled AI models to automate their fraud schemes, tricking victims by imitating real human callers, including family members.

What is AI voice fraud?

Scam calls aren’t new, but AI-powered calls are a dangerous new breed. They use generative AI to impersonate not only authorities or celebrities, but also friends and family.

The advent of AI models trained on human voices has opened up a new area of risk in telephone fraud. These tools, such as OpenAI’s speech API, support real-time conversations between a human and the AI model. With a small amount of code, they can be programmed to carry out phone scams automatically, coaxing victims into revealing sensitive information.
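
To give a sense of how low the barrier is, here is a minimal sketch using OpenAI’s documented text-to-speech endpoint to turn a line of text into spoken audio. It is a benign illustration of programmatic speech synthesis only, not a fraud workflow, and it assumes the article’s “speech API” refers to this kind of endpoint rather than OpenAI’s real-time conversational interface.

```python
# A minimal sketch of programmatic speech synthesis with OpenAI's
# text-to-speech endpoint -- a benign illustration of how little code
# machine-generated speech requires, not a scam script.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Generate spoken audio from a short line of text.
response = client.audio.speech.create(
    model="tts-1",   # OpenAI's text-to-speech model
    voice="alloy",   # one of the built-in synthetic voices
    input="Hello! This message was generated entirely by software.",
)

# Save the resulting audio to disk as an MP3 file.
response.stream_to_file("synthetic_voice.mp3")
```

The point is not the specific endpoint but the scale it enables: what once required a willing human caller can now be scripted, which is why experts stress verification habits like the safe phrase mentioned above.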