On the morning of the New Hampshire primary, a robocall spoofing a New Hampshire Democrat’s cell phone number used a deepfake of President Joe Biden’s voice to tell voters not to vote in the primary, but instead to vote in November.

The New Hampshire Attorney General is investigating what it is calling an “unlawful attempt at voter suppression” and is warning consumers that the message was “artificially generated” and should be disregarded. The New Hampshire Secretary of State said the calls “reinforce a national concern about the effect of artificial intelligence on campaigns.”

The fake recordings circulated during Slovakia’s elections last year and the deepfaked robocall of President Biden show that AI-generated deepfakes will be used to interfere in elections, which worries misinformation researchers. According to panelists from the University of Washington’s Center for an Informed Public, which studies the spread of strategic misinformation, “When multiple pieces of fake content related to the same subject are pushed out, it can create a more believable narrative.”

The panelists noted that the quality of deepfakes and AI-generated content will improve and become harder to detect, and that “educating the general public about how to decipher authentic information from fake content will be a challenge.”

Spotting deepfakes is like spotting a phishing email: most people think they can, but a study published in iScience, “Fooled Twice: People Cannot Detect Deepfakes but Think They Can,” shows that the majority cannot. The study’s highlights:

  • “People cannot reliably detect deepfakes;”
  • “Raising awareness and financial incentives do not improve people’s detection accuracy;”
  • “People tend to mistake deepfakes as authentic videos (rather than vice versa);”
  • “People overestimate their own deepfake detection abilities.”

It’s not great news in an election year.

Science, in its article “How to Spot a Deepfake – and Prevent It from Causing Political Chaos,” spoke with researchers and experts about the dangers of deepfakes. Science noted that “deepfakes are cheaper and easier to produce than ever, and we’re likely to see many more during the election season.”

The key is to educate people that deepfakes exist and to teach them how to spot one; Science offers some tips for doing so.

We all need to be ready to receive, spot, and stop deepfakes. Here are some additional useful resources: 

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security Team. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.