
N.C. A&T Researchers Study Deepfakes Detection, Impacts on Political Leaders

By Jamie Crockett / 04/24/2025 Research, College of Engineering

EAST GREENSBORO, N.C. (April 24, 2025) – Using artificial intelligence (AI) to create a professional headshot, swap faces with someone else or preview what a future child might look like has underscored how increasingly accessible the ability to alter images and video has become.

“Maybe six years ago you would need a computer scientist or a computer engineer to do what we are seeing today, but now anyone can use AI,” said Kaushik Roy, Ph.D., director of the Center for Cyber Defense at North Carolina Agricultural and Technical State University. “It easily becomes a huge problem when fake videos and images are out there.”

That is particularly true for politicians during election campaign season, when they become targets of bad actors who manipulate images, videos and audio to incite fear and spread misinformation.

Roy, Kashifah Afroz, a student at the STEM Early College at N.C. A&T, and Ph.D. student Swetha Chatham studied the types of “deepfakes,” the models used to detect them and how to determine whether content is authentic or fake.

“As part of my community engagement, I work with high school students interested in research, and Kashifah reached out to me to get involved and started working with me when she was a high school junior,” said Roy. “This is great exposure at a very early stage – not just getting involved in research, but also taking the lead as a first author on a research article is impressive. Opportunities like this help students build their resumes and enhance their applications as they consider various colleges.”

In the paper “Understanding the Threat of Political Deepfakes,” first authored by Afroz and presented at the 4th annual IEEE Conference on AI in Cybersecurity at the University of Houston, the team referenced several examples of deepfake content, including “false speeches created by an AI tool from former president Barack Obama surfaced after his presidency.”

The team listed relevant studies and their authors, identified the datasets used and detailed how each one helps improve deepfake detection success rates.

The three main types of deepfakes are manipulated audio, image and video content, which are further broken down into various subtypes. For example, lip sync, face-swap and puppet master are the three subtypes of video manipulation.

“Lip sync uses a trained network to take an authentic video and map the mouthing and position of features to be consistent with synthesized audio,” the team explained in the paper. “This form can take segments from the original video to replace parts of the audio to create a smooth blend.”

The researchers noted in the paper that video deepfakes are the most common threat to a politician. When people consume this type of content, especially during wartime, it can “cause harmful effects that pose a threat to national security, such as mass panic among the citizens.”

Advances in technology have rendered cues such as missing or inconsistent blinking nearly useless for spotting deepfake videos, as most systems now appear more “humanoid” and realistic, factoring blinking into the output. Because of these developments, most laypeople cannot tell the difference.

“AI is inevitable and it is everywhere,” Roy said. “Even to understand what is happening, a person would need to improve their AI literacy and seek out information.”

Roy suggested platforms like YouTube and even ChatGPT as resources to learn more about AI; however, users should know that not everything on them is trustworthy, so they should engage with a level of caution.

Media Contact Information: jicrockett@ncat.edu
