Deep Deception Detector: Exposing AI-Generated Fake Video
DOI:
https://doi.org/10.47392/IRJAEH.2025.0334

Keywords:
Deepfakes, detection, Transformer, EfficientNet, facial landmarks, feature extraction, fusion

Abstract
Deepfakes are artificially generated videos or images created to falsely portray someone saying or doing things they never actually did. Such manipulated media can cause serious harm, especially when circulated on social media platforms, so detecting deepfakes has become increasingly critical. This project builds on an existing detection method, FAMM, which targets deepfakes in videos that have been compressed. The original FAMM approach analyzes facial movements by computing distances and angles between key facial landmarks, such as the eyes, nose, and mouth, and tracking how these measurements change over time. In that method, a GRU and an SVM capture the temporal and static changes respectively, and their outputs are combined to decide whether a video is authentic or fake. Our updated approach introduces two improvements: we replace the GRU with a Transformer, which better captures time-based changes in facial movements, and we apply EfficientNet to extract more precise features from the face images. The outputs of both models are then combined through a fusion strategy to produce a final classification of the video as real or fake. With these advancements, our system achieves improved accuracy in detecting deepfakes, even in low-quality or compressed videos, demonstrating how modern deep learning techniques can better address the spread of deepfakes on social media platforms.
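To make the described pipeline concrete, the Python sketch below outlines the two branches and the fusion step: geometric features (pairwise landmark distances and angles) fed to a Transformer encoder, an EfficientNet scoring face crops, and a weighted late fusion of the two scores. This is a minimal illustration, not the authors' implementation; the landmark feature definitions, model sizes, fusion weight `alpha`, and the torchvision `efficientnet_b0` stand-in are all assumptions introduced here.

```python
# Minimal two-branch deepfake-detection sketch (illustrative, not the paper's code).
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


def landmark_geometry(landmarks: np.ndarray) -> np.ndarray:
    """Distances and angles between facial landmarks (eyes, nose, mouth, ...).

    landmarks: (T, K, 2) array of K 2-D points per frame over T frames.
    Returns a (T, F) per-frame geometric feature matrix.
    """
    # Pairwise Euclidean distances between all K landmarks in each frame.
    diffs = landmarks[:, :, None, :] - landmarks[:, None, :, :]  # (T, K, K, 2)
    dists = np.linalg.norm(diffs, axis=-1)                       # (T, K, K)
    iu = np.triu_indices(landmarks.shape[1], k=1)
    dist_feats = dists[:, iu[0], iu[1]]                          # (T, K*(K-1)/2)
    # Angle of each landmark relative to the per-frame face centroid.
    centred = landmarks - landmarks.mean(axis=1, keepdims=True)
    angles = np.arctan2(centred[..., 1], centred[..., 0])        # (T, K)
    return np.concatenate([dist_feats, angles], axis=-1)


class TemporalBranch(nn.Module):
    """Transformer encoder over per-frame geometric features (the GRU replacement)."""

    def __init__(self, feat_dim: int, d_model: int = 128, nhead: int = 4, layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, feat_dim)
        h = self.encoder(self.proj(x))                    # (B, T, d_model)
        return self.head(h.mean(dim=1))                   # (B, 1) fake-vs-real logit


class AppearanceBranch(nn.Module):
    """EfficientNet-based scorer for individual face crops.

    efficientnet_b0 stands in for whichever EfficientNet variant is used.
    """

    def __init__(self):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        # Replace the 1000-way classifier with a single fake-vs-real logit.
        self.backbone.classifier[1] = nn.Linear(1280, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:  # (B, 3, 224, 224)
        return self.backbone(frames)                           # (B, 1) logit


def fuse(temporal_logit: torch.Tensor, appearance_logit: torch.Tensor,
         alpha: float = 0.5) -> torch.Tensor:
    """Late fusion of the two branch scores; alpha is an illustrative weight."""
    p_fake = torch.sigmoid(alpha * temporal_logit + (1 - alpha) * appearance_logit)
    return p_fake > 0.5  # True -> classified as fake
```

In this sketch the fusion is a simple convex combination of the two logits; a learned fusion layer trained jointly with both branches would be an equally plausible reading of the abstract's "fusion strategy".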
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.