Deepfake Video Detection Using LSTM Networks: A Temporal Sequence Learning Approach
DOI: https://doi.org/10.47392/IRJAEH.2025.0325

Keywords: Deepfake, LSTM, RNN, temporal dependencies, video analysis, data preprocessing, data augmentation, model generalization, deepfake detection, sequential data

Abstract
With the swift emergence of deepfake videos, there is an urgent demand for sophisticated and effective detection methods to counter the dangers posed by misinformation and digital manipulation. This research examines the application of Long Short-Term Memory (LSTM) networks for the identification of deepfake content. LSTM, a variant of recurrent neural networks (RNNs), is recognized for its proficiency in learning time-related patterns in sequential data, rendering it particularly effective for analyzing the changing dynamics in video streams. The study aims to utilize LSTM architecture to identify irregularities characteristic of altered video content, highlighting the importance of temporal patterns in recognizing deepfakes. The proposed methodology involves thorough video preprocessing, creation of high-quality training datasets, and the implementation of data augmentation techniques to enhance the model’s generalization abilities. Additionally, the research investigates training protocols and optimization approaches tailored to LSTM models, with the goal of attaining high levels of accuracy and dependability in deepfake detection.
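The core idea described above — feeding per-frame features through an LSTM so the network can pick up temporal inconsistencies across frames — can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the weights are random rather than trained, the frame "features" are placeholder vectors (in practice they would come from a CNN feature extractor applied to each frame), and all names (`LSTMCell`, `classify_sequence`) are hypothetical.

```python
import math
import random


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class LSTMCell:
    """Minimal single-layer LSTM cell in pure Python (illustrative only)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        n = input_size + hidden_size
        # One weight matrix and bias per gate:
        # input (i), forget (f), candidate cell (g), output (o).
        self.W = {g: [[rng.uniform(-0.1, 0.1) for _ in range(n)]
                      for _ in range(hidden_size)] for g in "ifgo"}
        self.b = {g: [0.0] * hidden_size for g in "ifgo"}
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = x + h  # concatenated [x_t, h_{t-1}]

        def lin(gate):
            return [sum(w * v for w, v in zip(row, z)) + b
                    for row, b in zip(self.W[gate], self.b[gate])]

        i = [sigmoid(v) for v in lin("i")]          # input gate
        f = [sigmoid(v) for v in lin("f")]          # forget gate
        g = [math.tanh(v) for v in lin("g")]        # candidate cell state
        o = [sigmoid(v) for v in lin("o")]          # output gate
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new


def classify_sequence(cell, frames, w_out, b_out):
    """Run per-frame feature vectors through the LSTM and score the clip
    from the final hidden state (a trained head would output P(fake))."""
    h = [0.0] * cell.hidden_size
    c = [0.0] * cell.hidden_size
    for x in frames:  # temporal loop: state carries context between frames
        h, c = cell.step(x, h, c)
    logit = sum(w * v for w, v in zip(w_out, h)) + b_out
    return sigmoid(logit)


# Toy usage: 8 "frames", each a 4-dimensional placeholder feature vector.
cell = LSTMCell(input_size=4, hidden_size=3)
frames = [[0.1 * t, 0.2, -0.1, 0.05] for t in range(8)]
score = classify_sequence(cell, frames, w_out=[0.5, -0.3, 0.2], b_out=0.0)
print(0.0 < score < 1.0)  # a valid probability-like score
```

The design point the sketch illustrates is the one the abstract emphasizes: because the hidden and cell states persist across the frame loop, the classifier's output depends on the *order* of frames, which is what allows an LSTM to flag temporal irregularities (flicker, unstable blending) that a per-frame classifier would miss.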
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.