A Transformer-Based Semantic Verification Model for Detecting Online Misinformation
DOI: https://doi.org/10.47392/IRJAEH.2026.0007

Keywords: fake news detection, transformer models (BERT), bilingual NLP, explainable AI, ROC curve, precision, recall, F1-score

Abstract
Online misinformation spreads faster than manual fact-checking can keep up, prompting a push for automated, transparent fake news detectors. This study introduces a bilingual (Kannada-English) system built on a fine-tuned BERT model that not only flags false stories but also explains its decisions using sentiment analysis, domain credibility scores, and keyword-level reasoning. To balance the data, the authors started with about 12,000 English and Kannada articles and expanded the set to roughly 14,000 through synthetic augmentation, mitigating class-imbalance problems. The model delivers strong results: around 94% accuracy, with precision, recall, and F1 scores all in the mid-90s, and an AUC of 0.96 confirming robust discrimination between real and fake content. What sets this work apart from many existing tools is its bilingual capability and built-in explainability, addressing the typical monolingual, "black box" limitations of earlier detectors. On top of the core engine, users get two extras that operate independently of the classification pipeline: a Wikipedia viewer for quick background checks and a meme generator for lighter engagement.
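The reported evaluation metrics (accuracy, precision, recall, F1, and AUC) follow standard definitions for binary classification. A minimal sketch of how they are computed, using hypothetical labels and scores (the paper's actual predictions are not reproduced here):

```python
# Hypothetical ground-truth labels (1 = fake, 0 = real) and model scores;
# these values are for illustration only, not the paper's data.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.92, 0.10, 0.85, 0.40, 0.20, 0.65, 0.77, 0.05]  # model's P(fake)
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]           # 0.5 threshold

# Confusion-matrix counts
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

# AUC as the probability that a randomly chosen fake article
# receives a higher score than a randomly chosen real one
pos = [s for t, s in zip(y_true, y_score) if t == 1]
neg = [s for t, s in zip(y_true, y_score) if t == 0]
auc = sum(1 for p in pos for n in neg if p > n) / (len(pos) * len(neg))

print(f"acc={accuracy:.2f} prec={precision:.2f} "
      f"rec={recall:.2f} f1={f1:.2f} auc={auc:.4f}")
```

In practice these would be computed over the held-out test split of the ~14,000-article dataset; libraries such as scikit-learn provide the same metrics directly.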
License
Copyright (c) 2026 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.