Automated Hallucination Detection and Mitigation in Large Language Models

Authors

  • S Srinivasan, Professor of Practice, Department of Artificial Intelligence and Data Science, SRM Valliammai Engineering College, Kattankulathur, Tamil Nadu, India.
  • R Manjushree, UG Scholar, Department of Artificial Intelligence and Data Science, SRM Valliammai Engineering College, Kattankulathur, Tamil Nadu, India.
  • P Harshini, UG Scholar, Department of Artificial Intelligence and Data Science, SRM Valliammai Engineering College, Kattankulathur, Tamil Nadu, India.
  • G V Jeeshitha, UG Scholar, Department of Artificial Intelligence and Data Science, SRM Valliammai Engineering College, Kattankulathur, Tamil Nadu, India.

DOI:

https://doi.org/10.47392/IRJAEH.2025.0104

Keywords:

AI Hallucination Detection, Fact-Checking AI, External API Integration, Semantic Analysis, Scalability & Efficiency, AI Response Validation, IEEE

Abstract

The system is designed to improve AI credibility by providing reliable fact-checking for applications such as AI-powered customer service, legal consultation, and medical diagnosis verification. It does so by applying advanced Natural Language Processing (NLP) methods, integrating external APIs for real-time information retrieval, and using machine learning models for accurate analysis. The system operates through a structured four-stage pipeline: data collection, which gathers information from credible sources; preprocessing, where data is cleaned, standardized, and organized for efficient processing; model training, where the AI is refined on extensive datasets to improve accuracy and adaptability; and real-time evaluation, which verifies responses dynamically before they reach users. With a modular architecture, the system prioritizes scalability and efficiency, enabling seamless data management, precise accuracy assessment, and an intuitive user interface. A key feature is its ability to validate AI-generated responses against trusted, authoritative data sources, minimizing misinformation and ensuring factual correctness. This validation strengthens AI reliability and user confidence in automated decision-making while upholding transparency and accountability across essential industries. The system is also built to integrate seamlessly into a range of AI-driven applications, offering a responsive interface that balances thorough verification with processing speed. By reinforcing AI trustworthiness and curbing the spread of incorrect information, this solution promotes responsible AI adoption in critical fields such as automated customer support, healthcare, legal compliance, and financial analysis.
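To make the four-stage pipeline concrete, the sketch below walks a single response through collection, preprocessing, scoring, and real-time evaluation. It is illustrative only: the paper does not publish code, so every name here (TRUSTED_FACTS, agreement_score, SIMILARITY_THRESHOLD) is an assumption, and simple token overlap stands in for the trained semantic model and external APIs the abstract describes.

import string
from dataclasses import dataclass

# Assumed cutoff for treating a claim as "supported"; the paper does not
# report its actual decision threshold.
SIMILARITY_THRESHOLD = 0.8

# Hypothetical stand-in for the trusted, authoritative sources the system
# validates against; a real deployment would query external APIs in real time.
TRUSTED_FACTS = [
    "the earth orbits the sun",
    "water boils at 100 degrees celsius at sea level",
]

@dataclass
class Verdict:
    response: str
    supported: bool
    evidence: str

def collect_evidence() -> list[str]:
    """Stage 1: gather candidate evidence from credible sources.
    Here a static list stands in for real-time API retrieval."""
    return TRUSTED_FACTS

def preprocess(text: str) -> str:
    """Stage 2: clean and standardize text (lowercase, strip punctuation)."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(cleaned.split())

def agreement_score(claim: str, fact: str) -> float:
    """Stage 3 stand-in: the paper's trained model would score semantic
    claim/evidence agreement; Jaccard token overlap is used here for brevity."""
    a, b = set(claim.split()), set(fact.split())
    return len(a & b) / max(len(a | b), 1)

def evaluate(response: str) -> Verdict:
    """Stage 4: verify a response dynamically before it reaches the user."""
    claim = preprocess(response)
    best_fact, best_score = "", 0.0
    for fact in collect_evidence():
        s = agreement_score(claim, preprocess(fact))
        if s > best_score:
            best_fact, best_score = fact, s
    if best_score >= SIMILARITY_THRESHOLD:
        return Verdict(response, True, best_fact)
    return Verdict(response, False, "no supporting evidence found")

if __name__ == "__main__":
    print(evaluate("The Earth orbits the Sun."))          # supported=True
    print(evaluate("Water freezes at 50 degrees celsius."))  # supported=False

The second check fails because its token overlap with any trusted fact is low; a trained semantic model at stage 3, as the abstract describes, would also catch paraphrases and contradictions that token overlap alone cannot.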

Published

2025-03-22

How to Cite

Automated Hallucination Detection and Mitigation in Large Language Models. (2025). International Research Journal on Advanced Engineering Hub (IRJAEH), 3(03), 746-754. https://doi.org/10.47392/IRJAEH.2025.0104
