Explainable Multimodal Deep Learning in Healthcare: A Survey of Current Approaches
DOI: https://doi.org/10.47392/IRJAEH.2025.0151

Keywords: Sensor Data Analysis and Data-driven Healthcare, Medical Imaging, Explainable AI (XAI), Disease Prediction, Electronic Health Records (EHR), Deep Learning, Medical Data Fusion, Healthcare AI, Multimodal Learning

Abstract
Multimodal data integration is widely regarded as the next transformative step for modern healthcare, improving patient outcomes and clinical decision-making. Multimodal data sets that combine medical images, electronic health records, wearable sensor data, genetic information, and behavioral insights give a much clearer picture of the complexity of patient health. Traditional data analysis methods struggle to handle such complex and diverse data. This paper proposes a multimodal deep learning framework that exploits feature extraction, optimal feature selection, and explainable AI (XAI) techniques to detect and predict diseases. The proposed system uses data fusion techniques to efficiently combine diverse data sources, improving diagnostic accuracy and reliability. Furthermore, by applying explainable AI techniques, the model ensures transparent decision-making and helps clinicians understand the contribution of each modality to the diagnostic result. A Python implementation of the framework yields promising results for disease classification and prediction, demonstrating the potential of AI-driven multimodal healthcare to improve medical diagnosis and individualized therapy.
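Since the abstract names the pipeline stages but not their implementation, the following is a minimal sketch only: it assumes PyTorch, a late-fusion architecture over pre-extracted image and EHR feature vectors, and a simple gradient-saliency score as a stand-in for the paper's unspecified XAI method. The class name MultimodalFusionNet, the feature dimensions, and the two-class setup are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumptions noted above), not the authors' code.
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    def __init__(self, img_dim=512, ehr_dim=64, n_classes=2):
        super().__init__()
        # One encoder branch per modality, fused by concatenation (late fusion)
        self.img_branch = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.ehr_branch = nn.Sequential(nn.Linear(ehr_dim, 128), nn.ReLU())
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, img_feat, ehr_feat):
        fused = torch.cat([self.img_branch(img_feat),
                           self.ehr_branch(ehr_feat)], dim=1)
        return self.classifier(fused)

model = MultimodalFusionNet()
img = torch.randn(1, 512, requires_grad=True)  # stand-in for a CNN image embedding
ehr = torch.randn(1, 64, requires_grad=True)   # stand-in for encoded EHR features
logits = model(img, ehr)
logits[0, logits.argmax()].backward()          # gradient of the predicted-class score

# Per-modality gradient norms as a crude per-modality contribution estimate
print("image-modality contribution:", img.grad.norm().item())
print("EHR-modality contribution:  ", ehr.grad.norm().item())

In practice the gradient-norm comparison would be replaced by a dedicated attribution method (e.g., SHAP or integrated gradients) applied per modality, which is the kind of modality-level transparency the abstract describes.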
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.