A PEFT (Parameter Efficient Fine-Tuning) Framework Enabling Context Augmentation in LLMs
DOI:
https://doi.org/10.47392/IRJAEH.2025.0617

Keywords:
Large Language Models (LLMs), LoRA, Retrieval-Augmented Generation

Abstract
This paper explores domain-specific fine-tuning of large language models (LLMs) to enhance their contextual understanding in specialised tasks. Mistral-7B was selected as the base model due to its open-source availability, efficiency, and performance. Training was conducted using PDF-derived datasets, converted into structured formats suitable for model ingestion. The parameter-efficient fine-tuning technique LoRA was employed, allowing training on consumer-grade hardware. The fine-tuned models were deployed in a Retrieval-Augmented Generation (RAG) based chatbot system. Evaluation was carried out through human judgment, comparing outputs from the fine-tuned and base models using identical prompts. Results indicated that while the trained models effectively learned specific domain knowledge and keywords, they sometimes lacked deeper conceptual linkage between ideas. Nonetheless, the approach demonstrated a practical path toward resource-efficient customisation of LLMs for domain-specific applications.
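The LoRA technique named in the abstract freezes the base model's weights and trains only a low-rank update, which is what makes fine-tuning feasible on consumer-grade hardware. A minimal numeric sketch of the idea in pure Python (toy dimensions chosen for illustration; the paper's actual rank, scaling, and target layers are not stated here):

```python
# LoRA: effective weight W' = W + (alpha / r) * B @ A,
# where A is (r x d_in), B is (d_out x r), and r << min(d_in, d_out).
# Only A and B are trained; W stays frozen.

def matmul(X, Y):
    """Plain list-of-lists matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha):
    r = len(A)                      # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)            # (d_out x d_in) low-rank update
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Toy example: d_out = 2, d_in = 3, rank r = 1.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]               # frozen base weight
A = [[1.0, 2.0, 3.0]]               # 1 x 3 trainable factor
B = [[0.5], [0.25]]                 # 2 x 1 trainable factor
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
# Only r * (d_in + d_out) = 5 values are trained instead of the
# full d_out * d_in = 6 base weights; the saving grows with model size.
```

In a 7B-parameter model such as Mistral-7B, the trainable fraction under LoRA is typically well below one percent of the base weights, which is why training fits on a single consumer GPU.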
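The RAG deployment described above pairs the fine-tuned model with a retriever that supplies relevant document chunks at query time. A minimal retrieve-then-prompt sketch, assuming keyword-overlap scoring for brevity (real systems, including possibly the one in this paper, typically use embedding similarity instead):

```python
# Toy RAG front end: rank stored chunks by token overlap with the
# query, then prepend the best match to the model prompt.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, chunks):
    """Return the chunk with the highest Jaccard overlap with the query."""
    q = tokenize(query)
    def score(chunk):
        c = tokenize(chunk)
        return len(q & c) / len(q | c)
    return max(chunks, key=score)

def build_prompt(query, chunks):
    context = retrieve(query, chunks)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical PDF-derived chunks, standing in for the paper's dataset.
chunks = [
    "LoRA trains low-rank adapter matrices on top of frozen weights.",
    "Mistral-7B is an open-source seven-billion-parameter model.",
]
prompt = build_prompt("What does LoRA train?", chunks)
```

The assembled prompt is then passed to the fine-tuned model; grounding answers in retrieved text is what lets the chatbot cite domain material the base model never saw.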
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.