A PEFT (Parameter Efficient Fine-Tuning) Framework Enabling Context Augmentation in LLMs

Authors

  • Navaneeth D, Student, International Institute of Information Technology, Electronic City Phase–1, Bengaluru, Karnataka, India.
  • Mijaz Mukundan, Scientist/Engineer, MSSG, Vikram Sarabhai Space Centre, Thiruvananthapuram, Kerala, India.
  • Jishy Samuel, Scientist/Engineer, MSSG, Vikram Sarabhai Space Centre, Thiruvananthapuram, Kerala, India.

DOI:

https://doi.org/10.47392/IRJAEH.2025.0617

Keywords:

Large Language Models (LLMs), LoRA, Retrieval-Augmented Generation

Abstract

This paper explores domain-specific fine-tuning of large language models (LLMs) to enhance their contextual understanding in specialised tasks. Mistral-7B was selected as the base model due to its open-source availability, efficiency, and performance. Training was conducted using PDF-derived datasets, converted into structured formats suitable for model ingestion. The parameter-efficient fine-tuning technique LoRA was employed, allowing training on consumer-grade hardware. The fine-tuned models were deployed in a Retrieval-Augmented Generation (RAG) based chatbot system. Evaluation was carried out through human judgment, comparing outputs from the fine-tuned and base models using identical prompts. Results indicated that while the trained models effectively learned specific domain knowledge and keywords, they sometimes lacked deeper conceptual linkage between ideas. Nonetheless, the approach demonstrated a practical path toward resource-efficient customisation of LLMs for domain-specific applications.
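The parameter efficiency the abstract attributes to LoRA comes from freezing the pretrained weight matrix and training only a low-rank update. A minimal NumPy sketch of that update is shown below; the matrix dimensions, rank, and scaling factor are illustrative assumptions, not the paper's actual Mistral-7B configuration.

```python
import numpy as np

def lora_param_counts(d_in, d_out, r):
    """Trainable parameters: full fine-tuning vs. a rank-r LoRA adapter."""
    full = d_in * d_out          # every entry of W is trainable
    lora = r * (d_in + d_out)    # only the factors A (r x d_in) and B (d_out x r)
    return full, lora

# Illustrative layer size and LoRA hyperparameters (hypothetical values)
d_in, d_out, r, alpha = 1024, 1024, 8, 16
rng = np.random.default_rng(0)

# Frozen pretrained weight: never updated during fine-tuning
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors; B starts at zero so that, before any
# training, the adapted layer is identical to the base layer
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

# LoRA forward weight: W' = W + (alpha / r) * B @ A
W_adapted = W + (alpha / r) * (B @ A)

full, lora = lora_param_counts(d_in, d_out, r)
print(f"full: {full:,} params; LoRA: {lora:,} params "
      f"({100 * lora / full:.2f}% trainable)")  # roughly 1.56% for these sizes
```

At rank 8 the adapter trains well under 2% of the layer's parameters in this toy setting, which is the mechanism that lets the paper fit training on consumer-grade hardware.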

Published

2025-12-26

How to Cite

A PEFT (Parameter Efficient Fine-Tuning) Framework Enabling Context Augmentation in LLMs. (2025). International Research Journal on Advanced Engineering Hub (IRJAEH), 3(12), 4214-4217. https://doi.org/10.47392/IRJAEH.2025.0617
