A Survey on Transformer-based Models in Code Summarization

Authors

  • Suraj Nate, Dept. of AI&DS, Rajiv Gandhi Institute of Tech., Mumbai, Maharashtra, India
  • Om Patil, Dept. of AI&DS, Rajiv Gandhi Institute of Tech., Mumbai, Maharashtra, India
  • Shreenidhi Medar, Dept. of AI&DS, Rajiv Gandhi Institute of Tech., Mumbai, Maharashtra, India
  • Dr. Jyoti Deshmukh, Dept. of AI&DS, Rajiv Gandhi Institute of Tech., Mumbai, Maharashtra, India

DOI:

https://doi.org/10.47392/IRJAEH.2025.0103

Keywords:

Code summarization, natural language descriptions, transformer-based models, BERT, CodeT5, CodeBERT, performance metrics, automated code documentation

Abstract

Code summarization in software engineering is the task of automatically generating short natural language descriptions for source code. Recent progress on this task has been driven by transformer-based deep learning models and resources such as CodeBERT, CodeT5, and CodeSearchNet, which have substantially improved automated code documentation and the quality of generated summaries. This paper provides an in-depth review of the latest transformer-based models for code summarization in terms of their architectures, performance metrics, and applications. By analyzing the strengths and weaknesses of these models, we aim to offer insights that can guide future research and development in automated code documentation, and to provide a foundation for further advances in code summarization technologies.
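The abstract refers to performance metrics for generated summaries; in the code summarization literature, BLEU is the metric most commonly reported. As an illustration only (this is a generic sketch, not the evaluation code used by any of the surveyed models), sentence-level BLEU-4 with clipped n-gram precision and a brevity penalty can be computed as follows:

```python
import math
from collections import Counter

def modified_precision(cand, ref, n):
    """Clipped n-gram precision: candidate n-gram counts are capped
    by how often each n-gram appears in the reference."""
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return clipped / sum(cand_ngrams.values())

def bleu(candidate, reference, max_n=4):
    """Unsmoothed sentence-level BLEU-4 between a generated summary
    and a single reference summary (whitespace-tokenized)."""
    cand, ref = candidate.split(), reference.split()
    precisions = [modified_precision(cand, ref, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0  # any zero precision drives the geometric mean to zero
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages summaries shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_avg)
```

Published evaluations typically use smoothed corpus-level variants (e.g. smoothed BLEU-4 as in the CodeXGLUE benchmark), so absolute scores from a sketch like this are not directly comparable to reported numbers.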

Published

2025-03-22

How to Cite

A Survey on Transformer-based Models in Code Summarization. (2025). International Research Journal on Advanced Engineering Hub (IRJAEH), 3(03), 740-745. https://doi.org/10.47392/IRJAEH.2025.0103
