A Survey on Transformer-based Models in Code Summarization
DOI: https://doi.org/10.47392/IRJAEH.2025.0103

Keywords: Code summarization, natural language descriptions, transformer-based models, BERT, CodeT5, CodeBERT, performance metrics, automated code documentation

Abstract
Code summarization in software engineering is the task of automatically generating short natural language descriptions of source code. It has been advanced substantially by transformer-based deep learning models such as CodeBERT and CodeT5, often pre-trained on large code corpora such as CodeSearchNet. These approaches have significantly improved automated code documentation and the quality of generated summaries. This paper provides an in-depth review of recent transformer-based models for code summarization, covering their architectures, performance metrics, and applications. By analyzing the strengths and weaknesses of these models, we aim to offer insights that can guide future research and development in automated code documentation, and to provide a foundation for further advances in code summarization technologies.
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.