Fine-Tuning a Transformer Model for Sentiment Analysis and Summarization of ISO-Certified Customer Satisfaction Surveys
DOI:
https://doi.org/10.69478/BEST2025v1n2a025

Keywords:
Natural language processing, BERT, LSTM, Logistic Regression, Sentiment Analysis

Abstract
Understanding customer sentiment is an essential part of improving service quality, especially for organizations that rely on regular feedback, such as ISO-certified institutions. While structured survey questions provide valuable data, it is often the open-ended comments that offer the most insight into customer experiences. However, analyzing large volumes of written feedback manually is time-consuming and inconsistent. This study explores the use of machine learning and deep learning models to automate sentiment analysis of responses gathered from customer satisfaction surveys. The research compared the performance of three models: BERT (Bidirectional Encoder Representations from Transformers), LSTM (Long Short-Term Memory), and Logistic Regression. To gain a clearer understanding of customers' comments and suggestions, these models were used to analyze their feedback. The dataset first went through a series of pre-processing steps, including handling missing values and cleaning the comments, to ensure quality output. Among the results, BERT significantly outperformed the other models, achieving the highest overall accuracy of 84%, while LSTM followed with 78% and Logistic Regression lagged behind at 39%. These findings highlight the value of transformer-based models such as BERT for understanding complex, unstructured customer feedback and suggest that such models can play a meaningful role in decision-making and service improvement.
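
The sketch below illustrates the kind of pipeline the abstract describes: dropping rows with missing values, cleaning open-ended survey comments, and fine-tuning a BERT classifier for three sentiment classes with the Hugging Face Trainer API. It is a minimal, hypothetical example; the file name, column names, base checkpoint, and hyperparameters are assumptions for illustration, not the authors' actual configuration or code.

```python
import re

import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)


def clean_comment(text: str) -> str:
    """Lower-case a comment, strip URLs, and collapse extra whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip()


# Hypothetical survey export with an open-ended "comment" column and a
# manually assigned "sentiment" label (0 = negative, 1 = neutral, 2 = positive).
df = pd.read_csv("survey_responses.csv")          # assumed file name
df = df.dropna(subset=["comment", "sentiment"])   # handle missing values
df["comment"] = df["comment"].map(clean_comment)  # clean the free-text comments

# Build a train/test split for fine-tuning and evaluation.
dataset = Dataset.from_pandas(df[["comment", "sentiment"]])
dataset = dataset.rename_column("sentiment", "labels")
dataset = dataset.train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)


def tokenize(batch):
    return tokenizer(batch["comment"], truncation=True,
                     padding="max_length", max_length=128)


dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-sentiment",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
print(trainer.evaluate())  # reports loss; accuracy needs a compute_metrics hook
```

A Logistic Regression baseline of the kind compared in the study could, for instance, be built on TF-IDF features of the same cleaned comments (e.g., scikit-learn's TfidfVectorizer and LogisticRegression), which makes the gap between a bag-of-words baseline and a fine-tuned transformer easy to measure on the same split.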

License
Copyright (c) 2025 Jayson A. Daluyon, Ronel J. Bilog (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.