
LLMs and Stack Overflow discussions: reliability, impact, and challenges

Leuson Mario Pedro Da Silva, Jordan Samhi and Foutse Khomh

Journal article (2025)

Open-access document, available in PolyPublie and from the official publisher
Terms of use: Creative Commons: Attribution (CC BY)

Abstract

Since its release in November 2022, ChatGPT has shaken up Stack Overflow, the premier platform for developers’ queries on programming and software development. Demonstrating an ability to generate instant, human-like responses to technical questions, ChatGPT has ignited debates within the developer community about the evolving role of human-driven platforms in the age of generative AI. Two months after ChatGPT’s release, Meta released its answer with its own Large Language Model (LLM) called LLaMA: the race was on. We conducted an empirical study analyzing questions from Stack Overflow and using these LLMs to address them. This way, we aim to quantify the reliability of LLMs’ answers and their potential to replace Stack Overflow in the long term; identify and understand why LLMs fail; measure users’ activity evolution with Stack Overflow over time; and compare LLMs together. Our empirical results are unequivocal: ChatGPT and LLaMA challenge human expertise, yet do not outperform it for some domains, while a significant decline in user posting activity has been observed. Furthermore, we also discuss the impact of our findings regarding the usage and development of new LLMs and provide guidelines for future challenges faced by users and researchers.

Keywords

Supplementary material:
Department: Department of Computer Engineering and Software Engineering
Funding agencies: FRQ, NSERC, CIFAR, Canada Research Chairs Program
PolyPublie URL: https://publications.polymtl.ca/66641/
Journal title: Journal of Systems and Software (vol. 230)
Publisher: Elsevier BV
DOI: 10.1016/j.jss.2025.112541
Official URL: https://doi.org/10.1016/j.jss.2025.112541
Deposited: 22 Jul. 2025 12:29
Last modified: 03 Feb. 2026 19:47
Cite in APA 7: Da Silva, L. M. P., Samhi, J., & Khomh, F. (2025). LLMs and Stack Overflow discussions: reliability, impact, and challenges. Journal of Systems and Software, 230, 112541 (21 pages). https://doi.org/10.1016/j.jss.2025.112541
