Poster: Unit Testing Past vs. Present: Examining LLMs' Impact on Defect Detection and Efficiency

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings › peer-review

Abstract

The integration of Large Language Models (LLMs), such as ChatGPT and GitHub Copilot, into software engineering workflows has shown potential to enhance productivity, particularly in software testing. This paper investigates whether LLM support improves defect detection effectiveness during unit testing. Building on prior studies comparing manual and tool-supported testing, we replicated and extended an experiment in which participants, supported by LLMs, wrote unit tests for a Java-based system with seeded defects within a time-boxed session. Comparing LLM-supported and manual testing, the results show that LLM support significantly increases the number of unit tests generated, the defect detection rate, and overall testing efficiency. These findings highlight the potential of LLMs to improve testing and defect detection outcomes, providing empirical insights into their practical application in software testing.
Original language: English
Title of host publication: 18th IEEE International Conference on Software Testing, Verification and Validation (ICST) 2025, Naples, Italy, March 31 - April 4, 2025
Number of pages: 4
DOIs
Publication status: Published - 20 May 2025

Fields of science

  • 102020 Medical informatics
  • 102022 Software development
  • 102006 Computer supported cooperative work (CSCW)
  • 102027 Web engineering
  • 502050 Business informatics
  • 102040 Quantum computing
  • 102016 IT security
  • 503015 Subject didactics of technical sciences
  • 509026 Digitalisation research
  • 102015 Information systems
  • 102034 Cyber-physical systems
  • 502032 Quality management
  • 211928 Systems engineering

JKU Focus areas

  • Digital Transformation
