International Symposium on Empirical Software Engineering and Measurement

Key Speakers

Lionel Briand - Simula Research Laboratory and University of Oslo (Norway)

A Critical Analysis of Empirical Research in Software Testing

In the foreseeable future, software testing will remain one of the best tools we have at our disposal to ensure software dependability. Empirical studies are crucial to software testing research as a means to compare techniques and improve testing practice. In fact, there is no other way to assess the cost-effectiveness of testing techniques, since all of them are, to various extents, based on heuristics and simplifying assumptions.

However, when empirically studying the cost and fault-detection effectiveness of a testing technique, a number of validity issues arise. First of all, measuring fault-detection effectiveness and cost is in itself difficult. For example, to ensure construct validity, what cost components should we account for when comparing techniques? In terms of external validity, what populations of faults, systems and developers do we target, and how do we characterize them? In terms of conclusion validity, how do we account for the random variation inherent to all test techniques? Indeed, test techniques are heuristics and cannot guarantee the detection of any particular type of fault. Many different test suites can usually satisfy a given coverage criterion, yet yield different costs and fault-detection effectiveness.

Further, there are many ways in which empirical studies can be performed, ranging from simulations to controlled experiments with human subjects. What are the strengths and drawbacks of the various approaches? How do they complement each other? What is the best option under which circumstances?

This keynote will present a critical analysis of empirical research in software testing and will attempt to highlight and clarify the main issues in a structured and practical manner.



Gregg Rothermel - University of Nebraska (USA)

Helping End-User Programmers "Engineer" Software: An Opportunity for Empirical Researchers

While much of the software that people depend on is written by professional software engineers, increasingly, important applications are being created by non-professional (end-user) programmers. Using tools such as spreadsheet environments and web authoring systems, these programmers are creating software that supports significant activities and informs decisions. Such software needs to work dependably and increase user productivity, but evidence shows that it frequently does not. For example, studies have shown that a large percentage of the spreadsheets created by end users contain faults, and data suggests that time spent maintaining web macros may actually impede their users' overall efforts.

In recent years researchers have begun to address this problem, considering various approaches to adapting software engineering techniques to the realm of end-user programming. For example, researchers have sought ways to help end users test and debug spreadsheets, and to increase productivity in web macros by combining them with various software engineering devices. To make progress in this area, researchers are turning to empirical studies to investigate new approaches, understand the factors that influence them, and learn more about end-user programmers themselves.

In this talk I will present recent work being done in end-user software engineering, with a particular focus on the state of the art of empirical research in the area. I will show that there is a pressing need for further empirical work in this context, and that there are interesting questions that researchers from the ESEM community could help to answer. For example, how are end users different from professional programmers, and how does that affect how we conduct the research? (Are there assumptions we make when doing studies with programmers that do not hold with end users? What are the threats to validity when, say, we use CS students as subjects in end-user studies?) As another example, how are end users different from each other, and how does that affect study results? (The end-user community is much more diverse than the professional programmer community. What are the important context variables one needs to capture when studying them?) A concerted effort by the ESEM community on such issues could result in a substantial impact on society as a whole, and on the everyday lives of many people.
