International Symposium on Empirical Software Engineering and Measurement

Key Speakers

Harald Hoenninger

Vice President Corporate Research, Robert Bosch GmbH (Germany)

Title: Using empirical methods to improve industrial technology transfer

This keynote establishes a link between existing trends in the embedded (automotive) domain and the need for significant empirical evaluations of development methods.

Software drives many innovations in cars, such as new functions that increase passenger safety or convenience. A side effect is the growing complexity of embedded systems: the number of systems and functions increases along with their internal complexity. To cope with this, Bosch is investing in the development of engineering methods and their evaluation.

The audience will learn about:

  • The role of Bosch Corporate Research as a catalyst for new technologies
  • Experiences with processes and methods
  • Activities in the area of empirical research and measurement

The keynote will conclude with factors that are critical for success when using empirical methods in industrial settings and will provide an outlook on how empirical research can support industrial technology transfer.

Presenter

Mr. Harald Hoenninger is Vice President of the Bosch Corporate Research division. He is responsible for the pre-development of software-intensive systems and for software, hardware, and systems engineering methods.

He has over 25 years of experience as an engineer and manager in manufacturing and software development projects, specifically in engine management systems and software for electronic control units. He initiated metrics programs for embedded software and CMM(I)-based process improvement for projects and departments.

Mr. Hoenninger coordinates corporate-wide improvement activities at Bosch with the focus on processes, software architectures, and employee competencies.

His professional interests include change management, international distributed development, and team development.

Mary Shaw

Institute for Software Research, Carnegie Mellon University (USA)

Title: Empirical Challenges in Ultra Large Scale Systems

Ultra Large Scale (ULS) systems are complex software-intensive systems that are deeply embedded in a business and social context with many and diverse stakeholders. They are qualitatively more complex and challenging than conventional software-intensive systems or even "systems of systems". ULS systems, like other complex systems, are large along many dimensions, but the special character of "ultra-large scale" arises from their decentralized operation and control, their conflicting and even unknowable requirements, their continuous evolution requiring integration of heterogeneous and inconsistent elements, the indistinct boundary between the system and its users, and their consequent need for new forms of governance. The capability of a ULS system arises not solely from its software and hardware, but from the things that stakeholders with different objectives do with and to the system.

The "ultra-large scale" properties of these systems will pose new challenges for empirical investigation.

Conventional wisdom about developing software systems calls for showing that the system's implementation satisfies its specification (ideally, a formal specification). Taken in isolation, however, the specification can speak only to the system itself. We are coming to realize that a system should be evaluated against its users' needs and expectations: a system that satisfies its specification in isolation may serve one user well but another not at all, for example because they have different needs for latency or precision, or because they depend on different properties of the system. Evaluating the value that systems deliver to specific users requires identifying the users' objectives, selecting appropriate properties to evaluate, and establishing appropriate mappings between the two.

But even this is insufficient, because it does not consider the ways that end users will behave when using the system. The design, development, and evaluation of ULS systems must consider the ways that users will use, adapt, improve, augment, and even abuse the system. Users of ULS systems are usually not professional computer scientists with analytic training. Conventional wisdom in computer science calls for reasoning about software with high-ceremony evidence such as formal verification, systematic testing, or empirical data on performance in the field. Users of ULSs, however, are often unable to carry out formal analysis and do not have access to data about testing or field performance. They base their reasoning on low-ceremony evidence such as reputation, prior experience, and anecdotes. If empirical studies are to handle the behavior of systems in their users' context, we must learn to integrate low-ceremony evidence in our reasoning processes.

International Symposium on Empirical Software Engineering and Measurement - Kaiserslautern