Evaluating Object-Oriented Design

Robert Biddle,

Victoria U., New Zealand

robert@mcs.vuw.ac.nz

Rick Mercer,

U. of Arizona, USA

mercer@cs.arizona.edu

Eugene Wallingford,

U. of Northern Iowa, USA

wallingf@cs.uni.edu

Abstract

This paper reports on the OOPSLA '98 workshop on evaluation of object-oriented design. The workshop was part of a series exploring an earlier emphasis on design in computer science education, in which the issue of evaluation had been identified as a key challenge. The workshop considered a wide variety of issues, developed a model of the design evaluation process, and identified alternative strategies for the key parts of the process. Much work and exploration remains, but there was a widespread feeling that the workshop made significant progress in meeting the challenge of design evaluation.

1 Overview

Object-oriented programming is now the basis for many introductory courses in programming. But while it seems students successfully learn program implementation in such courses, it is less clear whether they learn program design. In 1996 we organized a workshop to address how to better teach OO design in first-year computer science courses in universities and colleges [1]. In 1997 we organized a follow-up workshop on resources [2]. In 1998 we organized a third workshop to concentrate on a specific issue: how to evaluate OO design. This is an important issue for software designers, and one of practical importance to educators. We must be able to explain design quality to our students, and help them understand and distinguish what constitutes good or bad design.

Our workshops focus on object design, rather than implementation, and on the different issues involved in teaching and learning object design. We are striving to involve viewpoints and ideas from educators, learners, and industry in a cooperative effort. There are many issues to address, including the nature of good design; how it can be taught, learned, and assessed; and how tools can help. Our intention is to help educators perform their role more successfully. We explicitly avoid language wars, and specifically welcome people from both academia and industry to contribute their perspectives. Our initial goal in 1996 was to investigate general experience and ideas related to effective early teaching and learning of object design. We found widespread agreement that an early emphasis on design would have advantages, but there was concern that the idea presented some serious challenges. Our 1998 goal was to address one of these challenges, and explore how to evaluate OO design.

In preparation for the workshop, we had invited position papers on all aspects of the topic. Our agenda was to begin with brief presentations of these papers, then use the presentations as the basis for working discussions. Together, the presentations and discussion would fill the morning of the workshop. In the afternoon, we planned to tackle a design case study, taken from Rick Mercer’s work [3], as a way of focusing on design and evaluation issues. Finally, we planned to use the late afternoon to conclude the workshop by organizing our thoughts for this report.

In fact, our workshop turned out rather differently. We began as planned, but by late morning we chose to change the direction of the afternoon. The presentations and discussions were developing a structure as they proceeded, and we decided to pursue that structure. In the sections below, we report on the results of our workshop, and also document the workshop process that proved successful.

2 Presentations and Discussions

The first session of the day was to be an opportunity for everyone to introduce themselves and their views by recapping their position papers. Our intention was to avoid formal presentations and concentrate instead on the discussion following each introduction. As each person presented their position, and in the discussion that followed, we took care to identify the key issues that arose.

Over the course of the presentations, a set of issues arose that shaped the workshop. Two of the first presentations, by Jane Chandler and Ed Epp, focused on the basics of evaluation and on successful classroom techniques and experiences. One issue that arose was whether we should focus on evaluation as student assessment or in some wider sense, and there was general support for taking the wider view. Philip East's presentation followed this discussion; he suggested that it would be desirable to address the fundamentals of design and evaluation, rather than concern ourselves with various specific problems and tactical solutions.

Steven Fraser and Priya Marsonia then presented an industry perspective, outlining aspects of design that might be missed by students. In particular, Priya presented a slide of "desired characteristics" of good design that made it clear that good design was not a single concept. We dwelled on this slide for some time, and this point was probably the pivot of our early discussion.

The next two presentations took a rather more theoretical viewpoint, and suggested general approaches to design evaluation. Federico Balaguer and Alejandra Garrido explained their approach of concentrating on how reusability is related to variability and extensibility, which involves managing various potentially conflicting forces. Eugene Wallingford then outlined the work he has been involved with [5], using patterns at an early stage to explain design elements. He also outlined how a pattern language might be used both to help with design and with design evaluation.

The final presentation was given by Marianna Sipos, who brought us back to teaching and learning practicalities, talking about how to address such issues as students' lack of domain experience and the impact of the languages used to implement designs.

At this point in the workshop we had a wide-ranging discussion. The next item on the agenda was the afternoon design case study to be led by Rick Mercer. We did have extensive informal discussions about various design case studies, but we chose to change our agenda and not work through a single case study in detail. Instead, we decided to recognize how the morning discussions had developed into something more than a survey.

As we had been talking, it seemed that various elements of the discussion were connected in significant ways, almost as if we had been unconsciously mapping out a model of the design evaluation process. Instead of tackling a case study in detail, we chose to make this model explicit and explore its implications.

3 The Design Evaluation Process

The main structure of our model stemmed from our early concerns about whether we should discuss student assessment, or a more general approach to evaluation. We borrowed a distinction made in education between "summative" evaluation and "formative" evaluation. The idea of summative evaluation is to provide summary information after completion. The idea of formative evaluation is to provide assistance for further improvement.

The model structure reflects the distinction between summative and formative evaluation, and the context in which it occurs. This structure is shown in the figure below. The central element is a design, and in the model we suppose that earlier there was analysis, as depicted on the left, and later there will be actual code, as depicted on the right. Our main concern is with the design, and we do not consider analysis or implementation in detail. However, we should not forget that these steps are complex themselves, and that there are processes and people involved before analysis and after implementation.

There are three human roles involved: the designer, the implementor, and the evaluator. The designer takes the analysis and produces the design, and the implementor takes the design and produces the actual code. To these usual roles, we have added the evaluator. Of course, these roles may each involve more than one person, and one person might play several roles.

The evaluator takes the design, and produces a summative evaluation or formative evaluation. The summative evaluation will be especially useful to the implementor, who can use this information in making decisions related to implementation, such as which of several designs to use. The formative evaluation will be especially useful to the designer, who can use this information to improve the design. This means that evaluation leads to a circular structure within the process, allowing iterative improvement of a design, and introducing many possibilities such as prototyping and incremental design. This structure also facilitates both reflection by the designer, and mentoring by the evaluator.

Figure: A Model of the Design Evaluation Process

The structure above outlines the key parts of the process, but it does not yet address evaluation itself in detail. An evaluation is a determination of quality, but quality is only meaningful within some context: we cannot tell how good something is unless we know its purpose. In the structure described above, the purpose of design is to facilitate implementation, but this is not a simple idea. Priya and Steven had pointed out that there are a variety of desirable characteristics for a design, listing several of particular importance in industry, tabulated below. It seems clear that any evaluation must be done with reference to whichever of these characteristics, or "values", is of interest.

Issues Relating to Desired Characteristics

Design satisfies requirements

Maintainability

Partitioning

Adaptability

Scalability

Testability

Incremental

Desirable design characteristics, or "values", as suggested by Priya Marsonia and Steven Fraser.

This also relates to the discussion led by Federico and Alejandra, which chose one particular value, reusability, and considered how it could be determined. That discussion also involved the concept of managing forces during the design process, with different forces relating to different values. Eugene's presentation about patterns had also outlined the role of patterns and pattern languages in managing forces and assisting choice between design alternatives.

Understanding the design process itself does not necessarily lead to understanding of how the design should be evaluated. One reason is that not all design processes make explicit what design values the process emphasizes. Another reason is that a design process may be claimed to lead to some values, when this may not always be the case. However, where the values of a design process are explicit, and where it is clear that the process does lead to those values, this could indeed facilitate design evaluation.

For example, the design process details discussed by Federico and Alejandra, and also by Eugene, did address explicit values, and reasoned about how a process should lead to the values. In such cases, an examination of the adherence to the process would be an approach to design evaluation. In particular, an approximation to design evaluation might be achieved by checking whether designers had correctly followed the guidance of a pattern language.

At this point in the design of our model, we decided to address the circle of activities at the center, and identify choices for each of the key elements. In this way we were concentrating more on formative evaluation, although some issues discussed also relate to summative evaluation. The key elements we considered were the evaluation mechanism, the feedback method, and the design representation. The alternatives we discussed for each of these elements are tabulated below.

Evaluation Mechanisms

Metrics, such as class width and hierarchy depth

Heuristics, such as those documented by Riel [4]

Use-Case Walkthroughs, especially with role-play

Checks of Pattern and Pattern Language usage

Modification Tasks where the design must accommodate reasonable change

Specification Checks to see whether the design matches the analysis

Design Critiques, similar to art criticism or reviews

Evaluator possibilities: self (designer), peer, expert

Sample alternative evaluation mechanisms. Some approaches involve inspection of a design, and others involve testing the design by some activity. This distinction is also made in User Interface evaluation, and in assessment of implemented code.
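
To make the first two mechanisms concrete, the following is a minimal sketch, in Java, of a metric computation. It assumes that "class width" is read as the number of declared methods and "hierarchy depth" as the number of superclass links up to the root class; both readings are our own interpretations for illustration, not definitions fixed by the workshop.

    public class DesignMetrics {
        // "Class width" here: the number of methods declared by the class
        // itself (an interpretation assumed for this sketch).
        static int classWidth(Class<?> c) {
            return c.getDeclaredMethods().length;
        }

        // "Hierarchy depth" here: the number of superclass links from the
        // class up to java.lang.Object (also an assumed interpretation).
        static int hierarchyDepth(Class<?> c) {
            int depth = 0;
            for (Class<?> s = c.getSuperclass(); s != null; s = s.getSuperclass()) {
                depth++;
            }
            return depth;
        }

        public static void main(String[] args) {
            System.out.println("width of String: " + classWidth(String.class));
            System.out.println("depth of String: " + hierarchyDepth(String.class));
        }
    }

Raw numbers like these are only a starting point; heuristics such as Riel's [4] suggest how to judge whether a given width or depth is cause for concern.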


Feedback Methods

Face-to-Face explanation

Formal written review

Feedback related to Values

Feedback related to Design Process

Facilitation during a Modification Task

Facilitation during a Use-Case Walkthrough with role-play

Facilitation during design process itself

Use industry role viewpoints, such as customer or manager

Approaches to providing feedback following evaluation. Important considerations to make feedback effective are how well the feedback relates to the context, and how quickly the feedback can be provided. For these reasons, elements of feedback may be incorporated into the design or evaluation processes themselves.

Design Representations

Class Diagrams

Interaction Diagrams

Use-Cases

Testing Plans

List of Candidate Classes

Pattern and Pattern Language Usage

CRC Cards

Class Definition Stubs

Synthesized documents such as JavaDoc pages

Design Representations useful for facilitating evaluation. Many deliverables or artifacts from the design process would be useful, and new approaches should be considered, especially to help with evaluation and feedback.
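
As a small illustration of the last two representations, a class definition stub records documented signatures with deliberately empty bodies, so a design can be reviewed, and JavaDoc pages generated, before any implementation exists. The sketch below is hypothetical: the Jukebox, Student, and Track names merely echo the Cashless Jukebox case study from Mercer's textbook [3], and are not a design produced at the workshop.

    /**
     * A design-time class definition stub: documented signatures only.
     * Running javadoc over this file yields a synthesized design
     * document even though no behaviour is implemented yet. All names
     * are hypothetical, echoing the Cashless Jukebox case study.
     */
    public class Jukebox {
        /** Records that the given student has selected the given track. */
        public void select(Student who, Track what) {
            throw new UnsupportedOperationException("design stub");
        }

        /** Reports how many minutes of play the student has left today. */
        public int minutesRemaining(Student who) {
            throw new UnsupportedOperationException("design stub");
        }
    }

    /** Collaborator classes, also stubs at design time. */
    class Student { }
    class Track { }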


4 Conclusions

At our original 1996 workshop we had identified design evaluation as a key challenge, and when we planned to address it in 1998 we knew it would be difficult. As educators we know that evaluation is vital for several reasons, including student assessment and motivation. Yet we also know that evaluation must be based on sound principles, and be practical to accomplish. While we understand the importance of good design, the subtle nature of design seems to make evaluation of design more problematic than evaluation of more concrete work. Those of us considering teaching design to large groups shuddered at the difficulty of meaningfully assessing hundreds of designs.

Yet at our workshop there was a widespread feeling that we made significant progress in overcoming the difficulties of evaluating design. One general achievement was developing our conceptual model of the design evaluation process. The model connected several areas of knowledge and concern together in such a way that each of the areas seemed more understandable. Moreover, we hope the model will facilitate development of more principled design evaluation strategies.

Within our evaluation model, there were several key ideas that seemed to gain clarity and importance. For example, one key idea was that good design means design that is good with respect to particular values. Another was that the design process involves development toward these values, and involves making choices between them. More generally, we developed a better understanding of the role of evaluation, especially formative evaluation, in the design process. Evaluation follows design, but leads to feedback, which leads to better design. In this way, evaluation is a vital part of a virtuous feedback loop that enables better design, better understanding of good design, and a host of other benefits.

With this improvement in understanding, we found it easier to see the role and importance of key elements in the design evaluation process, including the design representation, the evaluation mechanism, and methods of feedback. We were then able to catalog and discuss alternatives for these elements, and were also inspired to invent new alternatives.

While we feel we have made a good start addressing the challenge of design evaluation, there is much yet to be done. The alternatives we discussed for representation, evaluation and feedback all need to be better organized. It would also be useful if the various alternatives could be better related to different values, to help make appropriate choices to evaluate for particular design values. As well as mapping out possibilities, there is also the need to test these approaches and indeed the model itself, to determine whether our understanding is correct.

We should also address the issue of summative evaluation in more detail. While the feedback loop makes formative evaluation attractive, summative evaluation is important beyond the design process itself. Understanding summative evaluation better would lead to better advice for people choosing between designs, and would also help in assessing the performance of designers. For educators, it would address the issue of student assessment.

Finally, we educators must better explore design evaluation in industry. Our growing understanding made us realize that evaluation was not a concern only for educators, but also of clear importance to anyone engaged in design. Despite having some familiarity with industry practice and design methodologies, we were unsure how evaluation featured as part of common industrial design processes. This is a topic we must investigate further.

Our workshops are an effort to explore early teaching and learning of object-oriented design. Our 1996 workshop identified the difficulty of design evaluation as a key problem, and we addressed this in the 1998 workshop. The day itself was stimulating, satisfying, and successful: we wish to thank everyone involved, and look forward to working together in the future. We feel a significant start has been made to meeting an important and difficult challenge.


5 Workshop Participants

Owen L. Astrachan

Duke University, USA

ola@cs.duke.edu

Federico Balaguer

University of Illinois at Urbana-Champaign, USA

balaguer@students.uiuc.edu

Robert Biddle

Victoria University, New Zealand

robert@mcs.vuw.ac.nz

Jane Chandler

University of Portsmouth, England

jane.chandler@port.ac.uk

Catalin Ciudin

Choice Hotels International, USA

catalin_ciudin@choicehotels.com

Gimi Ciudin

Choice Hotels International, USA

gimi_ciudin@choicehotels.com

Robert Duvall

Duke University, USA

rcd@cs.duke.edu

J. Philip East

University of Northern Iowa, USA

east@cs.uni.edu

Ed C. Epp

University of Portland, USA

epp@up.edu

Steven Fraser

Nortel Networks, Canada

sdfraser@nortel.ca

Alejandra Garrido

University of Illinois at Urbana-Champaign, USA

garrido@students.uiuc.edu


Priya Marsonia

Nortel Networks, USA

Priya-Narasimha.Marsonia.pmarsoni@nt.com

Rick Mercer

University of Arizona, USA

mercer@cs.arizona.edu

Marianna Sipos

Dennis Gabor College for Information Technology, Hungary

sipos@okk.szamalk.hu

Eugene Wallingford

University of Northern Iowa, USA

wallingf@cs.uni.edu


6 Position Paper Abstracts

Full text of all position papers is available at the following workshop web sites:

www.mcs.vuw.ac.nz/comp/Research/design1

www.cs.uni.edu/~wallingf/miscellaneous/oopsla98/


Evaluating Design by Reusability

Robert Biddle and Ewan Tempero

Victoria University of Wellington, New Zealand

What is good software design? Ultimately, it is design that facilitates working with the software. We believe a fundamental aspect of good design is that software should be reusable. This aspect of design looks to the longer term, and facilitates programmer productivity. In this paper, we discuss our approach to exploring reusability for evaluation of software design. Firstly, we review our understanding of how programming languages support reusability. Secondly, we briefly explain how these principles assisted our teaching, by enabling us to show how reusability could help in design and in design evaluation. Thirdly, we introduce our more recent work on developing programming tools designed to explicitly support understanding issues involving reuse and reusability.

Introducing student-centred methods in teaching computing: assessment-based methods

J.M.Chandler & S.C.Hand

University of Portsmouth, England

A module called Object Oriented Methods was introduced into the fourth-year undergraduate curriculum two years ago. It is run in a student-centred fashion for groups of 70+ students, and it strongly emphasises transferable skills and student feedback. This paper describes the course together with its methods, and shares experiences in attempting novel methods in a higher education environment. In particular, it shows how assessment can be developed to become central to the learning process.

In-class Discussion of Student Work

J. Philip East

University of Northern Iowa

Essentially, this technique has three steps: students submit their work; the instructor reviews submissions to determine elements to address in class discussion; and the instructor leads a class discussion on the selected items. The items can be chosen to address a variety of topics -- code layout, documentation, language constructs used, design decisions, data representation/structures, etc.

Evaluating Design Through Modification

Ed C. Epp

University of Portland

A valuable evaluation of the design aspects of students' programs occurs when students are required to modify their designs to accommodate new program functionality or data sizes. Students must document their modifications. The more localized their modifications, the better the design.


Evaluating Designs: Variability Measures

Federico Balaguer, Alejandra Garrido and Ralph Johnson

University of Illinois at Urbana-Champaign

The most important characteristic of object-oriented designs is their degree of reusability. All literature in the field mentions this feature as the key aspect of the object-oriented paradigm. In general, reusability means that a piece of design is used in more than one application, in the same or different domains. The latter reaches the ideal degree of reusability, and it generally happens at the level of single classes or, at the other end, with abstract designs (design patterns). However, reusability occurs only after various applications have been built, and particular classes as well as pieces of design have been subject to many modifications and have reached a certain maturity. Therefore, it is mainly experienced designers who can have a hand in reuse.

The above paragraph raises the question of how students in a teaching environment can evaluate the reusability of their designs when they have not built many applications in the same or different domains. This paper focuses on a characteristic of object-oriented designs that can be measured in a single application and that also leads toward reuse: the variability or extensibility of a design.

Evaluation of Object-Oriented Design

Rick Mercer

University of Arizona

My submission describes several concrete excursions into developing a vocabulary for object-oriented design in the first course (the first 15 weeks of a computer science degree). In an attempt to get students thinking about design, students are shown some elementary algorithmic patterns, such as "Input/Process/Output" and "Multiple Selection", to help design algorithms and/or programs. We also consider some object-oriented design heuristics, such as "All data should be hidden within its class" and "Avoid all-powerful classes". We end the semester by considering a case study: our textbook designs a Cashless Jukebox system for the student union. Students observe a simple OO design strategy that employs responsibility-driven design captured on Component Responsibility Helper (CRH) cards, role playing, and teamwork.

Ways of OOP Teaching

Marianna Sipos

Dennis Gabor College for Information Technology, Hungary

There is not enough time to teach programming in the historical order: structured programming, then object-oriented programming, then software development tools. In my opinion it is possible to teach object-oriented programming in a 4GL, which makes learning to program enjoyable and the teaching process efficient. But the implementation language determines how the object-oriented paradigm can be taught, and I describe one of the possible solutions. This change will leave us time to tackle bigger problems, such as distributed applications, which really do need analysis and design before implementation.

Using a Pattern Language to Evaluate Design

Eugene Wallingford

University of Northern Iowa

We would like to teach design early in the computer science curriculum, but evaluating design for the purposes of giving feedback and grades is hard to do. Much of the difficulty lies in a lack of vocabulary for describing and comparing designs. A pattern language of design and programming provides such a vocabulary, and more. By using a pattern language as one of the basic elements of our instruction, we provide students with a vocabulary for doing and comparing designs, and we provide the necessary foundation for evaluating design.


References

  1. Mercer, R., Biddle, R., Duvall, R., Clancy, M., and Cockburn, A. Teaching and Learning Object Design in the First Academic Year. OOPSLA 1996 Addendum, ACM SIGPLAN, 1997.
  2. Mercer, R. and Biddle, R. Resources for Early Object Design Education. OOPSLA 1997 Addendum, ACM SIGPLAN, 1998.
  3. Mercer, R. Computing Fundamentals with Standard C++: Object-Oriented Programming and Design. Franklin, Beedle & Associates, 1998.
  4. Riel, A. J. Object-Oriented Design Heuristics. Addison-Wesley, 1996.
  5. Wallingford, E. Elementary Patterns and their Role in Instruction. ChiliPLoP 1998 Conference, Wickenburg, Arizona. http://www.agcs.com/patterns/chiliplop/index.htm and http://www.cs.uni.edu/~wallingf/research/patterns/chiliplop