Problem Reports
"Houston, we have a problem."
Jim Lovell, Apollo 13 Commander
What makes a good problem report?
Good problem reports are clear, complete, and objective.
IEEE standard 1044-1993 is a useful source of ideas for fields and processes
around bug tracking. There are lots of defect tracking systems available on
the market. Two industrial-strength products are ClearDDTS for Unix and
ClearQuest for Windows, both from Rational Software.
Full disclosure: I have a vested interest in recommending these, since I
worked on both products, but I do believe they are two of the most
flexible products on the market.
At a minimum, the initial report should include the following (a sketch
of such a record appears after this list):
- a one-line headline summary of the problem.
This should be as specific as possible, since the headline is usually what
people read first and search for in the database. Headlines like "Foo is
broken" are not useful unless foo is completely inoperable. A better
headline might be "Foo is a no-op if the bar option is used."
- how severely the behavior affects product use
This field is often expressed as a numeric severity for easy sorting. Many
shops define Severity 1 to be crashes, hangs, and any other problem that
makes the product unusable. Severity 3 is often the run-of-the-mill
bug, and Severity 5 is used for enhancements, typographical errors, etc.
- a detailed description of the problem
This field should expand on the description in the headline, and also
provide detailed instructions on how to recreate the problem. If there is
any doubt about why the behavior is a problem, include the reasoning
behind your choice of severity level.
- where exactly the problem was observed
Depending on the product under test, this might include a build identifier,
machine information, operating system information, etc.
- who submitted the report and when
- maybe initial assignment information
Depending on how much the submitter knows about the internals of the product,
they may be able to make an educated guess about what component has the problem
and which developer should look at it first. This is really more of a process
issue than anything else.
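To make the shape of such a report concrete, here is a minimal sketch in
Python of a record carrying these fields. The field names, types, and
five-point severity scale are illustrative assumptions, not the schema of
any particular tracking system.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ProblemReport:
        # One-line, specific summary; usually what people read and search first.
        headline: str
        # Assumed scale: 1 = crash/hang (unusable), 3 = typical bug,
        # 5 = enhancement or typographical error.
        severity: int
        # Expanded description, with detailed steps to recreate the problem
        # and, where the severity might be questioned, the reasoning behind it.
        description: str
        # Where the problem was observed: build, machine, operating system.
        build_id: str
        environment: str
        # Who submitted the report, and when.
        submitter: str
        submitted_at: datetime = field(default_factory=datetime.now)
        # Optional initial assignment guess; really a process decision.
        suspected_component: Optional[str] = None
        assigned_to: Optional[str] = None

A report matching the headline example above might then look like this
(all identifiers here are hypothetical):

    report = ProblemReport(
        headline="Foo is a no-op if the bar option is used",
        severity=3,
        description="Run foo with the bar option; foo exits without "
                    "doing any work. Expected normal processing.",
        build_id="build-1234",             # hypothetical identifiers
        environment="Windows NT 4.0, SP3",
        submitter="jdoe",
    )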
Good problem report metrics
Establishing good metrics is really hard. If you're interested in starting
a metrics program, take a class first. If you want to read a book,
Making Software Measurement Work by Bill Hetzel is a good one.
Practical Software Metrics for Project Management and Process Improvement
by Robert Grady is a classic. All responsible metrics books include
a warning to start slowly. Kids, don't try this at home.
- Testers should self-monitor the number of problems they submit that
are deemed "not a bug". The number should be about 10% of the total
bugs they file. Below that, they may not be filing bugs in the gray
areas of the product. This is bad because those questions need to be
asked. Above 10%, testers risk losing credibility. (The sketch after
this list shows one way to compute this and the next two metrics.)
- The defect arrival rate versus the disposal rate. This is really two
metrics, but you have to look at both numbers together. These two trends
will tell you a lot about how close the product is to being ready to ship.
- Distribution of the defect severity versus the defect priority. As described
above, the severity is how bad the symptoms of the problem are. The priority
is what order the problem will be fixed in. Priority is sometimes expressed
as "must, should, could". Must-fix bugs are enough to stop shipment.
This metric is a good way to find defects that have been mis-prioritized.
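Here is a rough Python sketch of how these three metrics might be computed
from exported report records. The record field names ("submitter",
"resolution", "opened", "closed", "severity", "priority") are assumptions
for illustration, not the schema of any real tracking system.

    from collections import Counter

    def not_a_bug_ratio(reports, tester):
        # Fraction of a tester's submissions closed as "not a bug";
        # the target discussed above is roughly 10%.
        mine = [r for r in reports if r["submitter"] == tester]
        if not mine:
            return 0.0
        rejected = sum(1 for r in mine if r.get("resolution") == "not a bug")
        return rejected / len(mine)

    def arrival_vs_disposal(reports, period):
        # Defects opened versus closed in a given period (e.g. a week label).
        arrived = sum(1 for r in reports if r["opened"] == period)
        disposed = sum(1 for r in reports if r.get("closed") == period)
        return arrived, disposed

    def severity_priority_distribution(reports):
        # Tally (severity, priority) pairs; an outlier such as a Severity 1
        # marked "could fix" is a candidate for re-prioritization.
        return Counter((r["severity"], r["priority"]) for r in reports)

Plotting the arrival and disposal counts per period as two trend lines
makes the ship-readiness picture easy to read: the product is converging
when disposals consistently outpace arrivals.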
Bad problem report metrics
It can be argued that any single metric is a bad metric. You can
never get the whole picture from a single metric, and any single metric
will skew the team's behavior toward optimizing that one number. That
said, there are a number of metrics that will almost always cause trouble.
- bugs submitted by person. All this will tell you is who has been doing
testing, but it won't tell you what kind of testing. The worst thing you
can do is hold a defect-finding "contest".
- most metrics that attach a person to a defect count are a problem if
they are used for anything except workload averaging across the team.
Copyright 1998 Anne Powell
last update 3/8/98