by Helen Petrisca

Test Management – Part 3: Documentation and Reporting

Are you still testing or are you already documenting?

Do you know the situation when a developer says: “Come on, it takes two minutes to test this! It’s only the wording of the heading.”? But to ensure quality in the project and to be audit-safe, you have to document what (see also part 2 of this blog series) and how you tested. Test quality means that you can review at any time what the development stage was at test time and whether it still holds.

Nevertheless, the big question is why you should document and report testing at all. Just because the project requires it? Because that is what testers do? Just because of ISO 9000:2000 [1]? Like the whole project, testing must be managed, tracked, and reported. As shown in part 1, testing follows its own life cycle within software development – it is a kind of subproject. The scheme also makes clear that the software development framework matters: waterfall or agile.

Documentation takes longer than the test itself

Test documentation means “collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders” [2]. This task will take more time than you expect. Why? Because tests must be designed at the beginning of the development cycle (as outlined in part 1 of this blog series). This step includes the concept for the test strategy, covering methods and tools.

The focus must be on the result: What do you expect to see/hear/feel? Regardless of the project framework, the documentation ensures that testing covers at least the requirements. That is how IEEE 610 defines quality: the degree to which a system, component, or process meets customer expectations and needs [3]. For the test case definition, ISO 9001:2000 suggests: the degree to which a set of inherent characteristics meets requirements [4]. Another documented artifact is the test plan. It defines the work products to be tested and the distribution of test types among testers, and it should answer these questions: what, why, where, when, how, who.
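As a minimal sketch of what such a documented test case could capture, the questions above can be mapped to fields of a small data structure. All field names and example values here are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One documented test case, traceable to a requirement."""
    case_id: str                # what: unique identifier of the test case
    requirement: str            # why: the requirement this case covers
    environment: str            # where: the stage/system under test
    planned_for: str            # when: release or sprint
    tester: str                 # who: the assigned tester
    steps: list = field(default_factory=list)  # how: the test procedure
    expected_result: str = ""   # the focus: what you expect to see

# Hypothetical example entry
tc = TestCase(
    case_id="TC-001",
    requirement="REQ-42: wording of the heading",
    environment="QA stage",
    planned_for="Release 1.1",
    tester="H. P.",
    steps=["Open the page", "Check the heading text"],
    expected_result="Heading shows the agreed wording",
)
print(tc.case_id, "->", tc.requirement)
```

The decisive field is `requirement`: it is what keeps the test case traceable back to what was actually asked for.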

Tools

There are several established tools to track and document test cases. Knowing how to organize such a tool saves a lot of time when searching and creating overviews. One of these tools is Jira as a base: to use it for testing, a corresponding test plug-in must be installed. That sounds complicated, but it is also how the tool can be selected according to your requirements.

“Though Jira Software was not designed to serve as a Test Case Management, it can be configured to support test case management in a couple of different ways,” notes the Atlassian homepage. The functionality is quite clear: the plug-in SynapseRT provides standardized reports and group views based on the same database of development issues, grouped by defined labels or components. The relationship between requirement and test case is established via links.
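To illustrate what such a requirement-to-test-case link looks like technically, here is a hedged sketch of the JSON payload for Jira’s generic issue-link REST endpoint (`POST /rest/api/2/issueLink`). The issue keys and the link type name are made-up examples; the actual request is not sent here, only the payload is built:

```python
import json

def build_issue_link(test_case_key: str, requirement_key: str,
                     link_type: str = "Relates") -> dict:
    """Build the payload linking a test case issue to a requirement issue.
    Keys and link type are illustrative assumptions."""
    return {
        "type": {"name": link_type},
        "inwardIssue": {"key": test_case_key},     # the test case
        "outwardIssue": {"key": requirement_key},  # the requirement it covers
    }

payload = build_issue_link("TEST-101", "REQ-42")
print(json.dumps(payload, indent=2))
```

Reports and group views can then follow these links to show which requirements are covered by which test cases.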

The Test Suite of SAP Solution Manager supports manual and automated functional tests. If another SAP product such as ChaRM is in use, the Test Suite can be integrated into the requirement-to-deploy process. The whole system thereby manages to establish itself as the single point of truth for the software development status. Test planning happens via manual and attribute-driven test case selection. It supports waterfall and agile development approaches, including DevOps.

In both cases, test cases are reusable – a feature that proves quite important when it comes to regression testing. What about the number of concepts and plans? How many test cases? The more the better – but don’t forget to relate them to the requirements!

Make testing visible

It is very important that all your work as a tester is documented and therefore traceable and billable. No less important is reporting your work as a test manager to the project management.

Reporting the test status communicates the system quality. The level of detail can be individual (per requirement, release, project, etc.) or group-related (team, release, project). The frequency should be adapted to the release and deployment plan. In the beginning, test reports can be published daily or only at the end. To ensure the information actually reaches the stakeholders, it is important to choose the right communication channel (e-mail, dashboard, iMessage, etc.).

The test status can first be classified as “passed” or “failed”. To make a meaningful statement, the quantity must also be considered: total: 20 tests, passed: 15, failed: 5. The failures can be classified more specifically by their impact level: critical, high, medium, low. If the report looks bad, the presentation can be supplemented by the test plan, showing the current status and a forecast.
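The aggregation described above is straightforward to compute. This small sketch uses the example numbers from the text (20 tests, 15 passed, 5 failed); the impact levels assigned to the failures are made-up illustrations:

```python
from collections import Counter

# Each entry: (status, impact level of the failure or None if passed)
results = [("passed", None)] * 15 + [
    ("failed", "critical"),
    ("failed", "high"),
    ("failed", "medium"),
    ("failed", "medium"),
    ("failed", "low"),
]

total = len(results)
passed = sum(1 for status, _ in results if status == "passed")
failed = total - passed
by_impact = Counter(impact for status, impact in results if status == "failed")

print(f"total: {total}, passed: {passed}, failed: {failed}")
# -> total: 20, passed: 15, failed: 5
print("failed by impact:", dict(by_impact))
```

The per-impact counts are exactly what feeds a stacked-column or pie chart in the report.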

The summary report should include defect/bug reports too, because they show development needs and possibly new requirements (especially in an agile framework). For visualization it is very important to clarify the message of the report: what do you want to say with it? Based on that answer, the chart can be designed according to the Hichert principles (lean and clear, few colors) [5] to convey technical information and highlight the impact. Stacked columns are a commonly used diagram style. For group-related reports, a pie chart can be a good choice.

However the report ends up looking, test its effect and the reactions by asking people who are not involved. They should tell you how they interpret the chart – whether they understand the level of criticality.

Summary and conclusions

Testing is human nature. We have been doing it from the first second of brain activity. But what – 90% of test time is documentation? Where is the charm in that? As in every part of the test management topic, it turns out that no development is possible without documentation. Project members change, the scope changes, there are so many influences – but paper is patient, and nothing is better than a snapshot to show the evolution of software development.


References:

[1] ISO = International Organization for Standardization: defines standards for quality management and quality assurance
[2] https://glossary.istqb.org/gb/search/reporting
[3] IEEE Standard Computer Dictionary: The degree to which a system, component, or process meets customer expectations and needs.
[4] https://www.software-testing.academy/glossar.html : The degree to which a set of inherent characteristics meets requirements.
[5] hi-chart.com

Jira Software is a registered trademark of Atlassian Corporation plc, SAP Solution Manager a registered trademark of SAP SE. No endorsement by Atlassian or SAP is implied by the use of these marks.

About the author
Helen Petrisca

Helen is an expert in the field of test management for software development and has supported Woodmark since December 2018. Her extensive experience in classic and agile projects has sharpened her attention to detail.
