Abstract: This is a story about an experience Brian Kurtz and I had in shifting test documentation strategies from an all-manual, unintentional approach to the use of an exploratory logging tool. We also share our observations on how that shift worked to our advantage within our context, recapturing both time and operational expense. The testing community is always going back and forth on how to balance manual testing with automation. In that vein, we’re convinced that exploratory logging tools can help by allowing manual testers to spend more time on actual testing (investigating, learning about the product, exploring for value) and less time on the granular minutiae of documentation. These tools can help minimize the administrative overhead traditionally mandated to, or self-imposed by, testers. This experience tells us something about the larger picture, but it does not give us the whole story, and by no means were these the only measurements we took to make our business case.
Note: When we say “test documentation” in this article, we’re not referring to the usual product outlines, coverage maps, and so on that the feature teams create, but rather one specific team’s documentation focused on bug/session reporting. We realize that bug reporting is only one small part of what a tester does, but this proof of concept was executed on a team that does a lot of claims verification and manual check-testing, since they operate separately, outside of our normal scrum teams.
Special thanks to Brian Kurtz for auditing and contributing to this work.
Overview
When, how, and to what extent to document has always been something of a quandary for many. When I was a young tester, I felt documentation was a must. We must have documentation, simply for the sake of having it. We must write down everything about everything. There was no consideration of whether that documentation actually provided value. As I grew in my testing career, and through the mentorship of others, I realized that I needed to shift my thinking. My new barometer came in two forms. The first was the question, “What problem am I trying to solve?” The second was the word “satisfice”: how much documentation is enough to be satisfactorily acceptable based on what is actually necessary, rather than what we have traditionally done as testers? This involved learning to align my testing compass toward the concerns of product management, rather than simply all the concerns that I had.
Recently, we were working with a team that wanted to solve a few problems: decreasing the cost of testing and the amount of time it takes, all while raising product quality. Management always wants it both ways, right? More quality in less time. Well, we found a tool that we believed could help close this gap while allowing our team to do better testing.
Of course, the first step after choosing a tool (which required much research) was to make a business case, as this expense was external to the already approved yearly budget. Since this was a new cost to the company, our business case had to be compelling on both fronts: cost savings and time savings. The tool we landed on was QASymphony qTest eXplorer, an exploratory logging tool that takes much of the administrative overhead out of bug reporting. The tool costs $39 per user per month, which comes to $468 per tester per year. Since the team we targeted has three testers, that’s a yearly cost of $1,404. With the cost of the proposed tool in hand (we’ll subtract it at the end), let’s look at the other expenses and time estimates; a short worked calculation follows the two breakdowns below:
Manual Testing without an Exploratory Logger Tool
- Description: Each tester was doing their own form of bug reporting/session documentation, whether in Notepad++, Word, Evernote, or directly in Jira.
- Time spent (single tester, 1 day): 30 Min/Site @ 4 Sites per day = 2 hours per day.
- Time spent (single tester, 1 week): 2 Hours per day x 5 days = 10 hours per week.
- Contractor bill rate (unrealistic sample): $20* per hour x 10 hours = $200 per week.
- Multiplied for a team of three testers = 30 hours or $600 per week on test documentation.
Manual Testing with an Exploratory Logger Tool (qTest Web eXplorer)
- Description: Each tester was now using the new tool, which partially automates note-taking, error reporting, screenshot capture, and other administrative overhead normally required of testers.
- Time spent (single tester, 1 day): 5 Min/Site @ 4 Sites per day = 20 minutes per day.
- Time spent (single tester, 1 week): 20 Minutes per day x 5 days = 1.66 hours per week.
- Contractor bill rate (unrealistic sample): $20* per hour x 1.66 hours = $33.33 per week.
- Multiplied for a team of three testers = 5 hours or $99.99 per week on test documentation.
*This is an unrealistically low contractor bill rate, used simply for the sake of this blog post.
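To make the arithmetic above easy to check (and easy to rerun with your own numbers), here is a minimal sketch of the comparison in Python. It uses only the unrealistic sample figures from the lists above; the 52-week working year is our assumption for annualizing, not something we measured.

```python
# Rough cost model for the before/after documentation comparison.
# All figures are the unrealistic sample numbers from the lists above;
# the 52-week year is our own assumption, added only for this sketch.

BILL_RATE = 20.00        # dollars per hour (unrealistic sample rate)
TESTERS = 3
WEEKS_PER_YEAR = 52      # assumption used to annualize the weekly savings
TOOL_COST_PER_YEAR = 39.00 * 12 * TESTERS   # $39/user/month -> $1,404/year

def weekly_doc_cost(minutes_per_site, sites_per_day=4, days_per_week=5):
    """Return (hours, dollars) one tester spends on documentation per week."""
    hours = minutes_per_site * sites_per_day * days_per_week / 60
    return hours, hours * BILL_RATE

manual_hours, manual_cost = weekly_doc_cost(minutes_per_site=30)  # 10 h, $200
tooled_hours, tooled_cost = weekly_doc_cost(minutes_per_site=5)   # ~1.66 h, ~$33.33

team_weekly_savings = (manual_cost - tooled_cost) * TESTERS       # $500/week
net_yearly_savings = team_weekly_savings * WEEKS_PER_YEAR - TOOL_COST_PER_YEAR

print(f"Manual: {manual_hours:.2f} h / ${manual_cost:.2f} per tester per week")
print(f"Tooled: {tooled_hours:.2f} h / ${tooled_cost:.2f} per tester per week")
print(f"Net yearly savings for the team: ${net_yearly_savings:,.2f}")
```

With the tool cost subtracted, the sample figures net out to roughly $24,600 per year for the team; swap in your real bill rate, site counts, and working weeks to get numbers that mean something in your context.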
It was our experience that introducing this new tool into the workflow saved considerable time and recaptured expense that could be woven back into the budget and put to better use elsewhere.
Qualifications
Keep in mind that these measurements are surrogate measures: second-order measurements that do not take much time to gather. They tell us something, but they do not give us the whole story. If you do want to move toward a leaner documentation process through the use of exploratory logging tools, by no means should you make a business case on this data alone; it should be one facet among several that you use to make your business case. Also, don’t get locked into thinking you need a paid tool to do this, as 80% of your business case is most likely getting mindsets shifted to a shared understanding of what documentation is actually necessary.
We spoke not only with the testers, but also with the people absorbing their work and with their management, to gain insight into the pros and cons of this shift, both as it was taking place and after implementation. After our trial run with the tool, before we paid for it, we followed up on the pros and cons: How does the tool benefit your workflow? How does it hinder it? What limitations does it have? What does manual documentation provide that the tool does not? How has the tool affected your testing velocity, positively or negatively? Is the report output adequate for the team’s needs? What gripes do the developers and managers have about the tool? Is it giving team members outside of testing the information they need to succeed? And so on. So, while the expense metrics are appealing to the business, the positive effects on testing are what got us excited: the tool frees testers to work on other priorities, increases collaboration, and keeps them from being consumed by the documentation process. We also spent many hours consulting with QASymphony’s support team on a few bugs, but that was part of the discovery process of learning the workflow’s quirks.
Our Challenge To You…Try it!
Try an exploratory logging tool: anything that can track or record your actions in a more automated way, eliminating much of the documentation overhead and minutiae that testers normally deal with daily. We happened to use qTest Web eXplorer (paid, free trial), which is available for Firefox and Chrome; you could also try a standalone tool called Rapid Reporter (freeware), which we have found performs many of the same functions. We have no allegiance to any one product, so we would encourage you to explore a few of them to find what works (or doesn’t) for your context. The worst that can happen is that you try these out, they don’t work for your context, and you go back to what you were doing. In the process, though, positive or negative, you will hopefully have learned a little.
Conclusion
We feel it is very important for testers to evaluate how tools like this, combined with processes such as Session-Based Test Management, can make testing a much more efficient activity. It is easy as testers to settle into a routine, believing we have ‘figured it out’; we stop exploring new tools, consciously or subconsciously, and new ways of approaching situations become lost to us. We hope you take our challenge above seriously and at least try it. You are in charge of your own learning and of improving your craft as a tester, so we would encourage you to try something new, even if you fail forward. This could bring immense value to your testing and your team. Go for it. If you do work in a more restrictive environment, feel free to use the data we have gathered here as justification to your manager or team to try this out.
One of the biggest pitfalls in the testing community right now is over-documentation. Many testers will claim that test cases and other artifacts are required, but often they are not; it just feels like they are. If you believe that heavily documented test cases and suites are required to do good testing, then you are locking yourself out of the reality that there are many other options, tools, and methods. Do you think that asking a product owner to read through one hundred test cases would really inform them about how the product works, how you conducted your testing, and what the risks and roadblocks are? I would lean toward ‘no’: a script is simply that, a script, not a testing story.
In this blog post, we told a story about how a tool alleviated documentation overhead within our context. It is a story based on our experience with that tool and the benefits it brought. While we feel this is very different from traditional test documentation, we feel it is a step in the right direction for us. But don’t just take our word for it: read this or this from Michael Bolton, or this from James Bach, or this from James Christie, or…you get the point.