Article revisions: Learning is continuous, so my understanding of testing and related topics keeps growing. Below is the revision history of this article, along with the latest version.
- December 31, 2015, Version 1.0: Initial release.
- March 31, 2016, Version 1.1: Most definitions reworded, multiple paragraph edits and additions, updated Excel sheet to calculate actual team average.
- July 28, 2016, Version 1.2: Added sub-category average calculations (credit: Aaron Seitz) plus minor layout modifications.
- September 20, 2016, Version 1.3: Replaced Test Strategy & Planning > Thoroughness with Modeling (verb), and renamed Tools & Automation > Modeling to Models (noun).
Abstract: A Personal Metric For Self-Improvement is a learning model meant to be used by testers, more specifically those within software testing. Self-improvement is often intangible and not measurable in the quantifiable way that we as humans seek to understand. We sometimes use this as an excuse, consciously or subconsciously, to remain stagnant and not improve. Let’s talk about how we can abuse metrics in a positive way by using this private measure. We will seek to quantify that which is only qualifiable, for the purpose of challenging ourselves in the sometimes overlooked areas of self-improvement.
Video Version for the non-readers 😉
Overview
I will kick this post off with a bold statement, and I stand by it: you cannot claim to do good testing if you believe that learning has a glass ceiling. In other words, learning is an endless journey. We cannot put a measurable cap on the amount of learning needed to be a good tester, so we must continually learn new techniques, embrace new tools and study unfamiliar ideas in order to grow in our craft. The very fact that software can never be bug-free supports this premise; I plan to blog about that later, in a post I am working on regarding mental catalysts. For now, though, let’s turn our attention back to self-improvement. In short: since learning is unending, and better testing requires continual variation, the job of self-improvement can never be finished.
This job can feel intangible, almost like trying to hit a moving target with a reliable, repeatable process; therefore, we must be intentional about how we approach self-improvement if we want to be successful. Sometimes I hear people talk about setting goals, writing things down or trying to schedule their own improvement through a cadence of book reads, coding classes or perhaps tutorial videos. This is noble, because self-improvement does not simply happen, but many times we jump into the activity of self-improvement before we determine whether we have focused on the right space in the first place. For example, a tester believes that they must learn how to code to become more valuable to their company, so they immediately dive into Codecademy classes. Did the tester stop to think…
Maybe the company I work for has an incomplete understanding of what constitutes ‘good testing’? After all, the term ‘good’ implies a value statement, but who is the judge? Do they know that testing is both an art and a science? I am required to consider these variables if I want to improve my testing craft. Does my environment encourage a varied toolset for testers, or simply the idea that anyone under the “Engineering” umbrella must ‘learn coding’ in order to add value?
Now, Agile (big “A”) encourages cross-functional teams, while I encourage “cross-functional teams to the extent that it makes sense”. At the end of the day, I still want a team of specialists working on my code, not a group of individuals who are slightly good at many things. Is there value in some testers learning to code? Yes, and here is a viewpoint with which I wholeheartedly agree. However, the point here, as it relates to self-improvement, is that a certain level of critical thinking is required to engage System 2 before this kind of introspection can even take place. If that does not happen, the tester may end up focused on a self-improvement endeavor that is beneficial in general, but was not chosen for the intentional purpose of ‘better testing’.
So, why create a metric?
This might be a wake-up call to some, but your manager is not in charge of your learning; you are. Others in the community have created guides and categories for self-improvement, such as James Bach’s Tester’s Syllabus, which is an excellent way to steer your own self-improvement. For example, I use his syllabus as a guide and rate myself 0 through 4 on each branch, where a zero is a topic in which I am unconsciously incompetent, and a four is a space in which I am consciously or perhaps unconsciously competent (see this Wikipedia article if you need clarification of those terms). I then compare my weak areas to the type of testing I do on a regular basis to determine where the major risk gaps are in my knowledge. If I am ever hesitant about rating myself higher or lower on a given space, I opt for the lower number. This keeps me from over-estimating my abilities in a certain area and helps me stay intellectually humble on that topic. This self-underestimation tactic is something I learned from Brian Kurtz, one of my mentors.
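To make that rating process a bit more concrete, here is a minimal sketch in Python of the “when hesitant, take the lower number” rule. The function name, branch names and numbers are placeholders of my own, not an excerpt from the syllabus or the Excel template.

```python
from typing import Optional

def self_rating(confident: int, hesitant: Optional[int] = None) -> int:
    """When torn between two 0-4 ratings for a branch, keep the lower one."""
    return confident if hesitant is None else min(confident, hesitant)

# Placeholder branch names with invented ratings on the 0-4 scale.
branch_ratings = {
    "Test Design": self_rating(3, 2),      # hesitant between 3 and 2 -> keep 2
    "Bug Reporting": self_rating(3),       # no hesitation -> 3
    "Product Elements": self_rating(2, 1), # hesitant between 2 and 1 -> keep 1
}

# Weakest branches first, ready to compare against the testing I do day to day.
for branch, rating in sorted(branch_ratings.items(), key=lambda kv: kv[1]):
    print(rating, branch)
```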
The Metric
The personal self-improvement metric I have devised is meant to be used in a private setting. For example, these numbers would ideally not roll up to management as a way of evaluating whether you are a good or bad tester. These categories and ratings are simply there to give you a mental prompt about the areas you may need to work on, especially if you are in a team environment, as that requires honing soft skills too. However, you may have noticed that I have completely abused metrics here by measuring qualitative elements using quantitative means. This is usually how metrics are abused for more nefarious purposes, such as influencing groups of decision makers into taking unwarranted actions. However, I am OK with abusing metrics in this case, since it is for my own personal and private self-improvement. Even though the number ratings are subjective, they mean something to me, and I can use these surrogate measures to continually tweak my approach to learning.
My main categories are as follows: Testing Mindset, Leadership, Test Strategy & Planning, Self-Improvement, Tools & Automation and Intangibles. To an extent, all of these have a level of intangibility, as we’re trying to create a metric by applying a number (quantitative) to an item that can only accurately be described in qualitative (non-numeric) terms. However, since this is intended for personal and private purposes, the social ramifications of assigning a number to these categories are negligible. The audience is one, myself, rather than hundreds or thousands across an entire division. Below is the resulting artifact, but you can also download the Excel file as a template to use for yourself; it contains the data, glossary of terms, sample tester ratings, sample team aggregate, etc.
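If you prefer to see the arithmetic rather than dig through the spreadsheet, here is a rough sketch in Python of what the template is doing under the hood. The category and sub-category names come from the glossary below, but the numbers, the tester, and the way the team aggregate is stacked up are invented for illustration; the actual Excel formulas may differ slightly.

```python
from statistics import mean

# One tester's 0-4 self-ratings, keyed by main category -> sub-category.
# Numbers are invented; only two of the six categories are shown.
ratings = {
    "Testing Mindset": {
        "Logic Process": 3, "User Advocacy": 2, "Curiosity": 4,
        "Technical Acumen": 2, "Tenacity": 3,
    },
    "Tools & Automation": {
        "Data": 2, "Scripting": 1, "Programming": 1,
        "Exploratory-Supplement": 3, "Models": 2,
    },
}

# Each category score is the average of its sub-category ratings;
# these become the axes of the radar graph.
category_scores = {cat: mean(subs.values()) for cat, subs in ratings.items()}

# A team aggregate is simply the average of each tester's category scores.
team = [category_scores]  # append other testers' category_scores dicts here
team_average = {cat: mean(t[cat] for t in team) for cat in category_scores}

print(category_scores)  # e.g. {'Testing Mindset': 2.8, 'Tools & Automation': 1.8}
print(team_average)
```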
Application & Terms
Typically, you would use this for yourself or, if you manage a team of testers, privately with each of them. I would never share one tester’s radar graph with another, as that would defeat the purpose of having a private metric that can be used for self-improvement. The social aspects of this can be minimized in an environment where a shared sense of maturity and respect exists. You can also find the following terms and definitions in the “Glossary” tab of the referenced Excel sheet:
Testing Mindset:
- Logic Process: ability to reason through problems in a way that uses critical thinking skills to avoid getting fooled.
- User Advocacy: ability to put on the user hat, albeit biased, and test using various established consumer personas and scenarios (typically provided by Product Management), apart from the acceptance/expected pathways.
- Curiosity: ability to become engaged with the product in a way that can and does intentionally supersede the intended purpose as guided by perceived customer desires (i.e. like a kitten would with a new toy, yet also able to focus that interest toward high-value areas and likely risks within the product).
- Technical Acumen: ability to explain to others, with the appropriate vocabulary, what kind of testing has been, is being, or is going to be completed or not completed.
- Tenacity: ability to remain persistently engaged in testing the product while seeking risks related to the item under test.
Leadership:
- Mentorship: ability to recognize areas of weakness within the larger team and train others accordingly to address these gaps.
- Subject Matter Expertise: ability to become knowledgeable in both the product and the practice of testing, for the purposes of supporting the stakeholders’ desires as well as supplementing the information needs of other team members.
- Team Awareness: ability to get and stay in touch with the two main wavelengths of the team, personal and technical, in order to adjust actions to alleviate testing roadblocks.
- Interpersonal Skills: ability to work well with others on the immediate or larger teams in such a way that facilitates positive communication and allows for more effective testing, including the ability to convey product risks in a way that is appropriate.
- Reliability: ability to cope with challenges, lead by example based on previous experience, champion punctuality and support a consistent, ongoing telling of the testing story to Product Management.
Test Strategy & Planning:
- Attention to Detail: ability to create adequately detailed test strategies that satisfy the requirements of the stakeholders and the team.
- Modeling: ability to convert your process into translatable artifacts, using continually evolving mental models to address risk and increase team confidence in the testing endeavor.
- Three-Part Testing Story: ability to speak competently on the product status, the testing method and the quality of the testing that was completed for the given item under test.
- Value-Add Testing Artifacts: ability to create testing artifacts (outlines, mind-maps, etc) that can be used throughout the overlapping development and testing phases, as well as support your testing story in your absence.
- Risk Assessment: ability to use wisdom, which is the combination of knowledge, experience and discernment, to determine where important product risks are within the given item under test.
Self-Improvement:
- Desire: ability to maintain an internal motivator that brings passion into the art of testing, for the purpose of supporting all other abilities.
- Quality Theory: ability to support a test strategy with an adequate sum of explicit and tacit knowledge through the use of a varied tool belt: models, apps, techniques, etc, as well as maintaining a strong understanding of a tester’s role within the development lifecycle.
- Testing Community: ability to engage with both the internal and external testing communities in a way that displays intellectual humility to the extent that it is required to share new ideas, challenge existing ones, and move testing forward.
- Product Knowledge: ability to become a subject matter expert in your team’s area of focus such that you can better expose risk and provide value to product management.
- Cross-Functionality: ability to learn and absorb skills from outside a traditional subset of standards-based/factory-style testing, such that you can use these new skills to enhance the team’s collective testing effort.
Tools & Automation:
- Data: ability to interact with multiple types and subsets of data related to the product domain, such that testing can become a more effective way of exposing important risks, be it via traditional or non-traditional structures.
- Scripting: ability to use some form of scripting as a part of the test strategy, when appropriate, to assist with learning about risks and informing beyond a traditional tool-less/primarily human-only approach to the testing effort, so that the testing completed is more robust in nature.
- Programming: ability to write code in order to establish a deeper understanding of a product’s inner working, to gain insight into why and how data is represented in a product, as well as close the gap between tester and developer perspectives.
- Exploratory-Supplement: ability to embrace tools that can enhance the effectiveness of testing, allowing for a decrease in traditional administrative overhead.
- Models: ability to embrace new ways of thinking, including explicit testing models that are made available in the course of work, or via the larger community. Appropriate contextual models help to challenge existing biases, decrease the risk gap, and reshape our own mental paradigms for the purpose of adding value to the testing effort.
Intangibles:
- Communication & Diplomacy: ability to discuss engineering and testing problems in a way that guides the team toward action items that are in the best interests of the stakeholders, without overpowering or harming team relationships.
- Ability to Negotiate: ability to prioritize risks that pose a threat to perceived client desires, such that the interaction with product management allows for informing over gatekeeping and risk exposure over risk mitigation in the service of our clients.
- Self-Starter: ability to push into avenues of learning for the sake of improving the testing craft, without the need for external coaxing or management’s intervention. Ideally, this would be fueled by an ongoing discontent at the presence of unknown risks and gaps in learning.
- Confidence: ability to display conviction in the execution of various test strategies, strategies that hold up to scrutiny when presented to the larger stakeholder audience for the purpose of informing product management.
- Maturity & Selflessness: ability to distance one’s self from the product in a way that allows for informing stakeholders and the team with proper respect. This is done in a way that distances us from the act of gatekeeping by ensuring that our endeavor of serving the client supersedes our own agendas for the product.
The practical application of this is triggered when testers become introspective and self-critical about the areas mentioned within the spreadsheet. This can only be done when each area is studied in depth. I recommend that testers do an initial evaluation by rating themselves loosely on each category and subcategory, using the Glossary as a reference. These are my own guideline definitions for each term, on which you can rate yourself using a 0-4 scale; your definitions of these words may differ, so treat these as mine. This calculation is of course a surrogate measure, meant only as a rough estimate to determine areas for improvement. Once the areas that need the most attention have been identified (i.e. the lowest numbers that matter most to your team or project), the tester would then seek out resources to assist with those areas: tutorial videos, books, online exercises, peer mentorship, and others. Don’t forget to reach out both to your company’s internal testing community and to those who live in the online and testing-conference space.
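For those who would rather script that prioritization step than eyeball the spreadsheet, here is one more small, self-contained Python sketch. All names and numbers are invented, and the threshold is an assumption of mine rather than something baked into the template; deciding which low-rated areas actually matter most to your team or project remains a human judgment.

```python
# Invented sample ratings on the 0-4 scale, keyed by (category, sub-category).
sample_ratings = {
    ("Tools & Automation", "Programming"): 1,
    ("Tools & Automation", "Scripting"): 2,
    ("Leadership", "Mentorship"): 2,
    ("Test Strategy & Planning", "Modeling"): 3,
    ("Intangibles", "Confidence"): 4,
}

FOCUS_THRESHOLD = 2  # assumed cut-off: ratings at or below this get a closer look

# Lowest ratings first: these are candidate focus areas to weigh against
# what your team or project actually needs from you.
focus_areas = sorted(
    (item for item in sample_ratings.items() if item[1] <= FOCUS_THRESHOLD),
    key=lambda kv: kv[1],
)
for (category, sub), score in focus_areas:
    print(f"{score}  {category} > {sub}")
```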
Conclusion
Please remember, this metric is by no means a silver bullet, and these areas of focus are not meant to be used as a checklist, but rather as a guideline to help testers determine areas of weakness of which they may not currently be aware. Many times, we do not realize an area of weakness, or our own biases, until someone else points it out to us. I have found that a documented approach such as this can help me recognize my own gaps. As stated previously, this is most useful when applied privately, or between a tester and their manager in a one-on-one setting. This is only a surrogate measure that attempts to quantify that which is only qualifiable. Putting numbers on these traits is extremely subjective and is meant to catalyze your own introspection. It is my hope that this gives testers a guide for self-improvement as we collectively advance the testing craft.