Blog posts in the “Models” category relate to the various testing models, mnemonics, oracles, heuristics, tables, charts and other frameworks that exist to help prime our thinking as testers. This allows us to do better work and be intentional about our testing so that we can paint a more complete picture for our stakeholders.

The Binge-Purge Cycle of Frameworks

Abstract: Process frameworks are all the rage; some are sold as a magic pill, a silver bullet to solve your organization’s problems on the way to rapid delivery. They all have pros and cons, so I won’t spend time belaboring what’s already widely discussed out there. Instead, I want to share my thoughts about the vicious cycle they can perpetuate within companies, as demonstrated by a history of many organizations continually adopting and then scrapping these frameworks (most recently Capital One). The binge/purge cycle is staggering to me.

Recently, Dean Leffingwell posted on LinkedIn about SAFe 6.0, a new version of his framework. My position on these process frameworks has evolved over the years, and at this point I’m convinced they are more for corralling and controlling systems that have other systemic issues at play (whether they intend to do this or not). Below was going to be my reply to Dean’s post, but it was too long for LinkedIn’s character limit, so I decided to post it here. I would love to hear your insights and feedback.

The more I interact with different software companies, the more I feel these frameworks are being used (many times unintentionally) to mask unrelated problems in that given system.

For example, when a system (software division) suffers from one of many pain points (e.g. let’s use ‘poor talent and hiring practices’ for the sake of this exercise), these process frameworks can mask the symptoms of that source issue. They do this by producing more busyness, sometimes only as the ‘appearance’ of work.

Now, it’s likely that none of us would claim that any of these frameworks can increase the software engineering acumen of employees; however, the new busyness being observed may make leadership feel like the framework is succeeding, when actually it is only masking the source.

No, this framework doesn’t purport or claim to raise engineering talent; yet, when management sees busyness from formerly lower-performing employees, those leaders may not feel the pain of that source problem anymore. If there’s no pain, there will be no intrinsic motivation to solve the real core issue of poor talent and hiring practices. Over time, that problem becomes more distinguishable and dissociated from the efforts of applying the framework. Years later, new leadership witnesses this along with other unrelated systemic issues (corporate culture, product quality, innovation, etc.) and deems that the framework isn’t working. They either change to another out-of-the-box solution, complete with more promises, or they discontinue the implementation entirely. Ever heard a version of this statement? – “We tried Agile. It didn’t work.” Maybe it failed for legitimate reasons, but that’s the exception, not the rule.

There are at least two issues here that bother me, compounding one another:

1) The original source problem of poor talent was hidden by the framework’s implementation.

2) The assertion that the framework was a failure, or the cause of other systemic inefficiency, is false.

Both of these are misleading, but due to the nature of short human memory and new leadership rotation, the cycle continues. I’ve seen this repeated in many places I’ve worked or consulted, and I am curious how long it will take leadership in this community to recognize this cycle and preventively squash it for future generations. Or (I fear) is the rent-seeking nature too alluring and will continue to overtake true craftsmanship?

Anecdotally, the most successful environments I’ve worked in, where engineering talent and soft skills were high, operated fabulously without any kind of heavy process framework.

The Agile community needs an upheaval, a revolution, similar to what the testing community started to go through in the early 2000s. Will that happen soon? 8-Ball reads, “Future uncertain.”

Practical Approach to Delivery Enablement

Abstract: The job of the influencer or coach/mentor in a software team, regardless of title (Manager, Director, Agile Coach, Scrum Master, Tech Lead, etc) isn’t to evangelize best practices and beat people over the head with manifestos. Rather, the job of any professional in an Agile environment is delivery enablement, which I define as enhancing/optimizing delivery of value to the customer and our stakeholders. This translates into working code in production. Everything we do should roll up to that, and the 9 Business Outcomes* described below.

When coaching a new team or set of teams, and especially when joining a new company, it is very important to bring a great amount of intellectual humility with you. I have seen many folks come in guns blazing with their ideas, trying to implement what they did at previous workplaces too quickly. This is far too common, and it usually has multiple consequences:

  • It ignores the unique context of the new team(s) or company (context-oblivious vs. context-aware)
  • Proofs of concept executed without first understanding context can fail, even if they might have been a good idea, due to lack of buy-in and ownership (command-and-control approach vs. coaching/mentorship)
  • It can be off-putting to partners and team members who know the system better and may already have ideas they were hoping you’d pair with them on instead (know-it-all vs. collaborative partner)
  • It has multiple other detriments, including burning bridges and very quickly being seen as someone to be worked around instead of worked with (an impediment rather than a delivery enablement advocate)

It is important to show that you care about the people more than anything else up front. The technology problems will come, believe me, and when they do, you need to be respected and trusted in order to be an effective value delivery enabler. So, what is my mental model, and what approach do I take when faced with new teams or a new environment?

Practical Approach to Delivery Enablement

First 30 Days (Initial Absorption)

It is important to learn the various team contexts and the people at play, gaining trust and building chemistry rather than making suggestions out of the gate…

  • Heavy Learning of various team frameworks, contexts, scrum events, communication patterns, practices, tooling, dependencies, etc.
  • Build Partnerships with key technology and business stakeholders (via 1-on-1s, team outings, troubleshooting, cultural events, etc).
  • Identify top 2 or 3 Impediments (‘pain points’) experienced by the development and product team.
  • Gain Consensus on prioritization of the 9 Business Outcomes (below) with Tech/Product delivery leadership buy-in.

Day 30-90 (Planning and Experimentation)

At this point, I’ve reached the second stage of learning and I seek to leverage new relationships to start planning how to tackle delivery impediments…

  • Prioritize and Generate Impediment resolution plan: The “How” and any possible solutions must come from the team, or indirectly through coaching (e.g. Socratic Method)
  • Upstream Coaching: Management and other influential business stakeholders external to the development team may need to be educated on inhibitive anti-patterns observed – Start small and build conversational safety (more offering up questions than definitive changes at this point).
  • Fill Agility Practice Gaps: Delivery Transparency via Dashboarding/Metrics, Various trainings, Balance team protection with stakeholder needs, and more.
  • Canary-in-a-Coalmine: Execute small proofs of concept (POCs), targeting the more experimental/open teams, to gain traction on any new or pivoted agility or engineering practice. (The goal here is to avoid command-and-control practice-setting and instead let peer-to-peer influence abound post-POC.)
  • Conduct initial Team Health & Agility assessments at both the ART/org and team levels (gain consensus on which categories matter to move the needle)
  • Consult development team Retrospectives on any new POC or change and bubble up feedback as appropriate to management and influencer level.
  • Widen circle of go-to partners, allies and proactive stakeholders (leverage strengths, interests to get POCs off the ground).

Day 90+ (Coaching and Optimizing)

A success measurement at this stage is having gained the trust and respect of the development team, other peers and leadership such that the continued momentum and desire for continuous improvement stays strong. We can close the loop by both directly and indirectly affecting the business outcomes that the business previously prioritized during the first 30 days…

  • Continue to Fill Agility Practice Gaps: Optimize Value-flow through ART (Agile Release Train), PI Planning, Risk & Dependency Tracking, Hardening/Innovation allocation, and more.
  • Engineering Practices: Provide coaching on Shift-left DevOps embracement, CI/CD gap identification, automation opportunities, unit testing, build and deployment gates, quality and development risk modeling and test strategy (contextual depending on Monolith or Microservice), SDLC optimization, Vertical Slicing of teams, monitoring and alerting and more.
  • Tool Optimization & Information Radiators: Offer guidance on ALM configuration and visibility, provide stakeholder-appropriate value-driven dashboards (Product v Business v Devops, etc).
  • Impediment Removal: Continually facilitate team and org level technical and non-technical roadblocks.
  • Conflict Resolution: Manage team conflict via iterative stages only escalating when appropriate through 1-on-1, 2-on-1, then manager level if proven necessary or for recurring trends – (e.g. keep ‘team business’ at the team level ideally).
  • Ongoing Agility Health Assessments: Continue to assess/re-assess quarterly, or every six months, depending on environment and contextual maturity.

The Nine Business Outcomes

Everything we do should roll up to one or more of these 9 Business Outcomes. Whether you are a developer, tester, product owner, or otherwise, it is important to gain consensus with your stakeholders on which outcomes are and are not a priority in their minds. This allows delivery teams to move in the direction that our stakeholders across both IT and the rest of the business have in mind…

PathToAgility-9 Outcomes

Let us rise above the average statistic that says 64% of the features we build are rarely or never used. Imagine how much time and OpEx waste that is ($$). This is what we need to think about as development teams (Product Owners, Engineers, etc.). The goal of Agile isn’t just shortening the feedback loop, but also the learning cycle, so we CAN deliver the right thing. More on that in the video linked here: David Hawks – User Stories Suck

*Note: The Nine Business Outcomes content is part of the Path To Agility (PTA) program – more information can be found on PTA at the link provided.

Hiring Good Testers

Abstract: I frequently get asked how I interview testers, be it anyone from exploratory to automation and anywhere within that spectrum (i.e. including “Toolsmiths”; see Richard Bradshaw’s work here for context on that term). What the person is really asking me, though, is, “How do you know someone who interviews well will actually perform well once hired?” The real answer is, ‘You don’t.’ You can use interview models to help reduce the unknowns, but ultimately, if you’ve been a hiring manager long enough, you’ve hired some duds and had to manage them out. I ultimately try to talk to people about the number one thing that drives good testing, and that is the desire and capacity to learn; desire alone isn’t enough. Testing is, after all, learning at its core. We’re scientists, not showstoppers. We’re explorers, not Product Managers. Our passion lies within the journey, not so much the end or counting the number of things we found along the way (unless you’re doing Domain Testing – I jest). So, as a boilerplate for a year or so, I used Dan Ashby’s interview model as my go-to when doing phone screens and in-person interviews. After a few more years, I realized that my interview process, like my testing process, must continually adapt and break so that it can re-form to fit the contexts of whatever company or product I work in. The major shifts in my interview process have coincided with the times I changed companies. Below are my current ways of ‘weeding out the weak,’ per se, and saving myself time when it comes to finding passionate talent in testing and automation (notice, other than this sentence, you won’t find any questions around specific tools like Selenium, SoapUI, etc. Good testing is tool-agnostic). The sections are divided below: I typically use Phase I during the initial phone screens, and Phase II when candidates come in person. Sometimes I dive into Phase II on the phone if I get the feeling they are ahead of the curve.
<Note: The term “agile” is intentionally typed as ‘little-a agile’, not ‘big-A Agile’. We’re talking about the ability to flex and adapt, not the marketing monolith that is peddled heavily right now.>


Phase I: Initial Weed-Out Questions for Testers (in a Modern Software Development Environment)

  • What is good testing?
    • Poor answers: Clicking through a product to make sure the quality is good and all of the requirements are met.
      • This person likely has a shallow definition of what it means to test. This is Claims Testing, sometimes called human checking, but it does not indicate an understanding of deep testing. This candidate is also a Product Owner at heart if they think they “assure” quality, rather than cast light on risks so that others (Product Owners/Business) can assure what does or doesn’t meet the level of quality desired.
    • Acceptable Answers: Exploration of a product, experimentation so that we can learn about what’s happening in a product, casting light on any risks that might threaten the value of the product or timing of the project and making those risks known to our stakeholders so they can make decisions on how to mitigate that risk (i.e. fix, ignore, backlog, etc)
      • This candidate has at least a basic understanding of their role as a tester within a larger organization. Their statement around bringing risks to light, but not making decisions on them is healthy and speaks to their maturity of not being in the gatekeeper mindset.
  • What is the role of a tester in an agile organization?
    • Poor answers: Find bugs, write test cases, break things, stop releases, get certifications
      • Shows the gatekeeper mindset still exists, with a heavy administrative focus that ties the value of a tester to test-case writing or bug counts, instead of to providing value to the end customer through holistic testing approaches.
    • Acceptable answers: Explore for value to the customer even if my PO didn’t mention it in the acceptance criteria, challenge the veracity of the acceptance criteria, operate under the assumption that Product probably always missed something when creating User Stories, use testing models to fill those gaps in my thinking so I am not just relying on my mental model/experience to do good testing.
      • This displays intellectual humility in understanding that their thinking is inherently flawed in some respect (which it is for everyone). It also shows a healthy understanding of testing and the flexibility to pivot for the purpose of providing customer value, not just checking off acceptance criteria.
  • When does the testing process start and end in an agile scrum team?
    • Poor answers: After code complete, when the Dev hands off the code to QA, after a deploy we start testing, and then we stop when we cover everything.
      • This shows that they believe testing is something you “start” after development, and that they are still in a Waterfall mindset when it comes to what testing actually is (i.e. not just clicking around a product). This answer also implies that we test until we as testers are satisfied (unhealthy), not until Product is satisfied (healthy).
    • Acceptable answers: Throughout the entire SDLC process – this starts in the portfolio planning stages as we should have a QA/Test lead pairing with Dev, Product and Architecture to discuss risks up front as we initially design the product. If we’re waiting until the sprint to start testing, then we’ve missed a lot of opportunities to help our stakeholders cast light on risk, much of which can be uncovered earlier in the process before any code is actually written.
      • This shows the candidate has a firm understanding of the fact that risk exposure and mitigation never starts and ends, but is rather ongoing. I would also ask follow-up questions around how they did this at previous companies, because it shows a high sense of maturity and leadership if they injected themselves into the design phase and not just down the line in the scrum-team portion of testing. In fact, a good tester in an agile org will be frustrated and may even have a story about leaving a company that did not allow them to participate earlier in the process.
  • With the world of agile testing constantly changing, what meetups or conferences do you attend, and what books do you read on the latest practices that would make us (your company) want to hire you over any other tester?
    • Poor answers: I haven’t read any books or attended meetups, but I have 20 years of experience and I Google when needed to solve problems, as well as read Guru99 which has articles on testing and development.
      • Years of experience does not make someone a good tester, nor does ad-hoc Googling show a learning mindset, as everyone has to do that as part of their job anyway. Also, when you Google “software testing”, the first non-ad hit that comes up is Guru99, so for obvious reasons this is a questionable answer when given alone.
    • Acceptable answers: Every month or two I go to a local meetup, here are a few blogs I read regularly <names 3 or 4 sources>, one of my favorite books on testing is <names title and author and tells you about something they learned from it>, I follow people on Twitter <like it or not, this is where the testing community lives and thrives! E.g. Link>
      • This shows that they are constantly learning (the #1 skill needed for good testing is learning – getting tired of hearing this yet?). No deep technical questions need to be asked to determine whether someone is in the right mindset for a career in the test industry, as many managers think – now, depth of knowledge for a specific role is another story. This also shows they are immersing themselves in the testing community and finding out what other testers and companies are doing to stay up to date on the latest tools, practices and mindsets around testing and agility, rather than waiting for their manager or the company to bring that to them.

Phase II: Advanced Quality & Testing Theory topics

If the candidate breezes through those with flying colors, I then go into the deeper topics below, which typically can only be answered confidently by true practitioners of the testing craft.

  • Familiarity with the Four Schools of Software Testing (and why Context-Driven is healthier than the other three)
  • Understanding of good/bad testing measurement and metrics (e.g. DLR, Defect Density, First/Second/Third order measurements and when to use each appropriately)
  • Testing heuristics (e.g. HTSM model for testing)
  • Explain good Test Reporting (i.e. the 3-Part Testing Story/Braid)
  • Testers are not Gatekeepers (i.e. Product vs Tester responsibility understanding)
  • Regression practices (RCRCRC model, as well as Combinatorial testing practices to decrease process waste)
  • Testing Oracles (FEWHICCUPPS model for Product consistency)
  • Quality Criteria: Capability, Reliability, Usability, Charisma, etc (ability to give example of test types in at least a few of these)
  • Testing Techniques: e.g. Can they explain the difference between Scenario Testing and Flow Testing. What is Domain Testing? Etc.
  • Agility within testing (shift left, pairing early, mindset of not having to wait for code to start testing, Shake ‘N’ Bake pair-testing process)
  • How Exploratory Testing differs from Ad-Hoc or Random Testing (and why that matters – i.e. Exploratory testing should have a structure and they should be able to speak to that)
  • Test Chartering and SBTM (Session Based Test Management)
  • The Dead Bee Heuristic for problem solving and ensuring issues are actually fixed
  • Artifact generation: Lean Test Strategy documentation vs Heavy Test Cases (i.e. hopefully the former, so they spend more valuable time testing rather than documenting)
  • Understanding the difference between Checking and Testing
  • What is Galumphing and why is it important in testing?
  • What are the two pillars of Testability (Observability and Controllability) and can they explain why both Devs and Testers should care about them
  • Good understanding of the difference between ‘Best’ and ‘Contextual’ Practices
  • Good understanding of the detriment of IEEE testing standard ISO29119 (+other standards from the consortium or dogmatic static models)
  • Bonus: Familiarity with the RST namespace (how and why this group of the testing industry has broken off from traditional norms, shedding legacy habits and mindsets, etc)
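As an aside on the combinatorial testing item above, here is a minimal sketch in Python of the waste-reduction idea behind it: a naive greedy all-pairs generator. The configuration parameters and values below are made up purely for illustration; a real suite would likely use a dedicated pairwise tool, but the principle is the same.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs cover: repeatedly pick the full combination that
    covers the most still-uncovered value pairs until every pair appears."""
    names = list(params)
    values = [params[n] for n in names]
    # Every pair of (parameter index, value) that must co-occur at least once.
    uncovered = {
        ((i, a), (j, b))
        for i, j in combinations(range(len(names)), 2)
        for a in values[i]
        for b in values[j]
    }
    all_combos = list(product(*values))
    suite = []
    while uncovered:
        def gain(combo):
            # How many not-yet-covered pairs this combination would cover.
            return sum(
                ((i, combo[i]), (j, combo[j])) in uncovered
                for i, j in combinations(range(len(combo)), 2)
            )
        best = max(all_combos, key=gain)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard(((i, best[i]), (j, best[j])))
        suite.append(dict(zip(names, best)))
    return suite

# Hypothetical configuration matrix, purely for illustration.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de", "ja"],
}
suite = pairwise_suite(params)
# Exhaustive testing needs 3 * 3 * 3 = 27 runs; covering every *pair*
# of values takes far fewer.
print(len(suite), "runs instead of", 3 * 3 * 3)
```

The point for process waste: most configuration bugs are triggered by the interaction of one or two values, so covering every pair (rather than every full combination) cuts the run count dramatically while keeping most of the risk coverage.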


People who react well to the above more advanced topics, displaying that learning mindset (even if they do not come across as experts on the specific question asked), are typically the ones you want in your shop. Of course, you must be sure that your in-person interview process has a good element of letting candidates experiment in the interview so you can see how they think. Many times I open our web product on a laptop and put it in front of them to see what they do. Do they sit there without touching it and just speak theory, or do they grab the laptop, pull it toward them, and start playing with the product? The latter usually tells me they have an experimentation mindset and a willingness to learn, and it leads to better questions from them about our business needs and desires.

At the end of the day, for most projects, I value a growth mindset and passion for learning over someone with 20 years of experience who thinks they have everything already figured out and little to learn. Intellectual humility, the belief that one’s thinking is inherently flawed and has gaps, is key to being a good scientist, and thus a good tester. Some testers have even come to call themselves ‘Professional Skeptics’ to sum up that scientific, humble and critically thinking mindset in a single phrase – and I like it. If you’ve been hiring for any length of time, you’ve probably had people who interviewed well but eventually fell short of your expectations; I know I have, and I had to manage them out. That is to say, I do not present this information as a silver bullet of sorts. We are still humans; thus, this blog post is yet another flawed model from which you must adapt your hiring process, discarding or keeping what you feel is best suited to your environment. I am eager to hear your thoughts on the common interview behaviors and attributes you’ve noticed across your good hires that did live up to, or grew beyond, your initial expectations.

What is Testing?

Abstract: A brief post, the tip of the iceberg on exploring the question ‘What is testing?’. If this intrigues you, then comment or contact me and let’s have a deeper discussion.

Updated: April 19th, 2018 (added my mental model to give a visual/be more explicit about my more general statements)

Many people confuse “checking” for “testing”. Checking is a part of testing, but not fully representative of testing as a whole. So what do I mean when I say checking, and how is that different from testing?

  • Checking is the act of confirmatory testing, verifying specific facts and outcomes typically by following a script or test case.
  • Testing is much larger and more holistic than that – I define testing as evaluating a product through exploration for the purpose of informing our product stakeholders about risk.
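To make the distinction concrete, here is a minimal sketch in Python of what a “check” looks like. The `apply_discount` function and its expected values are hypothetical, invented only for this example:

```python
def apply_discount(price, percent):
    """Hypothetical production function under test."""
    return round(price * (1 - percent / 100), 2)

def check_ten_percent_discount():
    # A "check": a scripted, machine-decidable verification of ONE
    # specific, anticipated claim -- nothing more.
    return apply_discount(100.00, 10) == 90.00

print(check_ten_percent_discount())
```

A check like this can only confirm the claim its author anticipated. The testing around it is the human work of asking what else could go wrong – a negative percentage, a discount over 100%, rounding at fractional cents – questions this script will never raise on its own.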

Our guiding light, the purpose of testing, is…
“to cast light on the status of the product and its context in the service of our stakeholders” – James Bach

If you are simply taking acceptance criteria/requirements and then writing test cases based on them, you are selling yourself short and doing the product a huge disservice! Much of what we find as testers comes off-script, and high-value unknowns are found by letting humans do what humans do best – be true explorers! In fact, when Michael Bolton asked Brian Kurtz and me in a Rapid Software Testing class to define “What is testing to you?”, this is what we came up with as a combination of our shared mental models…


Since my job as a tester is to inform my client as early as possible about any potential risks that I feel may threaten the value or the on-time, successful completion of the project, I must be a tester, not simply a checker. That kind of answer is a much more compelling and holistic response than simply saying something like “finding bugs” or “breaking things” (which we actually do not do; more on that here: Testers Don’t Break The Software). As testers, we must move the testing craft in a positive direction and get away from doing only claims verification. Claims testing is important, but checking is only one piece of what testing actually is. Stop worrying about ‘green or red’ and instead focus on ‘does a problem exist?’. Ever been driving down the road when smoke starts coming from the hood of your car? You are going to pull over, even if the engine light has not come on. Are you going to keep driving until that light tells you there is a problem? I hope not! I hope you would use your fantastic human brain to make a smart first-order measurement and decide to pull over. You don’t need a red light to tell you there is a problem. Similarly in testing, a red light may mean nothing at all, while a green light may deceive you into thinking there are no problems when in fact there may be – just open the hood and look! (or “bonnet” for my fellow testers across the pond)

So, if I asked you how you tested something, what would your answer be? That you simply used your knowledge, years of experience and some tools? Not compelling enough! I want to hear about Capability, Scalability, Compatibility, Charisma. I want to hear about how your Flow testing varied from your Scenario testing and why those two are different. Tell me about the methods of testing you used. Tell me why the testing done was “good” enough. Tell me what roadblocks inhibited testing, and how you worked around those; or which still stand in your way. Tell me what you did not test – many testers forget to talk about that, leaving stakeholders wondering if they even considered certain items and lowering their confidence in our ability to explore for the risks that matter.

Good testing generally doesn’t come from heavy checklists, test cases or scripts that are followed – anyone can do that. So let’s do real testing, which not everyone can, in fact, do well.

Raise the bar!

Crossposted (original version): uTest – What Is Testing?

A Tester’s Guide To The Galaxy

Abstract: I’ve created a reference card pack that you can use to do better testing, by fostering a team-driven approach to collaborative holistic exposure of high-value product risks.


There are three main ways that we learn: Ingestion (books, blogs, models), Collaboration (conferences, discussions, webinars, meet-ups) and Experimentation (exercises, modeling, day-to-day exploration, etc). Since I recognize there is a myriad of options available to fit your own learning style for the purpose of advancing the testing craft, I’d like to introduce another tool that may help: Tester Reference Cards.


Previously, I presented a new model/framework for testers, A Sprint Framework For Testers. My intention was not that testers use it as a script, but as a model to inform their thinking; however, it does need some rewording, with less emphasis placed on test cases, to more properly represent the context-driven mindset that I actually possess. While deciding how to reword some of those ideas, a new artifact sprang forth in these reference cards. Like the framework, these are not meant to act as scripts to follow, but rather as a guideline for how to go about performing better testing within each stage of your development process. While I believe the framework can provide value, I feel that converting it into an immediately tangible form that can be applied in the moment has even more intrinsic value. In other words, the Sprint Framework had a baby, and this is it!

Reference Cards:



Keep these as reference sheets in digital form or print them out double-sided (duplex) for a physical manifestation that can be shared by various team members. These reference cards can be used to prompt healthier and more holistic discussions in grooming sessions, sprint planning meetings, team retros, etc. They can also be used in groups or individually by programmers, testers, product owners, scrum masters and other internal stakeholders. I’ve provided you with the tool, but it is only as useful as your application of it within your context. Use whichever method you feel adds the most value for your given context and the various learning styles within your team.

A Personal Metric for Self-Improvement

Article revisions: Learning is continuous, thus my understanding of testing and related knowledge is continually augmented. Below is the revision history of this article, along with the latest version.

  • December 31, 2015, Version 1.0: Initial release.
  • March 31, 2016, Version 1.1: Most definitions reworded, multiple paragraph edits and additions, updated Excel sheet to calculate actual team average.
  • July 28, 2016, Version 1.2: Added sub-category calc. averaging (credit: Aaron Seitz) plus minor layout modifications.
  • September 20, 2016, Version 1.3: Replaced/reworded Test Strategy & Planning > Thoroughness with Modeling (verb) & Tools > Modeling with Models (noun).

Abstract: A Personal Metric For Self-Improvement is a learning model meant to be used by anyone – and more specifically, by those within software testing. Many times, self-improvement is intangible and immeasurable in the quantifiable way that we as humans seek to understand. We sometimes use this as an excuse, consciously or subconsciously, to remain stagnant and not improve. Let’s talk about how we can abuse metrics in a positive way by using this private measure. We will seek to quantify that which is only qualifiable, for the purpose of challenging us in the sometimes overlooked areas of self-improvement.
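To sketch the kind of scoring the revision notes above mention (averaging sub-categories first, then categories), here is the idea in Python. The category names and scores below are hypothetical placeholders, not the contents of the actual spreadsheet:

```python
# Hypothetical self-assessment scores (1-5) in a few testing skill areas.
scores = {
    "Test Strategy & Planning": {"Risk Analysis": 3, "Modeling": 2},
    "Tools": {"Automation": 4, "Models": 3},
    "Communication": {"Reporting": 4, "Stakeholder Mgmt": 3},
}

def category_averages(scores):
    # Average each category's sub-scores first, so a category with many
    # sub-skills doesn't dominate the overall number.
    return {cat: sum(subs.values()) / len(subs) for cat, subs in scores.items()}

def overall(scores):
    cats = category_averages(scores)
    return sum(cats.values()) / len(cats)

print(category_averages(scores))
print(round(overall(scores), 2))
```

The output is a private number to track against yourself over time, not a metric to report upward – which is the whole point of the article.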

Video Version for the non-readers 😉


I will kick this post off with a bold statement, and I stand by it: you cannot claim to do good testing if you believe that learning has a glass ceiling. In other words, learning is an endless journey. We cannot put a measurable cap on the amount of learning needed to be a good tester; thus we must continually learn new techniques, embrace new tools and study foreign ideas in order to grow in our craft. The very fact that software can never be bug-free supports this premise. I plan to blog about that later, in a post I am working on regarding mental catalysts. For now, though, let’s turn our attention back to self-improvement. In short, I am saying that since learning is unending, and better testing requires continual variation, the job of self-improvement can never be at an end.

This job can feel a bit intangible and almost like trying to hit a moving target with a reliable repeatable process; therefore, we must be intentional about how we approach self-improvement so we can be successful. Sometimes I hear people talk about setting goals, writing things down or trying to schedule their own improvement through a cadence of book reads, coding classes or tutorial videos perhaps. This is noble, because self-improvement does not simply happen, but many times we jump into the activity of self-improvement before we determine if we’ve first focused on the right space. For example, a tester believes that they must learn how to code to become more valuable to their company, so they immediately dive into Codecademy classes. Did the tester stop to think…

Maybe the company I work for has an incomplete understanding of what constitutes ‘good testing’? After all, the term ‘good’ implies a value statement, but who is the judge? Do they know that testing is both an art and a science? I am required to consider these variables if I want to improve my testing craft. Does my environment encourage a varied toolset for testers, or simply the idea that anyone under the “Engineering” umbrella must ‘learn coding’ in order to add value?

Now, Agile (big “A”) encourages cross-functional teams, while I encourage “cross-functional teams to the extent that it makes sense”. At the end of the day, I still want a team of specialists working on my code, not a group of individuals that are slightly good at many things. Now, is there value to some testers learning to code? Yes, and here is a viewpoint with which I wholeheartedly agree. However, the point here, as it relates to self-improvement, is that a certain level of critical thinking is required in order to engage System 2, before this level of introspection can even take place. If this does not happen, then the tester may now be focused on an unwarranted self-improvement endeavor that may be beneficial, but is not for the intentional purpose of ‘better testing’.

So, why create a metric?

This might be a wake-up call to some, but your manager is not in charge of your learning; you are. Others in the community have created guides and categories for self-improvement, such as James Bach’s Tester’s Syllabus, which is an excellent way to steer your own self-improvement. For example, I use his syllabus as a guide and rate myself 0 through 4 on each branch, where a zero is a topic in which I am unconsciously incompetent, and a four is a space in which I am consciously or perhaps unconsciously competent (see this Wikipedia article if you need clarification of those terms). I then compare my weak areas to the type of testing I do on a regular basis to determine where the major risk gaps are in my knowledge. If I am ever hesitant about rating myself higher or lower on a given space, I opt for the lower number. This keeps me from over-estimating my abilities in a certain area, as well as helps me to stay intellectually humble on that topic. This self-underestimation tactic is something I learned from Brian Kurtz, one of my mentors.

The Metric

The personal self-improvement metric I have devised is meant to be used in a private setting. For example, these numbers would ideally not roll up to management as a way of evaluating if you are a good or bad tester. These categories and ratings are simply created to give you a mental prompt in the areas you may need to work on, especially if you are in a team environment, as that requires honing soft skills too. However, you may have noticed that I have completely abused metrics here by measuring qualitative elements using quantitative means. This is usually how metrics are abused for more nefarious purposes, such as being used to influence groups of decision makers to take unwarranted actions. However, I am OK with abusing metrics in this case, since it is for my own personal and private self-improvement. Even though the number ratings are subjective, they mean something to me, and I can use these surrogate measures to continually tweak my approach to learning.

My main categories are as follows: Testing Mindset, Leadership, Test Strategy & Planning, Self-Improvement, Tools & Automation and Intangibles. To an extent, all of these have a level of intangibility, as we’re trying to create a metric by applying a number (quantitative) to an item that can only accurately be described in qualitative (non-numeric) terms. However, since this is intended for personal and private purposes, the social ramifications of assigning a number to these categories are negligible. The audience is one, myself, rather than hundreds or thousands across an entire division. Below is the resulting artifact that is created, but you can download the Excel file as a template to use for yourself, as this contains the data, glossary of terms, sample tester ratings, sample team aggregate, etc.


Click here to download the current Microsoft Excel version

Application & Terms

Typically, you can use this for yourself or, if you manage a team of testers, privately with them. I would never share one tester’s radar graph with another, as that would defeat the purpose of having a private metric that can be used for self-improvement. The social aspects of this can be minimized in an environment where a shared sense of maturity and respect exists. You can also find the following terms and definitions in the “Glossary” tab of the referenced Excel sheet:

Testing Mindset:

  • Logic Process: ability to reason through problems in a way that uses critical thinking skills to avoid getting fooled.
  • User Advocacy: ability to put on the user hat, albeit biased, and test using various established consumer personas and scenarios (typically provided by Product Management), apart from the acceptance/expected pathways.
  • Curiosity: ability to become engaged with the product in a way that can and does intentionally supersede the intended purpose as guided by perceived customer desires (i.e. Like a kitten would with a new toy, yet also able to focus that interest toward high-value areas and likely risks within the product).
  • Technical Acumen: ability to explain to others, with the appropriate vocabulary, what kind of testing has been, is or is going to be completed or not completed.
  • Tenacity: ability to remain persistently engaged in testing the product, continually seeking risks related to the item under test.

Leadership:

  • Mentorship: ability to recognize areas of weakness within the larger team and train others accordingly to address these gaps.
  • Subject Matter Expertise: ability to become knowledgeable in both the product and practice of testing for the purposes of supporting both the stakeholder’s desires as well as capability of supplementing the information needs of other team members.
  • Team Awareness: ability to get and stay in touch with the two main wavelengths of the team, personal and technical, in order to adjust actions to alleviate testing roadblocks.
  • Interpersonal Skills: ability to work well with others on the immediate or larger teams in such a way that facilitates positive communication and allows for more effective testing, including the ability to convey product risks in a way that is appropriate.
  • Reliability: ability to cope through challenges, lead by example based on previous experiences and champion punctuality as well as support a consistent ongoing telling of the testing story to Product Management.

Test Strategy & Planning:

  • Attention to Detail: ability to create adequately detailed test strategies that satisfy the requirements of the stakeholders and the team.
  • Modeling: ability to convert your process into translatable artifacts, using continually evolving mental models to address risk and increase team confidence in the testing endeavor.
  • Three-Part Testing Story: ability to speak competently on the product status, the testing method and the quality of the testing that was completed for the given item under test.
  • Value-Add Testing Artifacts: ability to create testing artifacts (outlines, mind-maps, etc) that can be used throughout the overlapping development and testing phases, as well as support your testing story in your absence.
  • Risk Assessment: ability to use wisdom, which is the combination of knowledge, experience and discernment, to determine where important product risks are within the given item under test.

Self-Improvement:

  • Desire: ability to maintain an internal motivator that brings passion into the art of testing, for the purpose of supporting all other abilities.
  • Quality Theory: ability to support a test strategy with an adequate sum of explicit and tacit knowledge through the use of a varied tool belt: models, apps, techniques, etc, as well as maintaining a strong understanding of a tester’s role within the development lifecycle.
  • Testing Community: ability to engage with both the internal and external testing communities in a way that displays intellectual humility to the extent that it is required to share new ideas, challenge existing ones, and move testing forward.
  • Product Knowledge: ability to become a subject matter expert in your team’s area of focus such that you can better expose risk and provide value to product management.
  • Cross-Functionality: ability to learn and absorb skills from outside a traditional subset of standards-based/factory-style testing, such that you can use these new skills to enhance the team’s collective testing effort.

Tools & Automation:

  • Data: ability to interact with multiple types and subsets of data related to the product domain, such that testing can become a more effective way of exposing important risks, be it via traditional or non-traditional structures.
  • Scripting: ability to use some form of scripting as a part of the test strategy, when appropriate, to assist with learning about risks and informing beyond a traditional tool-less/primarily human-only approach to the testing effort, so that the testing completed is more robust in nature.
  • Programming: ability to write code in order to establish a deeper understanding of a product’s inner workings, to gain insight into why and how data is represented in a product, as well as close the gap between tester and developer perspectives.
  • Exploratory-Supplement: ability to embrace tools that can enhance the effectiveness of testing, allowing for a decrease in traditional administrative overhead.
  • Models: ability to embrace new ways of thinking, including explicit testing models that are made available in the course of work, or via the larger community. Appropriate contextual models help to challenge existing biases, decrease the risk gap, and reshape our own mental paradigms for the purpose of adding value to the testing effort.

Intangibles:

  • Communication & Diplomacy: ability to discuss engineering and testing problems in such a way that guides the team toward action items that are in the best interests of the stakeholders, without overpowering or harming team relationships.
  • Ability to Negotiate: ability to prioritize risks that pose a threat to perceived client desires, such that the interaction with product management allows for informing over gatekeeping and risk exposure over risk mitigation in the service of our clients.
  • Self-Starter: ability to push in avenues of learning for the sake of improving the testing craft without the need for external coaxing or management’s intervention. Ideally, this would be fueled by an ongoing discontent at the presence of unknown risks and gaps in learning.
  • Confidence: ability to display conviction in the execution of various test strategies, strategies that hold up to scrutiny when presented to the larger stakeholder audience for the purpose of informing product management.
  • Maturity & Selflessness: ability to distance one’s self from the product in a way that allows for informing stakeholders and the team with proper respect. This is done in a way that distances us from the act of gatekeeping by ensuring that our endeavor of serving the client supersedes our own agendas for the product.

The practical application of this is triggered when testers become introspective and self-critical on the areas mentioned within the spreadsheet. This can only be done when each area is studied in depth. I recommend that testers do an initial evaluation by rating themselves loosely on each category and subcategory, using the Glossary as a reference. These are my own guideline definitions that I’ve given to each term, on which you can rate yourself using a 0-4 scale. Your definitions of these words may be different, so treat these as my own. This calculation is of course a surrogate measure, and meant only to be used as a rough estimate to determine areas for improvement. Once the areas of improvement that need the most attention have been identified (i.e. the lowest numbers that matter most to your team or project), the tester would then seek out resources to assist with those areas: tutorial videos, books, online exercises, peer-mentorship, and others. Don’t forget to reach out to both your company’s internal testing community as well as those who live in the online and testing conference space.
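
As a rough sketch of the roll-up described above: the category and sub-category names below mirror the spreadsheet, but the 0-4 ratings are invented sample data, not real evaluations, and the averaging logic is simply my reading of the "sub-category calc. averaging" noted in the revision history.

```python
# Sketch of the self-rating roll-up: average each category's sub-ratings,
# then surface the weakest categories first. Ratings are invented samples.
ratings = {
    "Testing Mindset": {"Logic Process": 3, "User Advocacy": 2,
                        "Curiosity": 4, "Technical Acumen": 2, "Tenacity": 3},
    "Tools & Automation": {"Data": 2, "Scripting": 1, "Programming": 1,
                           "Exploratory-Supplement": 3, "Models": 2},
}

# Average each category, then sort ascending so weak areas surface first.
averages = {cat: sum(subs.values()) / len(subs) for cat, subs in ratings.items()}
for cat, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{cat}: {avg:.1f}")
# The lowest-scoring categories become candidates for focused self-improvement.
```

The point is not the arithmetic itself but the prompt it produces: the sorted output hands you a starting order for the "seek out resources" step.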


Please remember, this metric is by no means a silver bullet and these areas of focus are not meant to be used as a checklist, but rather a guideline to help testers determine areas of weakness of which you may not be currently aware. Many times, we do not realize an area of weakness or our own biases, until someone else points that out to us. I have found that a documented fashion such as this can help me recognize my own gaps. As stated previously, this is most useful when applied privately, or between a tester and their manager in a one-on-one setting. This is only a surrogate measure that attempts to quantify that which is only qualifiable. Putting numbers on these traits is extremely subjective and for the purpose of catalyzing your own introspection. It is my hope that this helps give testers a guide for self-improvement in collectively advancing the testing craft.

A Sprint Framework For Testers


Click image to enlarge. Click here to download the X-Mind file.

UPDATE 2020-04-02: Added the link to the HASM model that gives ideas on how to create automation strategies (you will need X-Mind software to open this file type). Use it in conjunction with other models like HTSM for a holistic testing approach. It’s not an ‘either/or’ choice.

Abstract: A Sprint Framework For Testers is a brief outline of my suggested processes and practices employed by a Tester that resides within a software development scrum team, in an Agile environment. I have created this document with web-based product software teams in mind, but these practices and recommendations are not necessarily tied to a specific type of software, tester, platform or development environment. I say this simply to give you context into the formative process of this framework, but I believe these ideas have been generalized in a way that should be beneficial across many types of software testing. Having the ability to execute much of this relies on working within a healthy engineering culture, but Testers should also be intentionally employing practices like this to improve their own culture; and hopefully this sprint framework for testers can help with that.

Note: After a recent discussion on Twitter I decided to add this note. This model is in no way meant to be a prescriptive mandate on how to run your sprint, but rather a guide to help prime your thinking as you move through the various stages. Also, test cases may or may not fit into your current paradigm. If they do not, then be sure you have good reasons for that. Some are under the impression that being ‘context-driven’ means being anti-test cases, which is a fallacy. Writing scripted test cases requires a great amount of skill and may be necessary in your context, as I have found it in mine.

  1. Sprint Grooming
    1. Smaller Group: In the interests of efficient time usage, this should be composed of a small group as this part of the process does not require the input of the entire team. A single Developer, Tester and Product Owner would be sufficient, or whichever small group is composed of team members with the most product knowledge and people who will be doing the hands-on work. Two Developers may be required, if there is a large reach in the work being done between both backend and UI. It should be the exception, not the rule, that the whole team would need to be involved in the continual backlog grooming process.
    2. Use models (HTSM, RCRCRC or other Testing Mnemonics) to inform your thinking and team’s awareness of the potential vastness of acceptance criteria considerations.
    3. Models as Litmus Tests (for Story Acceptance):
      1. Using just a smaller part from an existing model (HTSM > Quality Criteria) can many times serve as a litmus test for which stories to bring into the sprint. Of course, business priorities and product management usually serve this role, ideally before it hits the team, but if they were more informed about the various considerations that need to be covered in the development process (Capability, Scalability, Charisma, etc.) then they may have prioritized stories differently. Use models from a high-level in this session to educate your Product Owners, Developers and other Testers on what it really means to accept a story.
  2. Sprint Planning
    1. Larger Group: At this point, it makes sense to have the whole team involved in planning. Now, it is debatable whether setting quantifiable estimates on user stories is a good or a bad thing, but in a general sense we can at least agree that having the full team in this session is beneficial from a knowledge standpoint when evaluating workload.
    2. Continue to use models to inform your team so that more solid estimates can be made. Remember, test models can be used to increase awareness for everyone, not just testers, providing more insight into potential product risks to the client.
      1. E.G. Bring up the HTSM > Quality Criteria page and have the developers actually discuss Usability, Scalability, Compatibility, etc. for a given story. I guarantee that it is impossible just to go through this one node of HTSM without it informing your team members’ thinking on development considerations and product risks.
    3. Decide (pre-development) which story/stories will be candidates for Shake ‘N’ Bake (Dev/QA pair-testing process) and then execute them when the time comes.
  3. Day 1 (of Sprint)
    1. Test Strategy creation via collaboration (with other team member(s) and time-boxed per story):
      1. Create the test strategy (not test cases yet) using a model as a template with the other team members (testers, devs, POs, etc) in a time-boxed session. You’ll have to decide what amount of time is reasonable for small, medium and large stories, but typically this is between 30 minutes and 2 hours.
      2. During this collaboration, I am seeking approval for the test direction I am headed, by evaluating cues from the other team member(s). I do not go into this thinking I know all the risks or proper priorities, otherwise the session is useless. The resident SME (Subject-Matter Expert) for a given story should see test strategies before they are turned into test cases.
      3. Good test strategies do not only explain what we are testing, but also what we are not testing, or cannot be tested by you, the tester.
        1. E.G. Load Testing on a given story might require someone who could write automation checks, but perhaps we do not have that resource available on the team or for the given timeline, so we intentionally make a note of that as a potential risk/gap in our test strategy.
        2. Coverage Reminder: Part of your test strategy involves telling stakeholders what you did and did not test, so be sure that is noted somewhere in your model/test suite creation.
      4. Time-Box:
        1. We time-box our test strategy creation session so that we can get the most bang for our buck and mitigate time constraints. Many times testers complain about not having enough time to test, but that is because they are simply trying to execute their entire set of test cases without having first created a prioritized test strategy.
        2. Now, in the interest of time management for the sake of the team, we probably cannot spend a whole day filling out the HTSM for one story, so if I have 5 stories, I might dedicate 1 to 1.5 hours to each story. You will need to decide what amount of time can be allotted per story based on your own team/testing capacity.
    2. Test Cases:
      1. Begin writing test plans/cases based on collaborative strategy (if you write your strategy correctly, then you should not have to recreate a lot of the foundation work during the test writing process – copy/paste is your friend)
      2. Automation Reminder: Be sure, early on in the sprint, ideally before the end of Day 1, to decide what can and cannot be automated. This will help prevent you from duplicating effort, or doing manual work in places where automation makes more sense.
        1. NOTE: Automation may not be in your skill-set if you are a manual tester, but it should still be something of which you are aware and can help prioritize. This requires an automation strategy though – check out our HASM model that deals exclusively with creation of automation strategies (You will need X-Mind software to open this file type)
  4. Day 2
    1. By this point you should have already finalized or be finalizing your test strategies for any remaining stories.
      1. Continue to seek strategy approval from other team members, or SMEs outside of your team if others may have worked on the feature or something similar recently.
    2. Continue writing your test cases, making sure both they and your strategies are visible to all stakeholders, both in and out of the team (via tool, e.g. Jira, Rally, etc.)
  5. Day 3+
    1. Continue test case creation, mitigating time management concerns as dev complete approaches (be aware that this, or the Shake ‘N’ Bake stories may be ready)
    2. Poll The Team (In-Sprint)
      1. Overview:  Ask the team members what they are currently struggling with and find out new information they have gathered since your sprint planning meeting. Typically this is the time when assumptions begin and simply asking around can nip these in the bud.
      2. Developers: What roadblocks are you experiencing? What new information have you found since our planning session?
      3. Product Owners: How is the customer feeling? What new priorities have come in? Have there been any shifts in the customer’s thinking that might affect current sprint items?
      4. Scrum Masters: Is there anything I am doing that might be causing friction? Do you notice any personality conflicts or roadblocks that I can help keep an eye on/mitigate?
  6. At Dev-Complete
    1. Execute Shake ‘N’ Bake on-demand when dev says the previously-decided story is complete
      1. Perform pair-testing process on developer’s box with them, before they make their code commit.
      2. Note: Shake ‘N’ Bake does not take the place of the normal testing process within your sprint. It is done in addition to the testing process.
    2. Execute normal testing process for stories per Testing Process (see next section)
  7. Testing Process:
    1. Assign story to yourself (via Sprint Tracking software and/or Scrum Board)
    2. Notify team which story you are starting to test (sometimes this notifies other team members to speak up about something they have been keeping in their head, perhaps that they had not made a note on yet in the story/case)
    3. Verify Dev-Task Complete (Pre-Testing): Are unit tests complete and passing? If not, have discussion with Developer who worked on the story as this should be complete before the testing process begins.
    4. Execute test cases for a given story in your Dev/Team branch environment
      1. Do not test on the Developer’s machine via IP unless you are doing pre-code commit testing earlier on in the sprint. You should have an initial environment where all code commits live for testing.
    5. Log Dev tasks for any issues found, as you go.
      1. Do not wait until the end of your test run to log the sum of tasks. Many times, Devs can fix items as you test, without invalidating your current testing.
      2. Assign tasks back to the specific Dev who worked the item or make a comment in the team room about it (at team’s discretion, depending on existing workflow)
  8. Story Ready-For-Release or Production-Ready
    1. Verify DoD (Definition of Done) Completion: At this point, the Tester needs to close the loop on any other areas that the team has specified in their DoD
      1. This can include: Test Peer Review, Code Review, Unit Test (code coverage %?), Documentation, Automation (in sprint, or delayed cadence?), Remaining task hours zeroed out, n-Sprints supported in productions, Manually Testing, Owner Review, Product Review, Demo, etc.
    2. SME Review: After testing is complete (Devs have completed all tasks and they have been retested) I would ask the subject-matter expert for the story to take a look at it, within a self-imposed time window.
      1. E.G.: Setting Expectations – If I finish testing on a Wednesday, I would say to the PO, “Testing is complete on this story. Please review the functionality by end of day Thursday and let me know if you have any concerns; otherwise I will mark this story as ‘Ready for Release’.”
        1. This may necessitate an “Owner Review” column in your sprint tracking tool (post-Testing but pre-Ready For Release) that would be managed by SMEs (the PO in this case, but this could and probably should have rotating ownership as the SME chosen for a given story should be the one most qualified, not necessarily the PO).
  9. Release Prep & Planning
    1. Attend pre-release meeting (formal or otherwise) to verify that all items that are in the “Ready to Release” state have been through the proper channels (outlined above, and per Team’s DoD).
    2. Clearly communicate post-release coverage (i.e. List of those who will be present directly after the release for any nighttime or daytime releases)
    3. Verify that release items to be tested have been marked (via your tracking tool: Jira, Rally, Release Checklist, etc.)
      1. Targeting: Ideally you reach a point in your continuous delivery process where you trust your deployments enough that production-time checking/testing of all release items is not required. You should be targeting the high-risk/major-shift elements for production testing during your releases.
      2. Prioritization: This requires prioritization during the sprint of which items are high risk/high impact rather than trying to do this all at once at release time.
      3. Time Window: Items to be tested should be based on business priority of course, but evaluate release window time vs. amount of time needed to test items cumulatively.
        1. Time-to-Stories Ratio – In other words, if I have 12 stories and each takes 10 minutes to test, that is 2 hours of testing. However, our release window is 1 hour, so we should evaluate which stories need to bubble up to the top as our highest-risk items that merit production-time testing.
      4. Establish Reversion Hypotheticals for each story (these should be in place before the release starts, not created on the fly during the release when they occur)
        1. Structure: If ‘x’ happens, then ‘y’ are the risks to the customer, so we recommend reverting code commits related to story ‘z’.
          1. E.G. If the credit application will not submit in production, then lower conversion rates and lost financing revenue are the risks to the customer, so we recommend reverting code commits related to story #4567.
        2. Stories can have one or multiple reversion hypotheticals, depending on their complexity.
  10. Release & PVT Testing
    1. PVT (Production Validation Testing): This type of testing is done on the product in the production environment to verify that it meets all functional and cosmetic requirements.
    2. Test new development: High risk/priority items only (per release checklist created earlier)
    3. Perform basic smoke test (acceptance spot-checking) of related product areas, previous high-risk items, etc.
    4. Execute roll-back (if any hypothetical scenarios are satisfied), after discussions with the team/Product Owner:
      1. It is the tester’s job to inform product management about risks caused by a given release, but at the end of the day we are NOT the gatekeepers. Other SMEs and management will have a higher view of what is best for the business from a risk-mitigation perspective, so we can give our recommendation not to release something, but ultimately the go/no-go decision must come from product management.
  11. Post-Deployment & Monitoring
    1. This takes place within hours of the release/deploy, or during Day 1 of the following sprint.
    2. Performance systems (Splunk, NewRelic, etc.)
      1. Are there any new or unusual trends?
    3. Support Queue
      1. Are we noticing duplicate requests coming in from support teams?
    4. Team-level transparency on this can be hard, so this may require team ownership, not simply just the Tester.
  12. Release Retro
    1. Are you prepared to tell a compelling story about any caveats/prod defects that were found in the release?
      1. Where a “compelling story” means defining the test strategy, including what was and was not tested. You should already have this created from earlier in the sprint process for each story, so minimal or no additional prep is needed.
    2. Is your attitude constructive rather than combative?
      1. Are you a listener and fixer or just a blamer?
      2. This includes being mindful of your speech: Your intention should be to make developers look good, by supporting their work with your testing. Be sure to compliment the solid work, before pointing out the faulty work.
  13. Team Retro
    1. Actionable Ideas: Arrive to the meeting with ideas on what can be modified (stop doing, start doing, do more, do less, etc.)
    2. Be very vocal in the team retros, but at the same time do it with tact and diplomacy.
    3. Poll The Team (Post Sprint):
      1. Overview: Ask the team members what they need from you, keeping in mind their context within the larger company. A Developer may ask you to be clearer about what you plan to test, while a Product Owner may want you to become more of an SME (Subject Matter Expert) in a given area.
      2. Developers: What more are you wanting out of me, your Tester? 
      3. Product Owners: What can I, your Tester, do to help make your job easier?
      4. Scrum Masters: Is there anything you are not getting from me, your Tester, that you need in order to increase team cohesion and efficiency?

As a professional skeptic and keen observer of human nature, it is incumbent upon me to request and consider feedback from the community on this work. My goal is to give something to Testers that they can immediately apply; however, given the various contexts in which each of us work, it would be foolhardy to think that this framework could apply exactly to any situation. Instead, I encourage you to treat this as a guideline to help improve your day-to-day process, and focus on the parts that help fill your current gaps. Please leave a comment, and let’s start a dialog, as I would appreciate your insight into which parts are most meaningful and provide the greatest value-add in your own situation.
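
The release-prep arithmetic above (the Time-to-Stories Ratio in step 9) can be sketched as a simple prioritization. This is a minimal illustration, not part of the original framework; the story numbers, risk scores, and test durations are invented for the example.

```python
# Sketch of the "Time-to-Stories Ratio": pick the highest-risk stories
# whose cumulative test time fits inside the release window.
# Story IDs, risk scores, and durations are invented examples.
stories = [
    {"id": 4567, "risk": 9, "test_minutes": 10},
    {"id": 4568, "risk": 3, "test_minutes": 10},
    {"id": 4569, "risk": 7, "test_minutes": 10},
    {"id": 4570, "risk": 5, "test_minutes": 10},
]

def select_for_release_window(stories, window_minutes):
    """Greedily select stories by descending risk until the window is full."""
    selected, used = [], 0
    for story in sorted(stories, key=lambda s: s["risk"], reverse=True):
        if used + story["test_minutes"] <= window_minutes:
            selected.append(story["id"])
            used += story["test_minutes"]
    return selected

# With a 30-minute window, only the three riskiest stories make the cut.
print(select_for_release_window(stories, 30))  # → [4567, 4569, 4570]
```

In practice the risk scores would come from the in-sprint prioritization the framework recommends, not from a spreadsheet at release time; the code only formalizes the "window time vs. cumulative test time" comparison.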

Time Trial Testing Episode 2: Risk Heuristics

In this episode of Time Trial Testing, Brian Kurtz and I time-boxed ourselves to a 45-minute session to perform risk assessment of the X-Mind product. We used a heuristic-based risk analysis model to take a look at the UX/UI of this mind-mapping product. See Time Trial Testing – Episode 1: SFDIPOT Model for more details on how ‘Time Trial Testing’ sessions are meant to work.

  • Model: Risk Analysis Heuristics (for Digital Products) – by James Bach and Michael Bolton
    • Note: We limited our scope to only two of the sub-nodes.
      • Project Factors: I approached this from the perspective of a tester on an internal development team.
      • Technology Factors: Brian approached this from the perspective of an external tester, outside of the company.
  • Session Charter: UX/UI Product Risk Analysis
  • Product: X-Mind
  • Time: 45-minutes
  • Artifact (See image below or X-Mind file)

Click Image to Enlarge

Brian’s Observations (Technology Factors):

  • Conscious competence is alive and well. Using something that you have not used in a while or in a specific context takes effort. Sometimes it can be a downright struggle.
  • In this time trial we started with a mission: find risk to the UX and the UI. Still, I think next time it needs to be more focused, given the 45-minute window we are giving ourselves. Maybe risk to the UX and UI on the menu bar or icon/toolbar.
  • Every time I use a model I am reminded again how beneficial the results are to me after it is over. They always help me think about aspects of “something” that I would not have thought of on my own. I can always see the value afterwards.
  • I have only had to evaluate a third-party application for purchase a few times. These time trials remind me what a daunting task it is to evaluate something as an outsider.
  • Although each of these time trials has produced a mind map that illustrates the value of just 45 minutes, it would be nice to take one to a more complete “state” to really illustrate what a more finished strategy would look like.
  • I would remind people, when you are creating these kinds of artifacts, that it’s ok not to know all the answers, because asking questions and having dialogue with stakeholders who do know is what this is all about. Asking questions and picking others’ brains is a huge part of the learning process.

Connor’s Observations (Project Factors):

  • Not Yet Tested: This was actually my highest-priority item, so I am moving it to the top of this list, in the event that you get distracted and stop reading. Areas that have not yet been tested are likely to contain new bugs that we’ve never seen before, so they have the potential to take longer to fix than familiar buggy areas. Also, these areas of the code typically have only one or two subject matter experts: the developer(s) that created them. The Product Owner and the Tester have no knowledge of how this area of the product was actually developed (post-requirements, post-planning, etc.), so during these times, brain-dumps from the creator, the original developer, are key. In our case, a UI Developer knows how and why the product is made the way it is, and what caveats there may be. Having this discussion up front with the developers, before diving into testing, will greatly increase your effectiveness at creating a more thorough test strategy and uncovering potential product risks. In these cases especially, we need to make sure we do not silo ourselves as testers under the guise of simply ‘needing to get the work done’. I have had many pre-test discussions that drastically changed the type and amount of time I planned to spend testing a given area, making me more efficient in the endeavor.
  • Learning Curve: This node forced me to consider the biases of the team, and how their existing knowledge of UX/UI from previous projects or workplaces might positively or negatively influence the creation of a mind-mapping product. For example, if one of the UI Developers used to work in a vastly different industry with different customer needs (e.g. Medical Device Software), then this person may consciously or subconsciously project those former needs onto his new user group, even when the demographics are worlds apart.
  • Poor Control: This was a good reminder about making sure we control what we can, and not spending a lot of time trying to influence external factors. Do we have a solid DoD (Definition of Done)? Are we doing code reviews? Are the right people doing code reviews? Are we working from customer-approved mock-ups or are we just hoping that the UX/UI work is desirable? Are UX/UI Architects outside of the immediate team involved or are we just winging it with our limited knowledge?
  • Rushed Work: Every development team in the history of software development has struggled with time management. Either development completes late in the sprint, so testers then have to rush, or product management sets hard-date deadlines in the mind of the customer, and the team has to release whatever it has rather than move toward a healthier ‘release when ready’ model. Perhaps estimates are created without UX/UI mock-ups, which then arrive mid-sprint and turn the original estimate completely on its head. Sometimes teams have good intentions, and simply do not intentionally think about how best to manage and apportion their time. We need this to be one of the first things we think about, not the last.
  • Fatigue & Distributed Team: Before using this heuristic, I had (for some reason) always separated the fluid attributes of the workplace from the actual work that gets done and pushed out in releases. I had never considered the team being tired or distributed as a “product risk” per se. Since I was always comfortable with the deliverable being molded a hundred times along the way (Agile, not Waterfall), then whatever we got done, we got done, no matter how we felt along the way, and that would be accepted as our deliverable. I saw it as a performance risk to team operations rather than to the content of the product. While remote communication can sometimes spawn assumptions and miscommunication, I always felt like resolution in the 11th hour could handle any of these concerns. However, using this model made me realize that this paradigm I had operated under was in fact the symptom of working in a blessed environment. I only thought this way because I’ve mostly worked with teams that were able to resolve major risks pre-release, or at least know about them and push intentionally. I feel that if I had more experience working in an environment with only remote teams (e.g. offshore), or less knowledgeable folks, then I may have had this realization sooner.
  • Overfamiliarity: I think this is most easily noticed when we hire new people or bring others into an already well-oiled machine. These new perspectives can help expose areas to which the current development team(s) have become jaded. We should think about this with long-running project teams especially. Perhaps shifting work from team to team is beneficial from time to time. Sure, Team A will not know what Team B is doing, and the velocity might slow down for a little while, but swapping teams’ work has many other upsides that I think are worth the time investment. If you cannot do that, then bring in external team members for a week and let them act as product, code and quality consultants. As it relates to our charter, perhaps they will see obvious avenues of UX improvement that you have just become used to. Remember, the barometer for good UX is how much user frustration is caused. How many times do new hires join the team and say, “Why does it work this way? That’s unintuitive.” to which we reply, “Oh, it is just like that, here’s the workaround…” In these situations we are part of the problem, not the solution. We are increasing product risk by ignoring the advice that comes from a fresh set of eyes simply because we have ‘gotten used to it’. Shame on us (us = team + product management, not simply testers).
  • Third-Party Contributions: You can decrease UX/UI product risks by limiting your dependency on 3rd-party technology. It typically requires a spike (development/technology research sprint, or two) to make such a determination, but if you can ‘roll-your-own’ tech that gives you exactly what the customer wants, and removes dependencies (and thus risks), then I would encourage product management to consider doing it, even if it takes twice as long (given the customer has been trained to accept a ‘release when ready’ development model).
  • Bad Tools: The Scrum Master should be in constant communication with the developers and testers on the team (and vice versa) in order to alleviate these kinds of concerns. A good Scrum Master does not need technical knowledge to help facilitate technology changes.
  • Expense of Fixes: First, let’s dispense with the following statement, “The later bugs are found, the more expensive it is to fix them.” Not necessarily. This statement does not contain any safety language (Epistemic Modality) or take into account context. This statement has been used historically to point fingers or use fear to motivate naive development teams, both despicable tactics. A better statement would be, “Depending on customer priorities and product priorities, bugs found later in the development process might be more expensive to fix, depending on their context.” E.G. What if we find a typo an hour before release? That’s a five minute fix that is not expensive. Now, if you have a broken development process that requires you to spend hours rebuilding a release candidate package, then sure, it might be expensive, but let’s be careful not to correlate unrelated problems and symptoms from two disparate systems.
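To make project-factor observations like these actionable, one option is to turn them into a rough prioritization before a retro or planning discussion. Below is a minimal Python sketch of a likelihood-times-impact scoring pass; the risk areas and the 1-5 scores are invented purely for illustration and are not part of the Bach/Bolton model itself.

```python
# Rough likelihood x impact scoring over project-factor risks.
# Areas and scores are hypothetical; in practice they come out of
# conversations with developers and product owners.
risk_areas = {
    "not-yet-tested UI code": (4, 5),            # (likelihood, impact)
    "learning-curve biases": (3, 3),
    "rushed end-of-sprint work": (4, 4),
    "distributed-team miscommunication": (2, 4),
    "overfamiliarity blind spots": (3, 2),
}

# Sort by likelihood * impact, highest first: talk about that one first.
prioritized = sorted(risk_areas.items(),
                     key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for area, (likelihood, impact) in prioritized:
    print(f"{likelihood * impact:>2}  {area}")
```

The numbers only exist to force a conversation about ordering; the dialogue with stakeholders matters more than the arithmetic.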


Many testers do not even consider using some form of risk heuristics, mainly for two reasons: it is outside of their explicit knowledge, or they do not see value in it, usually because they have never tried to do risk assessment in a serious manner. Acceptance criteria are the tip of the iceberg, so don’t be the tester who stops there. What are your thoughts on this? Have you tried using this Risk Analysis Heuristics (for Digital Products) before, or used something similar? Do you even see value in risk analysis? Why or why not? What are your other takeaways? I encourage all Testers to do this same exercise for themselves. Reading through the model vs. actually using it provided greatly different experiences for me. In reading it, I found some nice ideas that sounded correct and good, but it was in its use that I found applicable value to what I do as a tester, and I am now compelled to use it again; a feeling I never would have experienced had I only read through it.

This blog post was coauthored with Brian Kurtz.

CAST 2015: Distilled

Brian Kurtz and I recently traveled to Grand Rapids, Michigan to attend CAST 2015, a testing conference put on by AST and other members of the Context-Driven Testing (CDT) community. I was rewarded in a myriad of ways such as new ideas, enhanced learning sessions, fresh models, etc, but the most rewarding experience from the conference lies in the people and connections made. The entire CDT community currently lives on Twitter, so if you are new to testing or not involved in social media, I would recommend that you begin there. If you are looking for a starting point, check out my Twitter page here, Connor Roberts – Twitter, and look at the people I am following to get a good idea of who some of the active thought leaders are in testing. This community does a good job on Twitter of actually keeping the information flow clean and in general only shares value-add information. In keeping with that endeavor, it is my intention with this post to share the shining bits and pieces that came out of each session I attended. I hope this is a welcome respite from the normal process of learning that involves hours of panning for gold in the riverbanks, only to reveal small shining flakes from time to time.

Keep in mind, this is only a summary of my biased experience, since the notes I take mainly focus on what I feel was valuable and important to me based on what I currently know or do not know about the sessions I attended at the conference. My own notes and ideas are also mixed in with the content from the sessions, as the speaker may have been triggering thoughts in my head as they progressed. I did not keep track or delineate which are their thoughts and which are my own as I took notes.

It is also very likely that I did not document some points that others might feel are valuable, as the way I garner information is different than how they would. Overall, the heuristic that Brian and I used was to treat any of the non-live sessions as a priority since we knew the live sessions would be recorded and posted to the AST YouTube page after the conference. There are many other conferences that are worthwhile to attend, like STPCon, STAR East/West, etc. and I encourage testers to check them out as well.


Pre-Conference Workshop:

“Testing Fundamentals for Experienced Testers” by Robert Sabourin

Email: [email protected]

Slide Deck:

Session Notes:

  • Conspicuous Bugs – Sometimes we want users to know about a problem.
    • E.G. A blood pressure cuff is malfunctioning so we want the doctor to know there is an error and they should use another method.
  • Bug Sampling: Find a way to sample a population of bugs, in order to tell a better story about the whole.
    • E.G. Take a look at the last 200 defects we fixed, and categorize them, in order to get an idea where product management believes our business priorities are.
  • Dijkstra’s Principle: “Program testing can be used to show the presence of bugs but not their absence.”
    • E.G. We should never say to a stakeholder, “This feature is bug-free”, but we can say “This feature has been tested in conjunction with product management to address the highest product risks.”
  • “The goal is to reach an acceptable level of risk. At that point, quality is automatically good enough.” – James Bach
  • Three Quality Principles: Durable, Utilitarian, Beautiful
    • Based on Vitruvius’s De architectura (a treatise on architecture and design still referenced today)
  • Move away from centralized system testing, toward decentralized testing
    • E.G. Facebook – Pushed new timeline to New Zealand for a month before releasing it to the world
  • Talked about SBTM (Session Based Test Management): Timebox yourself to 60 minutes, determine what you have learned, then perform subsequent sessions by iterating on the previous data collected. In other words, use what you learn in each timeboxed session to make the next timeboxed session more successful.
  • Use visual models to help explain what you mean. Humans can interpret images much quicker than they can read paragraphs of text. Used a mind map as an example.
    • E.G. HTSM with subcategories and priorities
  • Try to come up with constructive, rather than destructive, conversational models when speaking with your team/stakeholders.
    • E.G. Destructive: “The acceptance criteria is not complete so we can’t estimate it”
    • E.G. Constructive: “Here’s a model I use [show HTSM] when I test features. Is there anything from this model that might help us make this acceptance criteria more complete?”
  • Problem solving: We all like to think we’re excellent problem solvers, but we’re really only ever good problem solvers in a couple of areas. Remember, your problem solving skill is linked to your experience. If your experience is shallow, your problem solving skill will lack variety.
  • Heuristics (first known use 1887): Book “How To Solve It” by George Pólya.
  • Be visual (models, mind maps, decisions charts)
  • If you don’t know the answer then take a guess. Use your knowledge to determine how wrong the first guess was, and make a better one. Keep iterating until you reach a state of “good enough” quality.
  • Large problems: Solve a smaller, similar problem first, then try to use that as a sample to generalize, so you can make hypotheses about the larger problem’s solution.
  • Decision Tables (a mathematical approach using boolean logic to express testing pathways to stakeholders – see slide deck)
  • AIM Heuristic: Application, Input, Memory
  • Use storyboarding (like comics) to visualize what you are going to test before you write test cases
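The decision-table idea from the notes above can be sketched in a few lines of Python: enumerate every combination of boolean conditions and record the expected outcome for each testing pathway. The login-feature conditions and the business rule below are hypothetical, chosen only to make the sketch runnable; they are not from Robert's slide deck.

```python
from itertools import product

# Conditions for a hypothetical login feature (names are illustrative).
conditions = ["valid_username", "valid_password", "account_locked"]

def expected_outcome(valid_username, valid_password, account_locked):
    # Assumed business rule for this sketch: login succeeds only when
    # both credentials are valid and the account is not locked.
    if account_locked:
        return "show lockout message"
    if valid_username and valid_password:
        return "login succeeds"
    return "show invalid credentials error"

# One row per combination of condition values = one pathway to show
# stakeholders; 3 boolean conditions yield 2**3 = 8 rows.
table = []
for values in product([True, False], repeat=len(conditions)):
    row = dict(zip(conditions, values))
    row["expected"] = expected_outcome(*values)
    table.append(row)

for row in table:
    print(row)
```

Even this tiny table makes gaps visible: if a stakeholder cannot name the expected outcome for a row, that row is a conversation worth having before testing begins.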

Conference Sessions:

“Moving Testing Forward” by Karen Johnson (Orbitz)

Session Notes:

  • Know your shortcomings: Don’t force it. If you don’t like what you do, then switch.
    • E.G. Karen moved from Performance testing into something else, because she realized that even while she liked the testing, she was not very mathematical, which is needed to become an even better performance tester.
  • Avoid working for someone you don’t respect. This affects your own growth and learning. You’ll be limited. Career development is not something your boss gives you, it is something you have to find for yourself.
  • Office politics: Don’t avoid, learn to get good at how to shape and steer this. “The minute you have two people in a room, there’s politics.”
  • Networking: Don’t just do it when you need a job. People will not connect with you at those times, if you have not been doing it all the other times.
  • Don’t put people in a box, based on your external perceptions of them. They probably know something you don’t.
  • Don’t be busy, in a corner, just focused on being a tester. Learn about the business, or else you’ll be shocked when something happens, or priorities were different than you “assumed”. Don’t lose sight of the “other side of the house”.
  • Balancing work and personal life never ends, so just get used to it, and get good at not complaining about it. Everyone has to do it, and it will level out in the long term. Don’t try to make every day or week perfectly balanced – it’s impossible.
  • Community Legacy: When you ultimately leave the testing community, which will happen to everyone at some point, what five things can you say you did for the community? Will the community have been better because you were in it? This involves interacting with people more than focusing on your process.
  • Be careful of idolizing thought leaders. Challenge their notions as much as those of the person next to you.
  • Goals: Don’t feel bad if you can’t figure out your long term goals. Tech is constantly changing, thus constant opportunities arise. In five years, you may be working on something that doesn’t even exist yet.
  • If your career stays in technology, then the cycle of learning is indefinite. Get used to learning, or you’ll just experience more pain resisting it.
  • Watch Test Is Dead from 2011, Google.
  • Five years from now, anything you know now will be “old”. Are you constantly learning so that you can stay relevant?
  • Be reliable and dependable in your current job, that’s how you advance.
    • Act as if you have the title you want already and do that job. Don’t wait for someone to tell you that you are a ‘Senior’ or a ‘Lead’ before you start leading. Management tasks require approval, leadership does not.
  • Care about your professional reputation, be aware of your online and social media presences. If you don’t have any, create them and start fostering them (Personal Website, Twitter for testing, etc.)

“Building A Culture Of Quality” by Josh Meier

Session Notes:

  • Two types of culture: Employee (ping pong tables) vs. Engineering (the way we ‘do’ things), let’s talk about the latter (more important)
  • Visible (Environment, Behaviors) vs. Invisible (Values, Attributes)
  • A ship in port is safe, but that’s not what ships are built for – Grace Hopper
  • Pair Tester with Dev for a full day (like an extended Shake And Bake session)
  • When filing bug reports, start making suggestions on possible fixes. At first this will be greeted with “don’t tell me how to do my job”, but eventually it will be welcomed as a time-saver; for Josh, this morphed into the developers asking him, as a tester, to sign off on code reviews as part of their DoD (Definition of Done).
  • Begin participating in code-reviews, even if non-technical
  • *Ask for partial code, pre-commit before it is ready so you can supplement the Dev discussions to get an idea of where the developer is headed.
  • *Taxi Automation – Scripts that can be paused, allowing the user to explore mid-way through the checks; the checks then continue based on the exploration work done.
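The "Taxi Automation" idea can be approximated with a Python generator: the script yields between checks, giving a human the chance to explore before resuming. Everything below (the app state, the checks, the pause labels) is a hypothetical stand-in to show the pattern, not a description of Josh's actual implementation.

```python
# A pausable script of checks: each `yield` is a point where a tester
# can take over and explore before the automation resumes.
def scripted_checks(app):
    assert app["logged_in"], "login check failed"
    yield "after login"        # pause point: explore the logged-in state
    assert app["cart_total"] == 0, "new cart should be empty"
    yield "after cart check"   # pause point: explore the cart UI
    app["checked_out"] = True  # final scripted step
    yield "after checkout"

# Simulated application state in place of a real UI driver.
app_state = {"logged_in": True, "cart_total": 0, "checked_out": False}

for pause_point in scripted_checks(app_state):
    # In an interactive run, the tester would explore here, then resume
    # simply by letting the loop continue to the next check.
    print(f"paused {pause_point} -- exploring, then resuming...")
```

The appeal of the pattern is that the exploratory work happens against the exact state the checks produced, rather than a state the tester had to rebuild by hand.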

“Should Testers Code” (Debate format) by Henrik Anderson and Jeffrey Morgan

My Conclusion: Yes and No. No, because value can be added without becoming technical; however, if your environment would benefit from a more technical tester and it’s something you have the aptitude for, then you should pursue it as part of your learning. If you find yourself desiring to do development, but in a tester role, then evaluate the possibility that you may wish to apply for a developer position, but don’t be a wolf in sheep’s clothing; that does the product and the team a disservice.

Session Notes:

  • It takes the responsibility of creating quality code off the developer if testers start coding (Automation Engineers excluded)
  • Training a blackbox tester for even 1 full hour per day for 10 months cannot replace years of coding education, training and experience. This is a huge time-sink, with the creation of a Jr. Dev as the best-case scenario.
  • The mentality that all testers should code comes from a lack of understanding about how to increase your knowledge in the skill-craft of testing. Automation is a single tool, and coding is a practice. If you are non-technical, work on training your mindset, not trying to become a developer.

My Other Observations:

  • Do you want a foot doctor doing your heart surgery? (Developers spending majority time testing, Testers spending majority time developing?)
  • People who say that all testers should code do not truly understand that quality is a team responsibility; they treat it as only a developer’s responsibility. Those who hold this stance, consciously or subconsciously, have a desire to make testers into coders, and only “then” will quality be the testers’ responsibility, because they will then be in the right role/title. Making testers code is just a sly way of saying that a manual exploratory blackbox tester does not add value, or at least enough value, to belong on my team.
  • By having this viewpoint, you are also saying that you possess the sum of knowledge of what it means to be a good tester, and have reached a state of conscious competence in testing sufficient to claim that your determination of what a “tester” is, is not flawed.
  • The language we have traditionally used in the industry is what throws people off. People see the title “Quality Assurance” and think that only the person with that title should be in charge of quality, but this is a misnomer. We cannot claim that the team owns quality then say that it is the tester’s responsibility to be sure that the product in production is free from major product risks. They are opposing viewpoints, neither of which address testing.
  • Developers should move toward a better understanding of what it takes to test, while Testers should move toward a better understanding of what it takes to be a developer. This can be accomplished through collaborative/peer processes like Shake And Bake.
  • I believe that these two roles should never fully come together and be the same. We should stay complex and varied. We need specialists, just like complex machines that have specialized parts. The gears inside a Rolex watch cannot do the job of the protective glass layer on top. Likewise, the watch band cannot do the job of keeping time, nor would you want it to. Variety is a good thing, and attempting to become great at everything makes you only partially good at any one thing. Also, brands like Rolex and Bvlgari have an amazingly complex ecosystem of parts. The more complex a creation, the more elegant its operation and output will be.
  • Just like the ‘wisdom of the crowd’ can help you find the right answer (see session notes below from the talk by Mike Lyles) the myth of group reasoning can equally bite you. For example, a bad idea left unchecked in a given environment can propagate foolishness. This is why the role of the corporate consultant exists in the first place. In regards to testing organizations, keep in mind that just because an industry heads in a certain direction, it does not mean that is the correct direction.


“Visualize Testability” by Maria Kedemo


Slide Deck:

Session Notes:

  • Maria talked about the symptoms of low testability
    • E.G. When Developers say, “You’ll get it in a few days, so just wait until then,” this prevents the Tester from making sure something is testable, since they could be sitting with the Devs as they get halfway through it to give them ideas and help steer the coding (i.e. bake the quality into the cake, instead of waiting until after the fact to dive into it)
  • Get visibility into the ‘code in progress’, not just when it is committed at code review time. (similar to what Josh Meier recommended; see other session notes above)
  • Maria presented a new model: Dimensions of Testability (contained within her slide deck)


“Bad Metric, Bad” by Joseph Ours

Email: [email protected], Twitter @justjoehere


Session Notes:

  • Make sure your samples are proper estimates of the population
    • I tweeted: “If you bite into a BLT, and miss the slice of bacon, you will estimate the BLT has 0% bacon”
  • Division within Testing Community (I see a visual/diagram that could easily be created from this)
    • 70% uneducated
    • 25% educated
    • 5% CDT (context-driven testing) educated/aware


“The Future Of Testing” by Ajay Balamurugadas


Session Notes:

  • My main takeaway was about the resources available to us as testers.
    • Ministry of Testing
    • Weekend Testing meetups
    • Skype Face-to-face test training with others in the community
    • Skype Testing 24/7 chat room
    • Udemy, Coursera
    • BBST Classes
    • Test Insane (holds a global test competition called ‘War With Bugs’, with cash prizes)
    • Testing Mnemonics list (pick one and try it out each day)
    • SpeakEasy Program (for those interested in doing conventions/circuits on testing)
  • Also talked about the TQM Model (Total Quality Management)
    • Customer Focus, Total Participation, Process Improvement, Process Management, Planning Process, etc.
  • Ajay encouraged learning from other industries
    • E.G. Medical, Auto, Aerospace, etc., by reading about testing on news sites or product risks found there. They may have applicable information that applies here.
  • “You work for your employer, but learning is in your hands.” (i.e. Don’t wait for your manager to train you, do it yourself)
  • Talked about the AST Grant Program – helps with PR, pay for meetups, etc.
  • Reading is nice, but if you want to become good at something, you must practice it.
  • Professional Reputation – do you have an online testing portfolio?
    • On a personal note: He got me on this one. I was in the process then of getting my personal blog back up (which is live now), but also plan to even put up some screen recordings of how I test in various situations, what models I use, how I use them, why I test the way I do, how to reach a state of ‘good enough’ testing where product risks are mitigated or only minimal ones remain, how to tell a story to our stakeholders about what was and was not tested, understanding metrics use and misuse, etc.
  • “Your name is your biggest certificate” – Ajay (on the topic of certifications)


“Reason and Argument for Testers” by Thomas Vaniotis and Scott Allman

Session Notes:

  • Discussed Argument vs Rhetoric
    • Argument – justification of beliefs, strength of evidence, rational analysis
    • Rhetoric – literary merit, attractiveness, social usefulness, political favorability
  • They talked about making conclusions based on premises. You need to make sure your premises are sound before you try to make a conclusion based solely on conjecture that only ‘sounds’ good on the surface.
  • Talked about language – all sound arguments are valid, but not all valid arguments are sound. There are many true conclusions that do not have sound arguments. No sound argument will lead to a false conclusion.
  • Fallacies (I liked this definition) – a collection of statements that resemble arguments, but are invalid.
  • Abduction – forming a conclusion in a dangerous way (avoid this by ensuring your premises are sound)
  • Use Safety Language (Epistemic Modality) to qualify statements and make them more palatable for your audience. You can reach the same outcome and still maintain friendships/relationships.

My conclusions:

  • This was really a session on psychology in the workplace, not limited to testers, but it was a good reminder on how to make points to our stakeholders if we want to convince them of something.
  • If you work with people you respect, then you should realize that they are most likely speaking with the product’s best interests at heart, at least from their perspective, and not out to maliciously attack you personally. You can avoid personal attacks by speaking from your own experience. Instead of saying “That’s not correct, here’s why…” you can say “In my experience, I have found X, Y, Z to be true, because of these factors…” In this way you will make the same point, without the confrontational bias.
  • If you want to convince others, be Type-A when dealing with the product, but not when dealing with people. Try to separate the two in your mind before going into any conversation.

“Visual Testing” by Mike Lyles

Twitter @mikelyles


Session Notes:

  • This was all about how we can be visually fooled as testers. Lots of good examples in the slide-deck, and he stumped about half of the crowd there, even though we were primed about being fooled.
  • Leverage the Wisdom of the Crowd: Mike also did an exercise where he held up a jar of gum balls and asked us how many were inside. One person guessed 500, one person guessed 1,000. At that point our average was 750. Another person guessed 200, another 350, another 650, another 150, etc., and this went on for a while until we had about 12 to 15 guesses written down. The average of the guesses came out to around 550. The total number of gum balls was actually within 50-100 of this average. The point Mike was making was that leveraging the wisdom of the crowd to make decisions is smarter than trying to go it alone or relying on smaller subsets/sources of comparison. Use the people in your division, around you on your team and even in the testing community at large to make sure you are on the right track and moving toward the most likely outcome that will better serve your stakeholders.
    • This involves an intentional effort to be humble, and realize that you (we) do not have all the answers to any given situation. We should be seeking counsel for situations that have potentially sizable product impacts and risks, especially in areas that are not in our wheelhouse.
  • Choice Blindness: People will come up with convincing reasons why to take a certain set of actions based on things that are inaccurate or never happened.
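The gum-ball exercise is easy to reproduce for yourself: collect the guesses and aggregate them. A minimal sketch (the guesses below are made up, not the actual numbers from Mike's session):

```python
from statistics import mean, median

# Hypothetical crowd guesses for the number of gum balls in the jar.
guesses = [500, 1000, 200, 350, 650, 150, 700, 400, 800, 550, 300, 600]

# Individual guesses vary wildly, but the aggregate tends to land
# nearer the true count than most single guesses do.
crowd_estimate = mean(guesses)
robust_estimate = median(guesses)  # less sensitive to one extreme guess

print(f"mean={crowd_estimate:.0f}, median={robust_estimate:.0f}")
```

Using the median alongside the mean is a small hedge of my own: one wildly high guess (like the 1,000 above) drags the mean more than it drags the median.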


“Using Tools To Improve Testing: Beyond The UI” by Jeremy Traylor


Session Notes:

  • Testers should become familiar with more development-like tools (e.g. Browser Dev Tools, Scripting, Fiddler commands, etc.)
  • JSONLint – a JSON validator
  • Use Fiddler (Windows) or Charles (Mac)
    • Learn how to send commands through this (POST, GET, etc.) and not just use it to only monitor output.
  • API Testing: Why do this?
    • Sometimes the UI is not complete, and we could be testing sooner and more often to verify backend functionality
    • You can test more scenarios than simply testing from the UI, and you can test those scenarios quickly if you are using script to hit the API rather than manual UI testing.
      • Some would argue that this invalidates testing since you are not doing it how the user does it, but as long as you are sending the exact input data that the UI would send, then I would argue this is not a waste of time and can expose product risks sooner rather than later.
    • Gives testers a better understanding of how the application works, instead of everything beyond the UI just being a ‘black box’ they do not understand.
    • Some test scenarios may not be possible in the UI. There may be some background caching or performance tests you want to do that cannot be accomplished from the front end.
    • You can have the API handle simple tasks rather than rely on creating front-end logic conversions after the fact. This increases testability and reliability.
  • Postman (Chrome extension) – an HTTP testing tool with a nice GUI. It helps lower the barrier to entry for testers who may be firmly planted in the blackbox/manual-only world and want to increase their technical knowledge to better help their team.
  • Tamper Data (add-on for Firefox) – can change data while it is in transit, so you can better simulate domain testing (positive/negative test scenarios).
  • SQL Fiddle – This is a DB tool for testing queries, scripts, etc.
  • Other tools: SOAPUI, Advanced Rest Client, Parasoft SOAtest, JSONLint, etc.
  • Did you know that a “GET” request can be used to harvest data (PII, user information, etc.)? Testers, are you checking this? (HTSM > Quality Criteria > Security). However, “GET” can ‘lie’, so check the database to make sure the data it returns is actually what was persisted.
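As a quick sketch of the kind of check JSONLint performs, here is a small Python wrapper around `json.loads` that surfaces the line and column of the first syntax error. The payload strings are made up for illustration:

```python
import json

def validate_json(payload: str):
    """Return (True, parsed object) for valid JSON, or
    (False, error message with line/column info) for invalid JSON."""
    try:
        return True, json.loads(payload)
    except json.JSONDecodeError as err:
        return False, f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}"

ok, note = validate_json('{"note": "Buy milk", "tags": ["errands"]}')
bad, error = validate_json('{"note": "Buy milk",}')  # trailing comma: invalid JSON
```

Having a check like this in your own scripts means you can validate API responses in bulk, instead of pasting them into a web validator one at a time.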

My conclusions:

  • Explore what works for you and your team/product, but don’t stick your head in the sand and just claim that you are a manual-only tester. You have to at least try these tools and make a genuine effort to use them for a while before you can discount their effectiveness. Claiming they would not work for your situation or never making time to explore them is the same as saying that you wish to stay in the dark on how to become a better tester.
  • Since security testing is not one of my fortes, I personally would like to become a better whitebox hacker to aid my craft as a tester. This involves trying to break into the system and expose security risks, but for noble purposes: any risks found go to better inform the development team and are used to make decisions about how the product can be made more secure. Since testers are supposed to be informers, this is something I need to work on to better round out my skill set.


“When Cultures Collide” by Raj Subramanian and Carlene Wesemeyer

Session Notes:

  • Raj and Carlene spent the majority of the time talking about communication barriers such as differences in body language, the limitations of text-only communication (chat or email), and the assumptions that people from one culture make about others, regardless of whether they share the same culture.
  • Main takeaway: Don’t take a yes for a yes or a no for a no. Over-communicate if necessary to ensure that the expectations in your head match the ones in theirs.



I hope that my notes have helped you in some way, or at the very least exposed you to some new ideas and knowledgeable folks in the industry from whom you can learn. Please leave a comment here about which area you received the most value from, or where you need clarification. Again, these are my distilled notes from the four days I was there, so I may be able to recall more and update this blog if you feel an area is lacking. If you also went to CAST 2015 and attended any of the same sessions, I’d love to hear your thoughts on any important points I may have overlooked that would benefit the community.

Time Trial Testing Episode 1: SFDIPOT Model

Introduction: Recently, Brian Kurtz and I thought it’d be fun to take a look at a process, tool or model from the testing industry at least once per week and use it on a specific feature or product to create a test strategy within a time-box of 30 minutes. Once complete, we draw conclusions to let you know what benefits we feel we gathered from the exercise. We’re calling this our “Time Trial Testing” series (a working title), so if you come up with a better name let us know. We hope that you can apply some of the information we’re sharing here to your daily testing routine. Remember, you can always pick a testing mnemonic from this list and see what works for you. Be sure to share your own conclusions, either on Twitter or in a comment here, so that your findings can benefit the larger community of testers.


Episode 1: SFDIPOT Model & Evernote

This week, we decided to tackle the SFDIPOT model, created by James Bach and updated later by Michael Bolton. This is actually a revised version of the Product Elements node within the Heuristic Test Strategy Model (HTSM X-Mind), explained here:
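For reference, the mnemonic expands to Structure, Function, Data, Interfaces, Platform, Operations and Time. As a sketch of how it can drive a coverage outline, here is a small Python snippet; the one-line descriptions are my own paraphrases of the HTSM, not Bach and Bolton's exact wording:

```python
# SFDIPOT: the Product Elements node of the Heuristic Test Strategy Model.
# Descriptions are paraphrased summaries, not the model's exact wording.
SFDIPOT = {
    "Structure":  "everything that comprises the physical product (code, files, hardware)",
    "Function":   "everything the product does (features, calculations, error handling)",
    "Data":       "everything the product processes (input, output, lifecycles)",
    "Interfaces": "every conduit by which the product is accessed (UI, API, import/export)",
    "Platform":   "everything on which the product depends (OS, browser, external components)",
    "Operations": "how the product will be used (common, disfavored and extreme use)",
    "Time":       "relationships between the product and time (concurrency, timeouts, scheduling)",
}

def coverage_outline(product: str) -> list[str]:
    """Generate one test-idea prompt per SFDIPOT dimension."""
    return [f"{product} / {dim}: {hint}" for dim, hint in SFDIPOT.items()]

for line in coverage_outline("Evernote"):
    print(line)
```

Even this bare skeleton is enough to seed a mind map: each dimension becomes a top-level node, and the prompts become questions to answer about the product.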

So, in our 30-minute session, we decided to use this model on Evernote. Yes, the entirety of Evernote; we’ll explain later why that was a bad idea, but we forged ahead anyway, for the sake of scientific exploration. Brian and I worked separately from 3:00-3:30pm, then came together from 3:30-4:00pm to combine notes and piece our models into one larger mind map that ended up being more beneficial to our test strategy than either of our models would have been on its own. The following image was created from this collaboration, and below is the post-timebox discussion where Brian and I talk about the realizations and benefits of using this model.

Time Trial Testing - Evernote and SFDIPOT

Click image to enlarge (X-Mind File)

Connor’s Observations:

  • Using this model increased my awareness of features within Evernote that I had never used before, even though I have used the app for years.
  • The UI challenged my assumptions of how a certain feature should work based on how I have used them with other applications. (e.g. Tags can be saved via Enter key or by using a comma)
  • The model helped me be a more reliable tester, especially when I need to test across multiple modules (i.e. multiple stories for a shared feature). “Just because you know something doesn’t mean you’ll remember it when the need arises.” – James Bach
  • Leverage the wisdom of the crowd. (e.g. A team with two testers could do this exercise separately, focusing on different parts, then combine the results with peer review. This makes your models much more robust and uses time more efficiently.)
  • I was not as familiar with this model (the Product Elements node of HTSM) as I am with others, so it somewhat created the sense of being a ‘new tester’ on a product, as if I had never used it before. The model gave me new ideas, as it provided a pathway I had never explored before when using Evernote. I did not feel as jaded as I might have if I were testing without a model.
  • Using the model made me realize that when you have massive products, or multiple stories around the same feature, you should not wait until you have a minimum viable product to test, because by then the testing effort may be insurmountable. Start testing early and often, even if the code is not 100% complete, so that you do not get overwhelmed as a tester. We often complain that dev-complete arriving late in the sprint causes us to miss a deadline, but this can sometimes be mitigated by testing things earlier, even in an incomplete state. (e.g. If you are a blackbox/manual tester, ask a developer to help you with some API testing to verify backend functionality even before the UI is complete.)

Brian’s Observations:

  • Using this model helped me understand the language of the Evernote team and how their terminology relates to the application (e.g. notes are stored in a “Notebook”, not a “Folder”).
  • If we work on it together at the same time initially, we roadblock each other, because we have to interrupt each other’s train of thought to get everything put down simultaneously. This is a limitation of how the mind works, not any individual’s fault.
  • Using the model helped focus our thinking. I could think only about “Structure”, then only about “Function”, etc. Since I knew the model I was using was complete and would eventually cover everything I wanted, I knew I would get to all the important aspects at some point, which freed my mind from having to constantly focus and defocus. I could think about the “Structure” node for a given stretch of time without distraction. This prevents the loss of currently running threads in the mind, so that new thoughts do not supersede or squash existing, unfinished ones.
  • As I went through the nodes, the model reminded me that I won’t have access to the backend since I am not an Evernote employee. That meant noting it as something I would not be able to test; no amount of additional testing time would address that concern. This is something I should inform my stakeholders about, as it is a test limitation they may not assume exists.
  • The model helped me not start testing too soon. It helped me realize that there was a lot of learning I needed to do before I jumped in. I could have started testing the GUI, and maybe been somewhat effective, but if I do research and investigation before I actually test, I will test in a much more efficient way, one that addresses my stakeholders’ concerns more completely than if I had just started testing right out of the gate.


We realized about halfway through that we had taken on too much. We should have picked a specific feature or module, so that we could be much more focused and make great progress in one area rather than mediocre progress on the whole. In other words, don’t stretch yourself thin as a tester. Doing features/modules in smaller bite-sized chunks also allows you to put them together later, like a puzzle, into a much larger and more complete mind map, giving you a more valuable test strategy.

We hope this exploration exercise has helped, and look forward to posting many more of these episodes in the future. Please leave a comment and let us know your thoughts.

This blog post was coauthored with Brian Kurtz.