Time Trial Testing

Blog posts in the “Time Trial Testing” category cover test strategy creation exercises meant to explore a specific model, process, or tool and report findings, all done within a time-box, typically thirty minutes or less.

Time Trial Testing Episode 2: Risk Heuristics

In this episode of Time Trial Testing, Brian Kurtz and I time-boxed ourselves to a 45-minute session to perform risk assessment of the X-Mind product. We used a heuristic-based risk analysis model to take a look at the UX/UI of this mind-mapping product. See Time Trial Testing – Episode 1: SFDIPOT Model for more details on how ‘Time Trial Testing’ sessions are meant to work.

  • Model: Risk Analysis Heuristics (for Digital Products) – by James Bach and Michael Bolton
    • Note: We limited our scope to only two of the sub-nodes.
      • Project Factors: I approached this from the perspective of a tester on an internal development team.
      • Technology Factors: Brian approached this from the perspective of an external tester, outside of the company.
  • Session Charter: UX/UI Product Risk Analysis
  • Product: X-Mind
  • Time: 45 minutes
  • Artifact (See image below or X-Mind file)
[Image: TimeTrialTesting-Episode2-RiskHeuristic mind map]

Brian’s Observations (Technology Factors):

  • Conscious competence is alive and well. Using something that you have not used in a while or in a specific context takes effort. Sometimes it can be a downright struggle.
  • In this time trial we started with a mission: find risk to the UX and the UI. Still, I think next time it needs to be more focused, given the 45-minute window we give ourselves. Maybe risk to the UX and UI of just the menu bar or the icon toolbar.
  • Every time I use a model, I am reminded of how beneficial the results are once the session is over. Models always help me think about aspects of “something” that I would not have thought of on my own. I can always see the value afterwards.
  • I have only had to evaluate a third-party application for purchase a few times. These time trials remind me what a daunting task it is to evaluate something as an outsider.
  • Although each of these time trials has produced a mind map that illustrates the value of just 45 minutes, it would be nice to take one to a more complete “state” to really illustrate what a more finished strategy would look like.
  • I would remind people who are creating these kinds of artifacts that it’s OK not to know all the answers, because asking questions and having dialogue with the stakeholders who do know is what this is all about. Asking questions and picking others’ brains is a huge part of the learning process.

Connor’s Observations (Project Factors):

  • Not Yet Tested: This was actually my highest-priority item, so I am moving it to the top of this list, in the event that you get distracted and stop reading. Areas that have not yet been tested are likely to have new bugs that we’ve never seen before, so they have the potential to take longer to fix than familiar buggy areas. Also, these areas of the code typically have only one or two subject matter experts: the developer(s) who created them. The Product Owner and the Tester have no knowledge of how this area of the product was actually developed, post-requirements, post-planning, etc., so during these times, brain-dumps from the creator, the original developer, are key. In our case, a UI Developer would know how and why the product is built the way it is, and what caveats there may be. Having this discussion up front with the developers, before diving into testing, will greatly increase your effectiveness at creating a more thorough test strategy and uncovering potential product risks. In these cases especially, we need to make sure we do not silo ourselves as testers under the guise of simply ‘needing to get the work done’. I have had many pre-test discussions that drastically changed the type and amount of time I planned to spend testing a given area, making me more efficient in the endeavor.
  • Learning Curve: This node forced me to consider the biases of the team, and how their existing knowledge of UX/UI from previous projects or workplaces might positively or negatively influence the creation of a mind-mapping product. For example, if one of the UI Developers used to work in a vastly different industry with different customer needs (e.g. Medical Device Software), then this person may consciously or subconsciously project those former needs onto the new user group, even when the demographics are worlds apart.
  • Poor Control: This was a good reminder about making sure we control what we can, and not spending a lot of time trying to influence external factors. Do we have a solid DoD (Definition of Done)? Are we doing code reviews? Are the right people doing code reviews? Are we working from customer-approved mock-ups or are we just hoping that the UX/UI work is desirable? Are UX/UI Architects outside of the immediate team involved or are we just winging it with our limited knowledge?
  • Rushed Work: Every development team in the history of software development has struggled with time management. Either development completes late in the sprint, so testers have to rush, or product management sets hard-date deadlines in the mind of the customer, and then the team has to release whatever it has rather than move toward a healthier ‘release when ready’ model. Perhaps estimates are created without UX/UI mock-ups, which then arrive mid-sprint and completely turn the original estimate on its head. Sometimes teams have good intentions but do not intentionally think about how to best manage and section their time. We need this to be one of the first things we think about, not the last.
  • Fatigue & Distributed Team: Before using this heuristic, I had (for some reason) always separated the fluid attributes of the workplace from the actual work that gets done and pushed out in releases. I had never considered the team being tired or distributed as a “product risk” per se. Since I was always comfortable with the deliverable being molded a hundred times along the way (Agile, not Waterfall), whatever we got done, we got done, no matter how we felt along the way, and that would be accepted as our deliverable. I saw it as a performance risk to team operations rather than to the content of the product. While remote communication can sometimes spawn assumptions and miscommunication, I always felt like resolution in the 11th hour could handle any of these concerns. However, using this model made me realize that the paradigm I had operated under was in fact a symptom of working in a blessed environment. I only thought this way because I’ve mostly worked with teams that were able to resolve major risks pre-release, or at least know about them and push intentionally. I feel that if I had more experience working in an environment with only remote teams (e.g. offshore), or with less knowledgeable folks, then I may have had this realization sooner.
  • Overfamiliarity: I think this is most easily noticed when we hire new people or bring others into an already well-oiled machine. These new perspectives can help expose areas to which the current development team(s) have become jaded. We should think about this with long-running project teams especially. Perhaps shifting work from team to team is beneficial from time to time. Sure, Team A will not know what Team B is doing, and velocity might slow down for a little while, but swapping teams’ work has many other upsides that I think are worth the time investment. If you cannot do that, then bring in external team members for a week and let them act as product, code, and quality consultants. As it relates to our charter, perhaps they will see obvious avenues of UX improvement that you have simply become used to. Remember, the barometer for good UX is how much user frustration is caused. How many times do new hires join the team and say, “Why does it work this way? That’s unintuitive.” to which we reply, “Oh, it is just like that; here’s the workaround…”? In these situations we are part of the problem, not the solution. We are increasing product risk by ignoring the advice that comes from a fresh set of eyes simply because we have ‘gotten used to it’. Shame on us (us = team + product management, not simply testers).
  • Third-Party Contributions: You can decrease UX/UI product risks by limiting your dependency on third-party technology. It typically requires a spike (a development/technology research sprint, or two) to make such a determination, but if you can ‘roll your own’ tech that gives the customer exactly what they want and removes dependencies (and thus risks), then I would encourage product management to consider doing it, even if it takes twice as long (given the customer has been trained to accept a ‘release when ready’ development model).
  • Bad Tools: The Scrum Master should be in constant communication with the developers and testers on the team (and vice versa) in order to alleviate these kinds of concerns. A good Scrum Master does not need technical knowledge to help facilitate technology changes.
  • Expense of Fixes: First, let’s dispense with the following statement: “The later bugs are found, the more expensive they are to fix.” Not necessarily. This statement does not contain any safety language (epistemic modality) or take context into account. It has historically been used to point fingers or to motivate naive development teams through fear, both despicable tactics. A better statement would be, “Depending on customer priorities and product priorities, bugs found later in the development process might be more expensive to fix, depending on their context.” For example, what if we find a typo an hour before release? That’s a five-minute fix that is not expensive. Now, if you have a broken development process that requires you to spend hours rebuilding a release candidate package, then sure, it might be expensive, but let’s be careful not to correlate unrelated problems and symptoms from two disparate systems.

Conclusion:

Many testers do not even consider using some form of risk heuristics, mainly for two reasons: it is outside of their explicit knowledge, or they do not see value in it, usually due to never having tried risk assessment in a serious manner. Acceptance criteria are the tip of the iceberg, so don’t be the tester who stops there. What are your thoughts on this? Have you tried this Risk Analysis Heuristics (for Digital Products) model before, or used something similar? Do you even see value in risk analysis? Why or why not? What are your other takeaways? I encourage all testers to do this same exercise for themselves. Reading through the model versus actually using it provided greatly different experiences for me. In reading it I found some nice ideas that sounded correct and good, but it was in using it that I found value applicable to what I do as a tester, and I am now compelled to use it again; a feeling I never would have experienced had I only read through it.

This blog post was coauthored with Brian Kurtz.

Time Trial Testing Episode 1: SFDIPOT Model

Introduction: Recently, Brian Kurtz and I thought it’d be fun to take a look at a process, tool, or model from the testing industry at least once per week and use it on a specific feature or product to create a test strategy within a time-box of 30 minutes. Once complete, we draw conclusions, letting you know what benefits we feel we gathered from the exercise. We’re calling this our “Time Trial Testing” series (a working title) for now, so if you come up with a better name, let us know. We hope that you can apply some of the information we’re sharing here to your daily testing routine. Remember, you can always pick and try out a testing mnemonic from this list and see what works for you. Be sure to share your own conclusions, either on Twitter or in a comment here, so that your findings can benefit the larger community of testers.

Episode 1: SFDIPOT Model & Evernote

This week, we decided to tackle the SFDIPOT model (Structure, Function, Data, Interfaces, Platform, Operations, Time), created by James Bach and updated later by Michael Bolton. It is actually a revised version of the Product Elements node within the Heuristic Test Strategy Model (HTSM X-Mind), explained here: http://www.satisfice.com/tools/htsm.pdf#page=4

So, in our 30-minute session, we decided to use this model on Evernote. Yes, the entirety of Evernote; we’ll explain later why that was a bad idea, but we forged ahead anyway, for the sake of scientific exploration. Brian and I worked on this separately from 3:00-3:30pm, then came together from 3:30-4:00pm to combine notes and piece our models together into one larger mind map that ended up being more beneficial to our test strategy creation than either of our models would have been on its own. The image below was created from this collaboration, and after it is the post-time-box discussion where Brian and I talk about the realizations and benefits of what we found using this model.

[Image: Time Trial Testing – Evernote and SFDIPOT mind map (X-Mind file)]

Connor’s Observations:

  • Using this model increased my awareness of features within Evernote that I had never used before, even though I have used the app for years.
  • The UI challenged my assumptions about how certain features should work, based on how I have used similar features in other applications. (e.g. Tags can be saved via the Enter key or by using a comma)
  • The model helped me be a more reliable tester, especially when I need to test across multiple modules (i.e. multiple stories for a shared feature). “Just because you know something doesn’t mean you’ll remember it when the need arises.” – James Bach
  • Leverage the wisdom of the crowd. (e.g. A team with two testers could do this exercise separately, focusing on different parts, and then combine the results afterward in conjunction with peer review. This makes your models much more robust and uses time more efficiently.)
  • I was not as familiar with this model (the Product Elements node of HTSM) as I am with others, so it somewhat created the sense of being a ‘new tester’ on a product, as if I had never used it before. I felt like the model gave me new ideas, as it provided a pathway I had never explored before when using Evernote. I did not feel as jaded as I might have if I had tested it without a model.
  • Using the model made me realize that when you have massive products, or multiple stories around the same feature, you should not wait until you have a minimum viable product to test, because by then the testing effort may be insurmountable. Start testing early and often, even if the code is not 100% complete, so that you do not get overwhelmed as a tester. Many times we complain about dev-complete arriving late in the sprint and causing us to miss a deadline, but this can sometimes be mitigated by testing things earlier, even in an incomplete state. (e.g. If you are a blackbox/manual tester, ask a developer to help you with some API testing to verify backend functionality even before the UI is complete; see the sketch after this list.)
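
To make that last point concrete, here is a minimal sketch of what pre-UI backend checking might look like, written as two pytest checks. Everything in it is hypothetical: the staging URL, the /api/notes endpoint, the payload fields, and the token are invented for illustration and are not any real product’s API; a developer on your team would supply the real equivalents.

```python
# Minimal sketch of pre-UI API checks (hypothetical endpoint, fields, and token).
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging host
TOKEN = "test-token"                      # hypothetical auth token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def test_create_note_returns_id():
    # Exercise the create path through the backend before any UI exists.
    resp = requests.post(
        f"{BASE_URL}/api/notes",
        json={"title": "smoke test", "body": "created before the UI"},
        headers=HEADERS,
        timeout=10,
    )
    assert resp.status_code == 201
    assert "id" in resp.json()

def test_missing_title_is_rejected():
    # Probe a basic input-validation risk early, while it is still cheap to discuss.
    resp = requests.post(
        f"{BASE_URL}/api/notes",
        json={"body": "no title provided"},
        headers=HEADERS,
        timeout=10,
    )
    assert resp.status_code == 400
```

Run with `pytest`; checks like these can start surfacing backend risks long before a pixel of UI is rendered.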

Brian’s Observations:

  • Using this model helped me to understand the language of the Evernote team, and how they use terminology as it relates to the application (e.g. notes are stored in a “Notebook”, not a “Folder”).
  • If we work on it together at the same time initially, then we roadblock each other, because we have to interrupt each other’s train of thought to get everything put down simultaneously. This is a limitation of human nature and how the mind works, not any individual’s fault.
  • Using the model helped focus our thinking. I could just think about “Structure”, then I could just think about “Function”, and so on. Since I knew the model I was using was complete and would eventually cover everything I wanted, I knew I would get to all the important aspects at some point, which freed my mind from having to constantly focus/defocus. I could just think about the “Structure” node for a given set of time, without distraction. This prevents the potential loss of currently running threads in the mind, so that new thoughts do not supersede or completely squash existing or unfinished thoughts.
  • As I went through the nodes, the model reminded me that I won’t have access to the backend, since I am not an Evernote employee. That is something I would not be able to test, so no amount of additional testing time would address that concern. It is also something I should inform my stakeholders about, as it is a test limitation they may not assume exists.
  • The model helped me not start testing too soon. It helped me realize that there was a lot of learning I needed to do before I jumped in. I could have started testing the GUI, and maybe been somewhat effective, but if I do research and investigation before I actually test, then I will test in a much more efficient way, one that addresses my stakeholders’ concerns more completely than if I had just started testing right out of the gate.

Conclusion:

We realized about halfway through that we had taken on too much. We should have picked a specific feature or module, so that we could be much more focused and make great progress in one area rather than mediocre progress on the whole. In other words, don’t stretch yourself thin as a tester. Also, doing features/modules in smaller bite-sized chunks allows you to put them together later, like a puzzle, into a much larger and more complete mind map, letting you create a more valuable test strategy.

We hope this exploration exercise has helped, and look forward to posting many more of these episodes in the future. Please leave a comment and let us know your thoughts.

This blog post was coauthored with Brian Kurtz.