UPDATE 2020-04-02: Added a link to the HASM model, which gives ideas on how to create automation strategies (you will need X-Mind software to open this file type). Use it in conjunction with other models like HTSM for a holistic testing approach. It's not an 'either-or' choice.
Abstract: A Sprint Framework For Testers is a brief outline of my suggested processes and practices for a Tester who resides within a software development scrum team, in an Agile environment. I created this document with web-based product software teams in mind, but these practices and recommendations are not necessarily tied to a specific type of software, tester, platform or development environment. I say this simply to give you context into the formative process of this framework, but I believe these ideas have been generalized in a way that should be beneficial across many types of software testing. The ability to execute much of this relies on working within a healthy engineering culture, but Testers should also be intentionally employing practices like this to improve their own culture, and hopefully this sprint framework for testers can help with that.
Note: After a recent discussion on Twitter I decided to add this note. This model is in no way meant to be a prescriptive mandate on how to run your sprint, but rather a guide to help prime your thinking as you move through the various stages. Also, test cases may or may not fit into your current paradigm. If they do not, then be sure you have good reasons for that. Some are under the impression that being ‘context-driven’ means being anti-test cases, which is a fallacy. Writing scripted test cases requires a great amount of skill and may be necessary in your context, as I have found it in mine.
- Sprint Grooming
- Smaller Group: In the interest of efficient time usage, this should be composed of a small group, as this part of the process does not require the input of the entire team. A single Developer, Tester and Product Owner would be sufficient, or whichever small group is composed of the team members with the most product knowledge and the people who will be doing the hands-on work. Two Developers may be required if the work being done spans both the backend and the UI. It should be the exception, not the rule, that the whole team would need to be involved in the continual backlog grooming process.
- Use models (HTSM, RCRCRC or other Testing Mnemonics) to inform your thinking and team’s awareness of the potential vastness of acceptance criteria considerations.
- Models as Litmus Tests (for Story Acceptance):
- Using just a small part of an existing model (HTSM > Quality Criteria) can many times serve as a litmus test for which stories to bring into the sprint. Of course, business priorities and product management usually serve this role, ideally before work hits the team, but if they were more informed about the various considerations that need to be covered in the development process (Capability, Scalability, Charisma, etc.), they might have prioritized stories differently. Use models at a high level in this session to educate your Product Owners, Developers and other Testers on what it really means to accept a story.
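As a rough illustration of the litmus-test idea, here is a minimal Python sketch. The criteria names are a representative subset of HTSM's Quality Criteria node; the `litmus_test` helper and the story data are hypothetical, invented purely for this example, so adapt them to whatever tracking format your team uses.

```python
# Hypothetical sketch: using HTSM's Quality Criteria as a story-acceptance
# litmus test during grooming. The criteria list is a subset of HTSM's
# Quality Criteria node; the scoring approach is illustrative, not prescriptive.

QUALITY_CRITERIA = [
    "Capability", "Reliability", "Usability", "Charisma", "Security",
    "Scalability", "Compatibility", "Performance", "Installability",
]

def litmus_test(story_title: str, flagged: dict[str, str]) -> None:
    """Print which criteria the group flagged for a story, and which were
    never discussed -- the undiscussed ones are your potential blind spots."""
    for criterion in QUALITY_CRITERIA:
        note = flagged.get(criterion)
        status = f"flagged: {note}" if note else "not discussed"
        print(f"{story_title} | {criterion:13} | {status}")

# Example story and notes are invented for illustration.
litmus_test("Story #4567: credit application submit", {
    "Usability": "new form flow needs PO review",
    "Scalability": "month-end submission spikes",
})
```

Anything that comes back "not discussed" is a prompt for the grooming conversation, not a verdict on the story.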
- Sprint Planning
- Larger Group: At this point, it makes sense to have the whole team involved in planning. Now, it is debatable whether setting quantifiable estimates on user stories is a good or a bad thing, but in a general sense we can at least agree that having the full team in this session is beneficial from a knowledge standpoint when evaluating workload.
- Continue to use models to inform your team so that more solid estimates can be made. Remember, test models can be used to increase awareness for everyone, not just testers, providing more insight into potential product risks to the client.
- E.G. Bring up the HTSM > Quality Criteria page and have the developers actually discuss Usability, Scalability, Compatibility, etc. for a given story. I guarantee it is impossible to go through even this one node of HTSM without it informing your team members' thinking on development considerations and product risks.
- Decide (pre-development) which story/stories will be candidates for Shake ‘N’ Bake (Dev/QA pair-testing process) and then execute them when the time comes.
- Day 1 (of Sprint)
- Test Strategy creation via collaboration (with other team member(s) and time-boxed per story):
- Create the test strategy (not test cases yet) using a model as a template with the other team members (testers, devs, POs, etc.) in a time-boxed session. You will have to decide what amount of time is reasonable for small, medium and large stories, but typically this is between 30 minutes and 2 hours.
- During this collaboration, I am seeking approval of the direction my testing is headed by evaluating cues from the other team member(s). I do not go into this thinking I know all the risks or proper priorities; otherwise the session is useless. The resident SME (Subject-Matter Expert) for a given story should see test strategies before they are turned into test cases.
- Good test strategies explain not only what we are testing, but also what we are not testing, or what cannot be tested by you, the tester.
- E.G. Load Testing on a given story might require someone who could write automation checks, but perhaps we do not have that resource available on the team or for the given timeline, so we intentionally make a note of that as a potential risk/gap in our test strategy.
- Coverage Reminder: Part of your test strategy involves telling stakeholders what you did and did not test, so be sure that is noted somewhere in your model/test suite creation.
- Time-Box:
- We time-box our test strategy creation session so that we can get the most bang for our buck and mitigate time constraints. Many times testers complain about not having enough time to test, but that is often because they are simply trying to execute every test case without having first created a prioritized test strategy.
- Now, in the interest of time management for the sake of the team, we probably cannot spend a whole day filling out the HTSM for one story, so if I have 5 stories, I might dedicate 1 to 1.5 hours to each story. You will need to decide what amount of time can be allotted per story based on your own team/testing capacity.
- Test Cases:
- Begin writing test plans/cases based on collaborative strategy (if you write your strategy correctly, then you should not have to recreate a lot of the foundation work during the test writing process – copy/paste is your friend)
- Automation Reminder: Be sure, early on in the sprint, ideally before the end of Day 1, to decide what can and cannot be automated. This goes a long way toward preventing duplicated effort, or manual work in places where automation is the only sensible approach (see the sketch below for one way to record these decisions).
- NOTE: Automation may not be in your skill-set if you are a manual tester, but it should still be something of which you are aware and can help prioritize. This requires an automation strategy though; check out our HASM model, which deals exclusively with creating automation strategies (you will need X-Mind software to open this file type).
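To make the Day 1 outputs concrete, here is one possible shape for a per-story strategy record. The `TestStrategy` dataclass and all its field names are hypothetical, not part of HTSM or HASM; the point is simply that scope, known gaps, and automation candidates get written down together, early.

```python
# Illustrative sketch of a Day 1 per-story test strategy record. All names
# and example data here are invented -- adapt the fields to your own tool.
from dataclasses import dataclass, field

@dataclass
class TestStrategy:
    story: str
    in_scope: list[str]  # what we will test
    out_of_scope: dict[str, str] = field(default_factory=dict)  # gap -> reason/risk
    automation_candidates: list[str] = field(default_factory=list)

strategy = TestStrategy(
    story="Story #4567: credit application submit",
    in_scope=["happy-path submission", "field validation", "error messaging"],
    out_of_scope={
        "load testing": "no automation resource this sprint; noted as a risk",
    },
    automation_candidates=["field validation checks"],
)
```

Because the record names what was deliberately left out (and why), it doubles as the coverage note mentioned earlier, ready to show stakeholders without extra prep.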
- Day 2
- By this point you should have already finalized or be finalizing your test strategies for any remaining stories.
- Continue to seek strategy approval from other team members, or SMEs outside of your team if others may have worked on the feature or something similar recently.
- Continue writing your test cases, making sure both they and your strategies are visible to all stakeholders, both in and out of the team (via tool, e.g. Jira, Rally, etc.)
- Day 3+
- Continue test case creation, mitigating time-management concerns as dev-complete approaches (be aware that these stories, or the Shake 'N' Bake stories, may be ready at any time)
- Poll The Team (In-Sprint)
- Overview: Ask the team members what they are currently struggling with and find out what new information they have gathered since your sprint planning meeting. Typically this is the time when assumptions begin to form, and simply asking around can nip them in the bud.
- Developers: What roadblocks are you experiencing? What new information have you found since our planning session?
- Product Owners: How is the customer feeling? What new priorities have come in? Have there been any shifts in the customer’s thinking that might affect current sprint items?
- Scrum Masters: Is there anything I am doing that might be causing friction? Do you notice any personality conflicts or roadblocks that I can help keep an eye on/mitigate?
- At Dev-Complete
- Execute Shake ‘N’ Bake on-demand when dev says the previously-decided story is complete
- Perform pair-testing process on developer’s box with them, before they make their code commit.
- Note: Shake ‘N’ Bake does not take the place of the normal testing process within your sprint. It is done in addition to the testing process.
- Execute normal testing process for stories per Testing Process (see next section)
- Testing Process:
- Assign story to yourself (via Sprint Tracking software and/or Scrum Board)
- Notify team which story you are starting to test (sometimes this notifies other team members to speak up about something they have been keeping in their head, perhaps that they had not made a note on yet in the story/case)
- Verify Dev-Task Complete (Pre-Testing): Are unit tests complete and passing? If not, have a discussion with the Developer who worked on the story, as this should be complete before the testing process begins (see the gate sketch after this list).
- Execute test cases for a given story in your Dev/Team branch environment
- Do not test on the Developer’s machine via IP unless you are doing pre-code commit testing earlier on in the sprint. You should have an initial environment where all code commits live for testing.
- Log Dev tasks for any issues found, as you go.
- Do not wait until the end of your test run to log the sum of tasks. Many times, Devs can fix items as you test, without invalidating your current testing.
- Assign tasks back to the specific Dev who worked the item or make a comment in the team room about it (at team’s discretion, depending on existing workflow)
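For teams that want to make the "Verify Dev-Task Complete" step mechanical rather than a matter of memory, a tiny gate script is one option. This is a minimal sketch, assuming a pytest-based project; the `tests/unit` path is an assumption about your repository layout.

```python
# Hypothetical pre-testing gate: refuse to start the manual test run for a
# story until the project's unit tests pass. Paths are illustrative.
import subprocess
import sys

# pytest exits non-zero when any test fails or errors.
result = subprocess.run(["pytest", "tests/unit", "-q"])
if result.returncode != 0:
    sys.exit("Unit tests failing -- talk to the developer before testing begins.")
print("Unit tests green; starting test case execution.")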
- Story Ready-For-Release or Production-Ready
- Verify DoD (Definition of Done) Completion: At this point, the Tester needs to close the loop on any other areas that the team has specified in their DoD
- This can include: Test Peer Review, Code Review, Unit Tests (code coverage %?), Documentation, Automation (in sprint, or delayed cadence?), Remaining task hours zeroed out, n-Sprints supported in production, Manual Testing, Owner Review, Product Review, Demo, etc.
- SME Review: After testing is complete (Devs have completed all tasks and they have been retested) I would ask the subject-matter expert for the story to take a look at it, within a self-imposed time window.
- E.G: Setting Expectations – If I finish testing on a Wednesday, I would say to the PO, “Testing is complete on this story. Please review the functionality by end of day Thursday and let me know if you have any concerns, otherwise I will mark this story as “Ready for Release”.
- This may necessitate an “Owner Review” column in your sprint tracking tool (post-Testing but pre-Ready For Release) that would be managed by SMEs (the PO in this case, but this could and probably should have rotating ownership as the SME chosen for a given story should be the one most qualified, not necessarily the PO).
- Release Prep & Planning
- Attend pre-release meeting (formal or otherwise) to verify that all items that are in the “Ready to Release” state have been through the proper channels (outlined above, and per Team’s DoD).
- Clearly communicate post-release coverage (i.e. List of those who will be present directly after the release for any nighttime or daytime releases)
- Verify that release items to be tested have been marked (via your tracking tool: Jira, Rally, Release Checklist, etc.)
- Targeting: Ideally you reach a point in your continuous delivery process where you trust your deployments enough that not every release item requires production-time checking/testing. You should be targeting the high-risk/major-shift elements for production testing during your releases.
- Prioritization: This requires prioritization during the sprint of which items are high risk/high impact rather than trying to do this all at once at release time.
- Time Window: Items to be tested should be based on business priority of course, but evaluate release window time vs. amount of time needed to test items cumulatively.
- Time-to-Stories Ratio: In other words, if I have 12 stories and each takes 10 minutes to test, that is 2 hours of testing. However, if our release window is 1 hour, we should evaluate which stories bubble up to the top as our highest-risk items meriting production-time testing (a rough sketch of this math follows below).
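The time-to-stories math is simple enough to sketch in a few lines. The story names, minutes, and risk scores below are all invented for illustration; the point is only that you sort by risk and stop when the window is spent.

```python
# Back-of-the-envelope release-window math for the Time-to-Stories Ratio.
# All data here is made up; plug in your own stories and estimates.
stories = [  # (story, minutes to verify in production, risk 1-5)
    ("#4567 credit app submit", 25, 5),
    ("#4581 pricing calc update", 20, 4),
    ("#4590 new lead form", 15, 3),
    ("#4570 footer copy change", 10, 1),
]
WINDOW_MINUTES = 60

# Highest risk first; stop adding stories once the release window is spent.
selected, used = [], 0
for story, minutes, risk in sorted(stories, key=lambda s: -s[2]):
    if used + minutes <= WINDOW_MINUTES:
        selected.append(story)
        used += minutes

print(f"Production-time checks ({used}/{WINDOW_MINUTES} min): {selected}")
# The low-risk footer change falls out of the window and is skipped.
```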
- Establish Reversion Hypotheticals for each story (these should be in place before the release starts, not created on the fly during the release when they occur)
- Structure: If ‘x‘ happens, then ‘y’ are the risks to the customer, so we recommend reverting code commits related to story ‘z’.
- E.G. If the credit application will not submit in production, then lower conversion rates and lost financing revenue are the risks to the customer, so we recommend reverting code commits related to story #4567.
- Stories can have one or multiple reversion hypotheticals, depending on their complexity.
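Because the if-x-then-y-revert-z structure is so regular, it lends itself to being captured as data rather than prose, which makes it easy to review before the release starts. This is a minimal sketch; the class and field names are hypothetical, and the example mirrors the credit application scenario above.

```python
# One possible way to capture reversion hypotheticals ahead of the release.
# Each record mirrors the "if x, then y, so revert z" structure.
from dataclasses import dataclass

@dataclass
class ReversionHypothetical:
    trigger: str        # 'x': the observable failure in production
    customer_risk: str  # 'y': what the customer stands to lose
    revert_story: str   # 'z': which story's commits to revert

hypotheticals = [
    ReversionHypothetical(
        trigger="credit application will not submit in production",
        customer_risk="lower conversion rates and lost financing revenue",
        revert_story="#4567",
    ),
    # Complex stories may carry several of these records.
]
```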
- Release & PVT Testing
- PVT (Production Validation Testing): This type of testing is done on the product in the production environment, to validate that it meets all functional and cosmetic requirements.
- Test new development: High risk/priority items only (per release checklist created earlier)
- Perform a basic smoke test (acceptance spot-checking) of related product areas, previous high-risk items, etc.
- Execute roll-back (if any hypothetical scenarios are satisfied), after discussions with the team/Product Owner:
- It is the tester's job to inform product management about risks caused by a given release, but at the end of the day we are NOT the gatekeepers. Other SMEs and management will have a higher-level view of what is best for the business from a risk-mitigation perspective, so we can give our recommendation not to release something, but ultimately the go/no-go decision must come from product management.
- Post-Deployment & Monitoring
- This takes place within hours of the release/deploy, or during Day 1 of the following sprint.
- Performance systems (Splunk, NewRelic, etc.)
- Are there any new or unusual trends?
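"New or unusual trends" can be made slightly less subjective with a crude baseline comparison. This sketch is tool-agnostic; the counts would come from Splunk, New Relic, or whatever you monitor with, and the numbers and threshold here are invented for illustration.

```python
# Crude post-deploy trend check. The metric values and threshold are made
# up; in practice they would be pulled from your monitoring tool.
errors_before = 42   # e.g. HTTP 5xx per hour, pre-deploy baseline
errors_after = 130   # same metric, first hour after the deploy
THRESHOLD = 2.0      # flag anything worse than 2x the baseline

if errors_after > errors_before * THRESHOLD:
    print("Unusual error trend after deploy; investigate before standing down.")
```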
- Support Queue
- Are we noticing duplicate requests coming in from support teams?
- Team-level transparency on this can be hard, so this may require team ownership, not just the Tester.
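Spotting duplicate support requests does not need anything fancy; even naive grouping of ticket summaries surfaces clusters worth investigating. The ticket text below is invented, and real summaries would come from your support tool's export.

```python
# Naive duplicate detection over support ticket summaries. Data is invented;
# real summaries would come from your support queue export.
from collections import Counter

tickets = [
    "credit app stuck on submit",
    "Credit app stuck on submit!",
    "pricing page slow",
    "credit app stuck on submit",
]
counts = Counter(t.lower().strip(" !.") for t in tickets)
duplicates = {summary: n for summary, n in counts.items() if n > 1}
print(duplicates)  # {'credit app stuck on submit': 3}
```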
- Release Retro
- Are you prepared to tell a compelling story about any caveats/prod defects that were found in the release?
- Where “compelling story” means articulating your test strategy, including what was and was not tested. You should already have this created from earlier in the sprint process for each story, so minimal or no additional prep is needed.
- Is your attitude constructive rather than combative?
- Are you a listener and fixer or just a blamer?
- This includes being mindful of your speech: Your intention should be to make developers look good, by supporting their work with your testing. Be sure to compliment the solid work, before pointing out the faulty work.
- Team Retro
- Actionable Ideas: Arrive to the meeting with ideas on what can be modified (stop doing, start doing, do more, do less, etc.)
- Be very vocal in the team retros, but at the same time do it with tact and diplomacy.
- Poll The Team (Post Sprint):
- Overview: Ask the team members what they need from you, keeping in mind their context within the larger company. A Developer may ask you to be clearer about what you plan to test, while a Product Owner may want you to become more of an SME (Subject Matter Expert) in a given area.
- Developers: What more are you wanting out of me, your Tester?
- Product Owners: What can I, your Tester, do to help make your job easier?
- Scrum Masters: Is there anything you are not getting from me, your Tester, that you need in order to increase team cohesion and efficiency?
As a professional skeptic and keen observer of human nature, it is incumbent upon me to request and consider feedback from the community on this work. My goal is to give Testers something they can immediately apply; however, given the various contexts in which each of us works, it would be foolhardy to think that this framework could apply exactly to every situation. Instead, I encourage you to treat this as a guideline to help improve your day-to-day process, and focus on the parts that help fill your current gaps. Please leave a comment and let's start a dialog, as I would appreciate your insight into which parts are most meaningful and provide the greatest value-add in your own situation.
Where can the HASM model be found?
Good question. I added a link to it in the body above, as well as a call-out specifically at the beginning of the article. It is from 2015/2016, so beware of dated lingo or some parts of it that may need some ‘dusting off’. 🙂