Abstract: I frequently get asked how I interview testers, be it anyone from exploratory to automation and anywhere within that spectrum (i.e. including “Toolsmiths”, see Richard Bradshaw’s work here for context on that term). What the person is really asking me, though, is, “How do you know someone who interviews well will actually perform well once hired?” The real answer is, ‘You don’t’. You can use interview models to help reduce the unknowns, but ultimately, if you’ve been a hiring manager long enough, you’ve hired some duds and had to manage them out.

I ultimately try to talk to people about the number one thing that drives good testing, and that is the desire and capacity to learn. Desire alone isn’t enough. Testing is, after all, learning at its core. We’re scientists, not showstoppers. We’re explorers, not Product Managers. Our passion lies within the journey, not so much the end or counting the number of things we found along the way (unless you’re doing Domain Testing – I jest).

So, as a boilerplate for a year or so, I used Dan Ashby’s interview model as my go-to when doing phone screens and in-person interviews. After a few more years, I realized that my interview process, like my testing process, must be continually adapting and breaking so that it can reform and adapt to the contexts of whatever company or product in which I work. The major shifts in my interview process have coincided with the times I changed companies.

Below are my current ways of ‘weeding out the weak’, so to speak, and saving myself time when it comes to finding passionate talent in testing and automation (notice that, other than this sentence, you won’t find any questions around specific tools like Selenium, SoapUI, etc. Good testing is tool agnostic). The sections are divided below: I typically use Phase I during the initial phone screens, and Phase II when they come in person. Sometimes I dive into Phase II on the phone if I get a feeling they are ahead of the curve.

<Note: The term “agile” is intentionally typed as ‘little-a agile’, not ‘big-A Agile’. We’re talking about the ability to flex and adapt, not the marketing monolith that is peddled heavily right now.>
Phase I: Initial Weed-Out Questions for Testers (in a Modern Software Development Environment)
- What is good testing?
- Poor answers: Clicking through a product to make sure the quality is good and all of the requirements are met.
- This person likely has a shallow definition of what it means to test. This is Claims Testing, sometimes called human checking, but it does not indicate an understanding of deep testing. This candidate is also a Product Owner at heart if they think they “assure” quality, rather than cast light on risks so that others (Product Owners/Business) can assure what does or doesn’t meet the level of quality desired.
- Acceptable answers: Exploration of a product, experimentation so that we can learn about what’s happening in a product, casting light on any risks that might threaten the value of the product or timing of the project, and making those risks known to our stakeholders so they can make decisions on how to mitigate that risk (e.g. fix, ignore, backlog, etc.)
- This candidate has at least a basic understanding of their role as a tester within a larger organization. Their statement about bringing risks to light but not making decisions on them is healthy, and speaks to a maturity that avoids the gatekeeper mindset.
- What is the role of a tester in an agile organization?
- Poor answers: Find bugs, write test cases, break things, stop releases, get certifications
- Shows the gatekeeper mindset still exists, along with a heavy administrative focus that links a tester’s value to test-case writing or bugs found, instead of to providing customer value to the end user through holistic testing approaches.
- Acceptable answers: Explore for value to the customer even if my PO didn’t mention it in the acceptance criteria, challenge the veracity of the acceptance criteria, operate under the assumption that Product has almost certainly missed something when creating User Stories, and use testing models to fill those gaps in my thinking so I am not just relying on my mental model/experience to do good testing.
- This displays intellectual humility in understanding that their thinking is inherently flawed in some respect – which it is for everyone. It also shows a healthy understanding of testing and the flexibility to pivot for the purpose of providing customer value, not just checking off acceptance criteria.
- When does the testing process start and end in an agile scrum team?
- Poor answers: After code complete, when the Dev hands off the code to QA, after a deploy we start testing, and then we stop when we cover everything.
- This shows that they believe testing is something you “start” after development, and are still in a Waterfall mindset when it comes to what testing actually is (i.e. not just clicking around a product). Also, this answer implies that we test until we as testers are satisfied (unhealthy), not until Product is satisfied (healthy).
- Acceptable answers: Throughout the entire SDLC process – this starts in the portfolio planning stages as we should have a QA/Test lead pairing with Dev, Product and Architecture to discuss risks up front as we initially design the product. If we’re waiting until the sprint to start testing, then we’ve missed a lot of opportunities to help our stakeholders cast light on risk, much of which can be uncovered earlier in the process before any code is actually written.
- This shows the candidate has a firm understanding of the fact that risk exposure and mitigation don’t start and end at fixed points, but are rather ongoing. I would also ask follow-up questions around how they did this at previous companies, because it shows a high degree of maturity and leadership if they injected themselves into the design phase and not just down the line in the scrum-team portion of testing. In fact, a good tester in an agile org will be frustrated and may even have a story about leaving a company that did not allow them to participate earlier in the process.
- With the world of agile testing constantly changing, what meetups or conferences do you attend, and what books do you read on the latest practices that would make us (your company) want to hire you over any other tester?
- Poor answers: I haven’t read any books or attended meetups, but I have 20 years of experience and I Google when needed to solve problems, as well as read Guru99 which has articles on testing and development.
- Years of experience does not make someone a good tester, nor does ad-hoc Googling show a learning mindset, as everyone has to do that as part of their job anyway. Also, when you Google “software testing”, the first non-ad hit that comes up is Guru99, so for obvious reasons this is a questionable answer when given alone.
- Acceptable answers: Every month or two I go to a local meetup, here are a few blogs I read regularly <names 3 or 4 sources>, one of my favorite books on testing is <names title and author and tells you about something they learned from it>, I follow people on Twitter <like it or not, this is where the testing community lives and thrives! E.g. Link>
- This shows that they are constantly learning (the #1 skill needed for good testing is learning – getting tired of hearing this yet? No deep technical questions need to be asked to determine if someone is in the right mindset for a career in the test industry, despite what many managers think – now, depth of knowledge for a specific role is another story). It also shows they are immersing themselves in the testing community and finding out what other testers and companies are doing to stay up to date on the latest tools, practices and mindsets around testing and agility, rather than waiting for their manager or the company to bring that to them.
Phase II: Advanced Quality & Testing Theory topics
If the candidate breezes through these with flying colors, I then go into the deeper topics below, which typically can only be answered confidently by true practitioners of the testing skill-craft.
- Familiarity with the Four Schools of Software Testing (and why Context-Driven is healthier than the other three)
- Understanding of good/bad testing measurement and metrics (e.g. DLR, Defect Density, First/Second/Third order measurements and when to use each appropriately)
- Testing heuristics (e.g. HTSM model for testing)
- Explain good Test Reporting (i.e. the 3-Part Testing Story/Braid)
- Testers are not Gatekeepers (i.e. Product vs Tester responsibility understanding)
- Regression practices (RCRCRC model, as well as Combinatorial testing practices to decrease process waste – see the sketch after this list)
- Testing Oracles (FEWHICCUPPS model for Product consistency)
- Quality Criteria: Capability, Reliability, Usability, Charisma, etc (ability to give example of test types in at least a few of these)
- Testing Techniques: e.g. Can they explain the difference between Scenario Testing and Flow Testing. What is Domain Testing? Etc.
- Agility within testing (shift left, pairing early, mindset of not having to wait for code to start testing, Shake ‘N’ Bake pair-testing process)
- How Exploratory Testing differs from Ad-Hoc or Random Testing (and why that matters – i.e. Exploratory testing should have a structure and they should be able to speak to that)
- Test Chartering and SBTM (Session Based Test Management)
- The Dead Bee Heuristic for problem solving and ensuring issues are actually fixed
- Artifact generation: Lean Test Strategy documentation vs Heavy Test Cases (i.e. hopefully the former, so they spend more valuable time testing rather than documenting)
- Understanding the difference between Checking and Testing
- What is Galumphing and why is it important in testing?
- What are the two pillars of Testability (Observability and Controllability) and can they explain why both Devs and Testers should care about them
- Good understanding of the difference between ‘Best’ and ‘Contextual’ Practices
- Good understanding of the detriment of the ISO/IEC/IEEE 29119 testing standard (+ other standards from the consortium or dogmatic static models)
- Bonus: Familiarity with the RST namespace (how and why this group of the testing industry has broken off from traditional norms, shedding legacy habits and mindsets, etc)
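As an aside on the combinatorial testing item above, here is a minimal sketch of the idea I hope a strong candidate can walk through: a greedy all-pairs reduction. It is written in Python with hypothetical parameters (browser, os, account) purely for illustration, not as an endorsement of any particular tool. Covering every pair of parameter values usually catches the interaction bugs we care about with far fewer runs than the full cartesian product:

```python
from itertools import combinations, product


def pairwise_suite(parameters):
    """Greedy all-pairs reduction: repeatedly pick the full combination
    that covers the most value pairs not yet covered by the suite."""
    names = list(parameters)
    values = [parameters[n] for n in names]

    # Every pair of values across two different parameters that must appear
    # in at least one test: (param index i, value a, param index j, value b).
    uncovered = set()
    for (i, vals_i), (j, vals_j) in combinations(enumerate(values), 2):
        for a, b in product(vals_i, vals_j):
            uncovered.add((i, a, j, b))

    candidates = list(product(*values))
    suite = []
    while uncovered:
        def gain(combo):
            return sum(1 for (i, a, j, b) in uncovered
                       if combo[i] == a and combo[j] == b)

        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # nothing left that any candidate can still cover
        suite.append(dict(zip(names, best)))
        uncovered = {(i, a, j, b) for (i, a, j, b) in uncovered
                     if not (best[i] == a and best[j] == b)}
    return suite


# Hypothetical parameters for a web product, purely for illustration.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["free", "paid"],
}

print("Exhaustive combinations:", len(list(product(*params.values()))))  # 18
tests = pairwise_suite(params)
print("All-pairs covering suite:", len(tests))  # around 9 for these values
for test in tests:
    print(test)
```

What I listen for is not the code itself (good testing is tool agnostic, remember), but the trade-off: can the candidate explain why “every pair” is usually an acceptable stand-in for “every combination”, and when it isn’t?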
Conclusion
People who react well to the above more advanced topics, displaying that learning mindset (even if they do not come across as experts on the specific question asked), are typically the ones you want in your shop. Of course, you must be sure that your in-person interview process has a good element of letting them experiment in the interview to see how they think. Many times I open our web product on a laptop and put it in front of them to see what they do. Do they sit there without touching it and just speak theory, or do they grab the laptop, pull it toward them, and start playing with the product? The latter usually tells me they have an experimentation mindset and willingness to learn, and it also leads to better questions from them about our business needs and desires.
At the end of the day, for most projects, I value a growth mindset and passion for learning over someone with 20 years of experience who thinks they have everything already figured out and little left to learn. Intellectual humility, the belief that one’s thinking is inherently flawed and has gaps, is key to being a good scientist, and thus a good tester. Some testers have even come to call themselves ‘Professional Skeptics’ to sum up that scientific, humble and critically thinking mindset in a single phrase – and I like it. If you’ve been hiring for at least some time, you’ve probably had people who interviewed well but eventually fell short of your expectations; I know I have, and I’ve had to manage them out. That is to say, I do not present this information as a silver bullet of sorts. We are still humans, thus this blog post is yet another flawed model from which you must adapt your hiring process, discarding or keeping what you feel is best suited to your environment. I am eager to hear your thoughts on what common interview behaviors and attributes you’ve noticed across your good hires that did live up to, or grew beyond, your initial expectations.