Explore capabilities, not features

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

Exploratory testing requires a clear mission. The mission statement provides focus and enables teams to triage what is important and what is out of scope. A clear mission prevents exploratory testing sessions from turning into unstructured playing with the system. As software features are implemented and user stories become ready for exploratory testing, it’s only logical to set the mission for exploratory testing sessions around new stories or changed features. Although it might sound counter-intuitive, story-oriented missions lead to tunnel vision and prevent teams from getting the most out of their testing sessions.

How to get the most out of Given-When-Then

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

Behaviour-driven development has become increasingly popular over the last few years, and with it the Given-When-Then format for examples. In many ways, Given-When-Then has become the de-facto standard for expressing functional checks using examples. Introduced by JBehave in 2003, this structure was intended to support conversations between teams and business stakeholders, but also to lead those discussions towards a conclusion that would be easy to automate as a test.

Given-When-Then statements are great because they are easy to capture on whiteboards and flipcharts, but also easy to transfer to electronic documents, including plain text files and wiki pages. In addition, there are automation tools for all popular application platforms today that support tests specified as Given-When-Then.

On the other hand, Given-When-Then is a very sharp tool and, unless handled properly, it can hurt badly. Without understanding the true purpose of this way of capturing expectations, many teams create tests that are too long, too difficult to maintain, and almost impossible to understand. Here is a typical example:

    Scenario: Payroll salary calculations

    Given the admin page is open
    When the user types John into the 'employee name'
    and the user types 30000 into the 'salary'
    and the user clicks 'Add'
    Then the page reloads
    And the user types Mike into the 'employee name'
    and the user types 40000 into the 'salary'
    and the user clicks 'Add'
    When the user selects 'Payslips'
    And the user selects employee number 1
    Then the user clicks on 'View'
    When the user selects 'Info'
    Then the 'salary' shows 29000
    Then the user clicks 'Edit'
    and the user types 40000 into the 'salary'
    When the user clicks on 'View'
    And the 'salary' shows 31000

This example might have been clear to the person who first wrote it, but its purpose is unclear – what is it really testing? Is the salary a parameter of the test, or is it an expected outcome? If one of the bottom steps of this scenario fails, it will be very difficult to understand the exact cause of the problem.

Spoken language is ambiguous, and it’s perfectly OK to say ‘Given an employee has a salary …, When the tax deduction is…, then the employee gets a payslip and the payslip shows …’. It’s also OK to say ‘When an employee has a salary …, Given the tax deduction is …’ or ‘Given an employee … and the tax deduction … then the payslip …’. All those combinations mean the same thing, and they will be easily understood within the wider context.

But there is only one right way to describe those conditions with Given-When-Then if you want to get the most out of it from the perspective of long-term test maintenance.

The sequence is important. ‘Given’ comes before ‘When’, and ‘When’ comes before ‘Then’. Those clauses should not be mixed. All parameters should be specified with ‘Given’ clauses, the action under test should be specified with the ‘When’ clause, and all expected outcomes should be listed with ‘Then’ clauses. Each scenario should ideally have only one ‘When’ clause that clearly points to the purpose of the test.
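
For illustration only, here is one way the payroll example above might look once restructured along these lines. The intent of the original scenario is not clear, so the deduction rule and the figures below are assumptions invented purely for the sake of the example:

    Scenario: Standard tax deduction is applied to the payslip salary

    Given the employee John was added with a salary of 30000
    And the standard tax deduction was set to 1000
    When John's payslip is generated
    Then the payslip 'salary' will show 29000

Each remaining check from the original script, such as editing a salary, would then get its own similarly focused scenario.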

Given-When-Then is not just an automation-friendly way of describing expectations; it’s a structural pattern for designing clear specifications. It’s been around for quite a while under different names. When use cases were popular, it was known as Preconditions-Trigger-Postconditions. In unit testing, it’s known as Arrange-Act-Assert.

Key benefits

Using Given-When-Then in sequence is a useful reminder of several important test design ideas. It suggests that pre-conditions and post-conditions need to be identified and separated. It suggests that the purpose of the test should be clearly communicated, and that each scenario should check one and only one thing. When there is only one action under test, people are forced to look beyond the mechanics of test execution and really identify a clear purpose.

When used correctly, Given-When-Then helps teams design specifications and checks that are easy to understand and maintain. As tests will be focused on one particular action, they will be less brittle and easier to diagnose and troubleshoot. When the parameters and expectations are clearly separated, it’s easier to evaluate whether we need to add more examples, and to discover missing cases.

How to make it work

A good trick that prevents most accidental misuse of Given-When-Then is to use the past tense for ‘Given’ clauses, the present tense for ‘When’ and the future tense for ‘Then’. This makes it clear that ‘Given’ statements are preconditions and parameters, and that ‘Then’ statements are postconditions and expectations.

Make ‘Given’ and ‘Then’ passive – they should describe values rather than actions. Make sure ‘When’ is active – it should describe the action under test.

Try having only one ‘When’ statement for each scenario.
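
To make these tips concrete, here is a small sketch on an invented domain – the discount rule and the amounts are assumptions, not taken from any real system – with the tense of each clause noted in a comment:

    Scenario: Discount applied to a repeat customer's order

    Given the customer was marked as a repeat customer    # past tense – precondition
    And the repeat-customer discount was set to 10%       # past tense – parameter
    When the customer checks out an order worth 200       # present tense – the action under test
    Then the order total will be 180                      # future tense – expected outcome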

Quick Reference: Fifty Quick Ideas To Improve Your User Stories

For teams that need a bit of inspiration during user story refinement workshops, here is a quick-reference online mind map with all the ideas from Fifty Quick Ideas To Improve Your User Stories. The mind map contains a short description and a reminder image for each idea, grouped into categories. Just tap/click a category to open up the details of the ideas.

If you like this but you’d prefer something physical rather than electronic, we partnered with DriveThruCards to create a poker-style card deck with all idea summaries. For more info, see drivethrucards.com

Introducing Bug Magnet – an exploratory testing helper

I just compiled a bunch of checklists and notes I often use for exploratory testing into a handy Chrome extension. Bug Magnet provides convenient access to common problematic values and edge cases, so you can keep them handy during exploratory testing sessions. Just right-click an input field!

The extension introduces only a tiny overhead per page (<1k), has no third-party library dependencies and is completely passive, so it does not interfere with your web app’s execution in any way. It works on input fields, text areas and content-editable DIVs. Of course, it’s completely open source (MIT license), so you can easily extend it with your own configuration.

To install the extension, and for more info, head over to http://gojko.github.io/bugmagnet.

The extension started as a way to scratch my own itch and make common experiments easily accessible, both as test data and as an inspiration for similar ideas. If this sounds useful, propose improvements :)

How to get the most out of impact mapping

Ingrid Domingues, Johan Berndtsson and I met up in July this year to compare the various approaches to Impact Mapping, review community feedback and investigate how to get the most out of this method in different contexts. The conclusion was that there are two key factors to consider for software delivery using impact maps, and recognising the right context is crucial to get the most out of the method. The two important dimensions are the consequences of being wrong (making the wrong product management decisions) and the ability to make investments.

These two factors create four different contexts, each calling for a different approach:

  • Good ability to make investments, small consequences of being wrong – Iterate: Organisations will benefit from taking some initial time to define the desired impact, and then exploring different solutions with small and directed impact maps that help design and evaluate deliverables against the desired outcome.
  • Poor ability to make investments, small consequences of being wrong – Align: Organisations will benefit from detailing the user needs analysis in order to make more directed decisions, and to drive prioritisation for longer pieces of work. Usually only parts of the maps end up being delivered.
  • Good ability to make investments, serious consequences of being wrong – Experiment: Organisations can explore different product options and user needs in multiple impact maps.
  • Poor ability to make investments, serious consequences of being wrong – Discover: The initial hypothesis impact map is refined through user studies and user testing that converge towards the desired impact.

We wrote an article about this. You can read it on InfoQ.