Recently I was working on a project using SVG/HTML5, and I wanted users to be able to move shapes around within the browser window.

My first reaction was to write a test that did something like the following (roughly sketched in code after the list):

  • Create a shape
  • Simulate a mouse down on the shape
  • Simulate a mouse move with controlled event clientX and clientY values
  • Test that the shape’s position had moved by those amounts.
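
Roughly, in TypeScript, that first-reaction test might look like the sketch below. The createShape() and shapePosition() helpers are hypothetical stand-ins for whatever the real application exposes; the DOM event plumbing is the interesting part.

  // Hypothetical helpers provided by the application under test.
  declare function createShape(): Element;
  declare function shapePosition(shape: Element): { x: number; y: number };

  function simulateDrag(shape: Element, dx: number, dy: number): void {
    // Fire the same DOM events a real mouse drag would, with controlled clientX/clientY values.
    shape.dispatchEvent(new MouseEvent("mousedown", { clientX: 0, clientY: 0, bubbles: true }));
    document.dispatchEvent(new MouseEvent("mousemove", { clientX: dx, clientY: dy, bubbles: true }));
    document.dispatchEvent(new MouseEvent("mouseup", { clientX: dx, clientY: dy, bubbles: true }));
  }

  // The test: move by (50, 25) and verify the shape followed.
  const shape = createShape();
  const before = shapePosition(shape);
  simulateDrag(shape, 50, 25);
  const after = shapePosition(shape);
  console.assert(after.x - before.x === 50 && after.y - before.y === 25,
    "shape should move by the simulated mouse deltas");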

What Is That Really Testing Anyway?

But then it occurred to me that while this would be a valid test of the System Under Test (SUT), it gives me no information beyond my own use and environment. How do I really know that a user, in their environment, will generate the expected inputs I use to make things go green?

See, here’s the rub with the way most Agile testing starts:

  1. A team works with a PO/PM and collectively they decide on a set of functionality.  Depending on the organization, questions might range from “what do you want the application to do?” to “what value is the user wanting to get out of the application?”.
  2. They come to an agreement on a certain set of exhibited functionality characteristics. These represent the collective best guess at what will work when it is put in front of users.
  3. The team creates a system that exhibits those functionality characteristics.
  4. The work is delivered to the user, who may or may not achieve the success envisioned by the system creators.  They may be hampered by “usability” or “environment” issues.

When I write traditional TDD tests, I have validated that the system works as I expect it to, but I have not validated that the user can use the system to bring about the expected result. One can see why those in the testing community, like Jon Bach, suggest that:

The Agilista’s main vehicle for testing always seems to be “checking”, not investigation. It’s weird that they are quick to say there are no silver bullets, but imply that checking-via-automation is the only way to kill werewolves.

Put simply, a functionally correct system (as determined by tests) may still turn out to be unusable, may not work in the user’s environment, or may otherwise fail to achieve its intended goals.

The Underlying Assumption as Hypothesis

We generally start by creating an assumption about a set of functionality that we believe will provide value. Instead of an assumption, can we state a hypothesis as an executable scenario that qualifies as a test?

Specifically, in my example case, instead of asking “how can I test that the system handles shape moves correctly?”, I ask “how will we (the user and I) know that the user can correctly move shapes (in their environment)?”

Putting on my thinking cap, I came up with a simple exercise for the user to perform; here is a screenshot:

Drag the sphere to the Bullseye

This addressed two critical issues with this bit of interactivity:

  • Usability – the user’s ability to use or understand the application
  • Deployment Environment – HTML5/SVG browser support in this specific case

It does introduce additional questions about what to do in the case of failure (should I be emailed? etc.), but let’s assume that I’m fine with not allowing the user to proceed with the app. It’s a gating test that qualifies the user and environment as suitable. Now I have a feedback mechanism for usability, and I’m eliminating large areas that don’t need exploratory testing (or I can drive exploration based on that feedback).
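
As a rough sketch, a gating check for the exercise could look like this in the browser. The element references, the hit radius, and the enableApp() callback are illustrative assumptions rather than my production code.

  const HIT_RADIUS = 20; // how close (in px) the sphere's centre must land to the bullseye's centre

  function centre(r: DOMRect): { x: number; y: number } {
    return { x: r.left + r.width / 2, y: r.top + r.height / 2 };
  }

  function watchForBullseye(sphere: Element, bullseye: Element, enableApp: () => void): void {
    // After every pointer release, check whether the sphere landed on the bullseye;
    // the drag behaviour itself lives in the application code.
    document.addEventListener("pointerup", () => {
      const s = centre(sphere.getBoundingClientRect());
      const b = centre(bullseye.getBoundingClientRect());
      if (Math.hypot(s.x - b.x, s.y - b.y) <= HIT_RADIUS) {
        enableApp(); // user + browser have demonstrated a successful drag
      }
    });
  }

A success here tells me both that the user understood the instruction and that their browser supports the interaction.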

My test is also automated. I’m not testing the SUT; I’m testing User + System + Environment, so I need a test that runs under those conditions. It feels different from a CI build: a CI build gives a sense of control prior to release, but our target environment contains elements beyond our control. By placing testing assets in the actual target environment, we extend our reach; testing is now included as part of the whole system.
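
For instance, the deployed validation asset could feed its result back to us, making the user’s actual environment part of the feedback loop. The endpoint name and payload shape below are assumptions for illustration only.

  interface ValidationResult {
    check: string;       // e.g. "drag-sphere-to-bullseye"
    passed: boolean;
    userAgent: string;   // captures the user's actual environment
    when: string;        // ISO timestamp
  }

  function reportValidation(result: ValidationResult): void {
    // sendBeacon posts the result without blocking or delaying the user's session.
    navigator.sendBeacon("/validation-results", JSON.stringify(result));
  }

  reportValidation({
    check: "drag-sphere-to-bullseye",
    passed: true,
    userAgent: navigator.userAgent,
    when: new Date().toISOString(),
  });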

Patterns?

This “tutorial” pattern seems appropriate for testing interactivity. I can also see a number of broad patterns for specific deployment environments, such as health monitoring and environment-checker style utilities.
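
An environment checker for my case might be as small as a feature-detection test for SVG support. This is only a sketch; which checks matter depends on which features the application actually relies on.

  function browserSupportsSvg(): boolean {
    // Creating an element in the SVG namespace only yields an SVGSVGElement
    // in browsers that actually implement SVG.
    const el = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    return typeof SVGSVGElement !== "undefined" && el instanceof SVGSVGElement;
  }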

I was talking with Ward Cunningham at AgileBlur last month about this approach, and he recalled a complicated financial application he had worked on where he created a pre-submission form that presented a synopsis of the data entered by the user, including the expected results of the complex calculations to be applied to that data. Working with another colleague on a transactional application, the approach we settled on was a report used as a kind of receipt, showing the user the end result of the functionality.
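
As a hedged sketch of that pre-submission synopsis idea (the field names and the simple-interest calculation are purely illustrative, not Ward’s actual application):

  interface LoanEntry {
    principal: number;
    rate: number;   // annual rate, e.g. 0.05
    years: number;
  }

  function buildSynopsis(entry: LoanEntry): string {
    // Echo the entered data back alongside the result the system will calculate
    // from it, so the user can validate the outcome before submitting.
    const interest = entry.principal * entry.rate * entry.years; // simple interest, for illustration only
    return [
      `You entered: principal ${entry.principal}, rate ${entry.rate}, term ${entry.years} years.`,
      `We will apply interest of ${interest.toFixed(2)}.`,
      `Submit only if this matches what you expect.`,
    ].join("\n");
  }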

Benefits of Being in the Driver’s Seat

I do want to stress that I am advocating that the “how will we know?” question be asked first, not treated as an additional layer of work done after the fact (a.k.a. documentation). Putting these concerns first gives them a proportionately larger influence over the overall product.

A Bit of Agile Theory

One of the fundamental tricks in the agile bag is reordering activities to avoid rework cycles by constraining downstream solutions and designs. This is the “Driven” part of xDD. I first came across a good explanation of this concept in J. B. Rainsberger’s post on “Queuing Theory” (see http://www.jbrains.ca/permalink/how-test-driven-development-works-and-more). If we start approaching a development problem by coding functionality, the potential solution space is unlimited:

If instead we write the test first, we constrain the possible solutions to only those that pass the test:

Test First Sequence

In the code-first case, iterative testing is required until enough of the initial solution has been brought within the constraints of the tests. With a test-first approach, only code within the constraints of the tests is allowed to emerge.

Applying Theory To Practice

At a macro level, current Agile development practices follow these steps:

  1. Create a system around assumptions about what will achieve value.
  2. Deploy that system and see if the value assumptions were correct.
  3. Respond to feedback from #2 by reworking the parts of #1 that fell outside of the acceptable solutions.

Leveraging what we already know about “Queuing Theory”, let’s reduce or eliminate rework by establishing upfront constraints that influence downstream solutions and designs. We do this by starting with the question “how will we know?”

I’m finding thus far that this question exposes many hard integration, usability, and deployment-validation issues that often cause big problems when left to the end of a release. Better to deal with these up front than to carry an unknown liability of rework at the end. This is a sign to me that while these questions are tough and difficult to answer, they are the right ones to ask, and they foster the right conversation with the PO/PM.

The added step is to create executable validation pieces; some of these may already exist in your application.

I plan on posting some more examples. I’m thinking of trying the classic “bowling game” example. What experiences have you had asking validation questions up front? How would Validation Driven Development work for your team?
