It has been said that to be successful in software we need to do ‘The Right Thing Right’. The second ‘Right’ is about Technical Excellence, which already receives a great deal of attention. The first ‘Right’ is more problematic.

Iterations introduced a way to tame chaos by successively asking ‘is this right?’. User Stories and XP’s on-site customer foster collaboration, but that conversation is typically about ‘what the system should do’, with the Product Owner left, in Scrum terms, responsible for ROI.

ATDD formalized this concept into a set of system inputs and outputs meant to demonstrate that if the system can do this, then we (the development team) have ‘delivered the right thing’.
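As a rough illustration of that input/output framing, here is a minimal, hypothetical ATDD-style check in Python; the `apply_discount` function and its discount rule are invented for this sketch:

```python
# A minimal, hypothetical ATDD-style acceptance test: for an agreed input,
# the system must produce an agreed output. The discount rule is invented
# purely for illustration.

def apply_discount(order_total: float, is_member: bool) -> float:
    """System under test: members get 10% off orders over $100."""
    if is_member and order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

def test_member_discount_applied():
    # An input/output pair agreed with the customer up front.
    assert apply_discount(200.00, is_member=True) == 180.00

def test_no_discount_below_threshold():
    assert apply_discount(50.00, is_member=True) == 50.00
```

If these assertions pass, the model says, the right thing has been delivered.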

There are major weaknesses to this model:

1. The inability of customers and users to design software

Jenea Hayes of the design firm Cooper said this very well when she wrote:

> Many of our clients come to us with a history of producing bad user interfaces, and they can’t understand why when they have included every popular feature request from their users.
>
> The reason for this is very simple: users are not designers or visionaries. We should no sooner ask them to design a product than we would ask them to write the code.

2. It puts the primary focus on the technology

The conversation is about what the system should do, not about the customer and the ‘why?’ driving the need for the software in the first place.

3. It introduces a two-tiered goal system

Development teams can end up in the vulnerable and awkward position of succeeding at what they do while the software still fails overall. One has to wonder what the effect on morale is in such an environment, whether through a lack of belief in a real, meaningful goal or through an antagonistic relationship with product management.

4. It encourages mediocrity

Both parties (customer and business) have usually already invested heavily in the process, and ‘good enough’ salvages that existing effort. It lowers the bar to what is merely functional, not what is good.

These observations are not entirely new. Techniques like BDD, Specification by Example, and more recently Lean Startup have emerged, all addressing the same fundamental problem: understanding customer goals and outcomes is essential to designing the right system. These are good techniques, and more people should use them.

Yet they are all techniques for deriving a set of requirements. Desires and outcomes are still *translated* into a feature that is believed to satisfy those goals, largely for the benefit of fitting into development machinery tuned to deliver features in a pipeline. The focus is on defining an input to some function and then testing an output.

Why do we feel compelled to translate customer desires at all? Is it perhaps because we believe we must homogenize the development process into a single fundamental unit, the ‘feature’, driven by a functional requirement?

What if we instead addressed the satisfaction of customer desires head-on and as our top priority?

What does this mean exactly? I propose that we put the question to the customer directly: what do you want to be shown, at some moment after the system has been used in the traditional sense, that proves you got the value?

In fact, isn’t this exactly what we do one level down in TDD? We start by asking ‘how can I show that the code worked?’, and that test becomes the thing that proves it does.
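Here is that test-first move in miniature, as a sketch; the `slugify` function is invented purely for the example:

```python
# TDD in miniature: the test is written first and becomes the proof
# that the code works. `slugify` is invented purely for illustration.

def test_slugify_lowercases_and_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Only after watching the test fail do we write the code that passes it.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")
```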

This suggests a natural dichotomy of features: 1) those that confirm the value was realized (confirmation, or test, features), and 2) those that serve as mechanisms for creating the value (functional features). Indeed, when we look at software through this lens we begin to see both types. One of the most common examples is a confirmation feature presented after a data-entry feature, as sketched below.
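A rough sketch of the distinction, using an invented expense tracker (all names and behavior here are assumptions made up for the example): `record_expense` is the functional, data-entry feature, while `monthly_summary` is the confirmation feature that shows the value was realized:

```python
# Hypothetical expense tracker, purely to illustrate the two feature types.

expenses: list[dict] = []

def record_expense(amount: float, category: str) -> None:
    """Functional feature: the mechanism that creates the value."""
    expenses.append({"amount": amount, "category": category})

def monthly_summary() -> dict:
    """Confirmation feature: shows the user the value was realized,
    i.e. ‘here is where your money actually went this month’."""
    summary: dict[str, float] = {}
    for e in expenses:
        summary[e["category"]] = summary.get(e["category"], 0.0) + e["amount"]
    return summary

record_expense(42.50, "groceries")
record_expense(12.00, "coffee")
print(monthly_summary())  # {'groceries': 42.5, 'coffee': 12.0}
```

The summary is what proves the tracking was worth doing; the data entry is merely the mechanism.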

And just as with doing test-after instead of TDD, we may, more often than not, find the confirmation missing and wish there were better insight into the use of the system. And how often is that confirmation feature dropped, or never built at all, because it is viewed as a ‘nice-to-have’?

Focusing customer conversations in this manner is very revealing. I posted an example from the first time I used this technique. It appears to be the most direct route to talking about what is important to customers, and it is incredibly sensitive to context. It also lends itself to revealing the complexities among stakeholders, dovetailing nicely with the ‘Effect Mapping’ technique recently introduced by Gojko Adzic.

All of this leads us to one more fundamental question: if we have not been doing our ‘test’ confirmation features up front, what does that say about the amount of churn in, and the quality of, the features we normally do first: those mechanistic, complicated, expensive functional features?

Think about it: the sum total of confirmation features across all stakeholders represents the required output of the functional features. If we define this up front, we avoid adding unnecessary things to our functional features. In fact, we now have the outcome side of a traditional acceptance test.
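Extending the invented expense-tracker sketch from above (redefined here so it runs on its own, with a second, equally hypothetical stakeholder): each stakeholder’s confirmation feature contributes an assertion, and together they form the outcome side of the acceptance test for the functional feature:

```python
# Hypothetical sketch: the stakeholders' confirmation features, taken
# together, define the required output of the functional feature and so
# form the outcome side of a traditional acceptance test.

expenses: list[dict] = []

def record_expense(amount: float, category: str) -> None:
    """The functional feature under test."""
    expenses.append({"amount": amount, "category": category})

def user_confirmation() -> dict:
    """User's confirmation: where did my money go, by category?"""
    summary: dict[str, float] = {}
    for e in expenses:
        summary[e["category"]] = summary.get(e["category"], 0.0) + e["amount"]
    return summary

def accountant_confirmation() -> float:
    """Accountant's confirmation: does the total reconcile?"""
    return sum(e["amount"] for e in expenses)

def test_confirmations_define_the_acceptance_outcome():
    record_expense(42.50, "groceries")
    record_expense(12.00, "coffee")
    assert user_confirmation() == {"groceries": 42.50, "coffee": 12.00}
    assert accountant_confirmation() == 54.50
```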

One has to wonder whether the introduction of iterations (not increments) was something of an unnecessary step. I like iterations as a safety net, or as a method of successive approximation when dealing with an unknown problem or a qualitative issue. But as a way of validating that we built the right thing, iteration really should be a technique of last resort.

That’s the theory. In the next post, I’ll show an example of how this works in practice.
