I keep harping on a bad practice that is prevalent in software development: focusing on functional requirements rather than on how the system will prove value and make the customer happy (the “satisfaction of desires”). The first conversations we should have with our customers (or about our customers if we’re working with proxies) should start at the hypothetical point after the software has been used to bring about a result (what I call a “mechanism of fulfillment”).

In addition to giving us a better understanding of what is going to make the customer happy, this technique also reveals the subtle complexities of the different roles within the customer, leading to a map of the different “value constituents”.

I introduced this technique in a work group meeting a few weeks back. It was a mix of product managers and developers, and the focus was on a high-level integration of two products. Beforehand, one of the product managers had attempted on his own to define some roles and tasks. Here’s what he had come up with (specific technologies scrubbed into alpha letters):

I’ve been chewing on Dan North’s “Programming is not a craft” post and the subsequent reactions for the past couple of weeks. I have come to the conclusion that I agree with what I think Dan is asserting; however, the critical point isn’t so much craft versus trade as it is utility versus construction. Let me take a stab at laying it out in a different manner.

Success in software depends on doing the right thing right. The SCM (the Software Craftsmanship movement) has been focused on the second part of that equation. “Clean Code” ensures that whatever we are going to build, regardless of whether its utility has been validated, we build it well. In that regard, we can call the code (the implementation meant to fulfill a requirement) “well-crafted”. In addition, part of that craftsmanship ensures that if there is volatility or evolution in the understanding of utility, the code can be modified in the most effective manner (usually measured in time), both in terms of delivering that change and in not breaking things that were not meant to change.

So I think we can put the gold star on “code craftsmanship” and get behind the work the SCM is doing in this regard.

The problem is with the first half of the “right thing done right” equation. Imagine for a minute that a plumber installed a toilet and all of the details of the work were executed with the expert hand of a true craftsman, except that the toilet was installed in the wrong place. Now, the house-building metaphor is problematic for software comparisons for (at least) two reasons:

The “Language Hunting Proficiency Scale” is an adaptation of the ACTFL Proficiency Guidelines for language speaking proficiency. In typical Language Hunting style, it can be understood in a fun, easy, “obvious” way using a party paradigm:

ACTFL Level  | LH “Party” Level            | LH Description
Novice       | Tarzan at a Party           | Single words, short vocab lists
Intermediate | Getting to the Party        | Ask questions and get answers to get needs met: “where/when is the party?”, “what should I wear?”, “what should I bring?”
Advanced     | What happened at the party? | Recount an experience, tell a story: “Tarzan drank too much jungle juice and threw a chair out the window, the cops came and took him to the drunk tank, and I had to bail him out”
Superior     | Why do we have parties?     | Discuss the social, economic, political, and cultural nature of why we have parties

Last month, both Mark Seemann and David Bernstein published a critique of TDD based…

Anywhere you turn these days, people discussing agile development are talking about “delivering value.”  The value mantra resonates with customers and product managers, who want more of it, and developers, who would like to deliver more of it.  But wanting to deliver more value and actually delivering value consistently are two separate things.

As an industry, we work very hard at finding ways to improve our processes.  But sometimes improving the process isn’t enough; sometimes new insights are needed that introduce fundamentally different processes.  For example, TDD (Test Driven Development) went beyond how developers wrote code and how testers then tested that code.  It evolved the process by asking how a developer could know that their code would pass the necessary tests and reordered the sequence of activities (test, then code).
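To make that reordering concrete, here is a minimal test-first sketch (the discount function and its test are hypothetical examples of mine, not drawn from any product discussed here), runnable with Node’s built-in assert module: the test is written first, it fails because the behavior doesn’t exist yet, and only then is just enough code written to make it pass.

```typescript
// Test-first, in miniature. The test below is written before the
// implementation; at that point it fails, which is the signal to write
// just enough code to make it pass.
import { strict as assert } from "node:assert";

// Step 1: the test, written first. `applyDiscount` is a hypothetical
// function that does not exist yet when this test is authored.
function testDiscountIsAppliedToOrderTotal(): void {
  const total = applyDiscount(100, 0.5); // 50% off a $100 order
  assert.equal(total, 50);
}

// Step 2: only now is the implementation written, and only enough of it
// to make the test above pass.
function applyDiscount(total: number, rate: number): number {
  return total * (1 - rate);
}

testDiscountIsAppliedToOrderTotal();
console.log("discount test passed");
```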

Similarly, I’d like to propose a fundamental change in how we engage customers and product managers to define product requirements. If we shift from asking “what should we build?” to asking “what do we need to build to show that value was realized?”, then we will start:

  • Driving the customer conversation, and hence the software, towards the realization of value
  • Creating more innovative software and providing clearer product vision


A former work colleague of mine, Steve, recently posted on Facebook about his enthusiasm for a “hack night” at his current company. Max Guernsey, whom I appreciate greatly as a current work colleague and progressive agile thinker, took exception to the phrase, I assume because it implies dirty code and a non-sustainable pace (working nights). Here is part of the dialog:

MAX:  There are two words that seem to me like they should never be associated with what we do in a professional capacity…

STEVE:  wrong choice of words…it’s our autonomy day(s)… a time to work in a beneficial/needed/interesting/G5 project outside the scope of our normal areas of responsibility.

MAX: “Autonomy day” sounds as awesome as “hack night” sounds terrifying.

STEVE:  i wouldnt put too much meaning in when people say hack night…i’ve heard the term much more in the open source stack…it doesn’t have the negative connotation that it does in the .Net world. hack usually means (quality) work outside of (quality) day-to-day work.

I actually think that “Hack Night”, with all of its negative connotations, is an assertion of People over Processes (“Individuals and interactions over processes and tools”) from the Agile Manifesto. A hack implies a blatant disregard for process “rules” in order to produce a quick result (“working software”).

It is a heyoka/coyote technique for balancing our discipline of following the processes to which we elect to commit by periodically questioning those commitments and processes.

Think about it.  What or who exactly are you escaping in order to be “autonomous”?  Your own rules?

It also teaches that learning outcomes may not require all of the tools in our toolkit, and our processes are always contextual.

Recently I was working on a project utilizing SVG/HTML5 and I wanted users to be able to move shapes around within the browser window.

My first reaction was to write a test that did something like the following (sketched in code right after this list):

  • Create a shape
  • Simulate a mouse down on the shape
  • Simulate a mouse move with controlled event clientX and clientY values
  • Test that the position of the shape has moved by those amounts.
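
In code, that first-reaction test would look roughly like the sketch below. Everything in it is illustrative: it assumes a browser-style test environment (a DOM with SVG support, such as a browser test runner or jsdom), and makeDraggable() is a hypothetical stand-in for the project’s actual drag-handling code, not anything from the original project.

```typescript
// A sketch of the "first reaction" test. makeDraggable() is a hypothetical
// stand-in for the real drag-handling code, and the test assumes a DOM
// with SVG support is available.
const SVG_NS = "http://www.w3.org/2000/svg";

// Minimal system under test: move the circle by the mouse delta while dragging.
function makeDraggable(shape: SVGCircleElement): void {
  let dragging = false;
  let lastX = 0;
  let lastY = 0;
  shape.addEventListener("mousedown", (e) => {
    dragging = true;
    lastX = e.clientX;
    lastY = e.clientY;
  });
  shape.addEventListener("mousemove", (e) => {
    if (!dragging) return;
    const dx = e.clientX - lastX;
    const dy = e.clientY - lastY;
    shape.setAttribute("cx", String(Number(shape.getAttribute("cx")) + dx));
    shape.setAttribute("cy", String(Number(shape.getAttribute("cy")) + dy));
    lastX = e.clientX;
    lastY = e.clientY;
  });
}

function testShapeFollowsSimulatedDrag(): void {
  // Create a shape
  const svg = document.createElementNS(SVG_NS, "svg") as SVGSVGElement;
  const shape = document.createElementNS(SVG_NS, "circle") as SVGCircleElement;
  shape.setAttribute("cx", "10");
  shape.setAttribute("cy", "10");
  shape.setAttribute("r", "5");
  svg.appendChild(shape);
  document.body.appendChild(svg);
  makeDraggable(shape);

  // Simulate a mouse down on the shape
  shape.dispatchEvent(new MouseEvent("mousedown", { clientX: 10, clientY: 10, bubbles: true }));

  // Simulate a mouse move with controlled clientX/clientY values
  shape.dispatchEvent(new MouseEvent("mousemove", { clientX: 30, clientY: 25, bubbles: true }));

  // Test that the position of the shape has moved by those amounts (+20, +15)
  console.assert(shape.getAttribute("cx") === "30", "cx should be 30");
  console.assert(shape.getAttribute("cy") === "25", "cy should be 25");
}

testShapeFollowsSimulatedDrag();
```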

What Is That Really Testing Anyway?

But then it occurred to me that while this would be valid for the System Under Test (SUT), I don’t have any information beyond my own use and environment.  How do I really know a user in their environment will generate those expected inputs I use to make things go green?

See, here’s the rub with the way most Agile testing starts:

  1. A team works with a PO/PM and collectively they decide on a set of functionality.  Depending on the organization, questions might range from “what do you want the application to do?” to “what value is the user wanting to get out of the application?”.
  2. They come to an agreement on a certain set of exhibited functionality characteristics. These represent the collective best guess at what will work when it is put in front of users.
  3. The team creates a system that exhibits those functionality characteristics.
  4. The work is delivered to the user, who may or may not achieve the success envisioned by the system creators.  They may be hampered by “usability” or “environment” issues.

When I practice traditional TDD, I have validated that the system works as I expect it to, but I have not validated that the user can use the system to bring about the expected result. One can see why those in the testing community, like Jon Bach, suggest that:

The Agilista’s main vehicle for testing always seems to be “checking”, not investigation. It’s weird that they are quick to say there are no silver bullets, but imply that checking-via-automation is the only way to kill werewolves.

Put simply, a functionally correct system (as determined by tests) may still turn out to be unusable, may not work in practice, or may otherwise fail to achieve its intended goals.

This last weekend I attended the Save Your Language conference in Vancouver BC. Hosted by Dustin Rivers, it featured the Where Are Your Keys? (WAYK) language fluency techniques taught by Evan Gardner and Willem Larsen.

WAYK is difficult to explain, but easy to learn. Evan and Willem are focused on saving endangered languages, and to do that they have ruthlessly honed a language-learning game that strives to get people as fluent as possible as quickly as possible. Because when a language exists only in a few speakers, all over the age of 60, you have, as Evan puts it, a ticking time bomb.

I now work at a macro level at my company as a software architect and my job involves a lot of facilitation.  Facilitation of information sharing.  I need developers to understand our customers, product managers to understand technology, and developers to have a shared mental model of our architecture that spans multiple products.  And like most software companies, we have time-based commitments and market opportunities.  Specifically in development, our access to key customers or domain experts is often limited (yes, ideally we “sit with the customer”, but that isn’t always the case and often who we sit with is really a customer proxy and “real” customer interaction may be less frequent).

You likely see where I am going with this.  Software development is all about making decisions.  Fluency in a domain enables us to make better decisions.

How specifically does WAYK address increasing fluency?  Through a number of “techniques”.  Techniques in WAYK correspond to Practices (though they often embody Values and Principles as well).

For example, technique “obviously” was identified when they realized that information that can be construed in multiple ways is not as readily absorbed as information that is clear, concise, and without ambiguity (it’s “obvious”). Technique “limit” recognizes that small chunks that move gradually beyond what people already know work better than information overload (like having small user stories).

We can see in these two (fundamental) techniques the principle that confused people don’t learn (or create good software) as rapidly as those who are exposed to obvious, accessible information.  And a word like “Simplicity” (among others) from XP’s value list fits in quite nicely here.

Over the coming weeks and months, I plan on exploring adapting WAYK to improve the rate of fluency.  I don’t do much day-to-day development anymore, but I did for over a decade, so some of my thoughts in that area may be more theoretical.  In my group facilitations, I’ll strive for more practical experimentation.

Marty Nelson

The Agile Architect
