
For the past year or so, I have been working on a new technique for creating software. I was frustrated with the disconnect between the efforts of development teams (both those I have been on and those I have helped manage) and the outcomes the software was providing when it was actually used.

TDD and other technical practices (exemplified by the Software Craftsmanship movement) are great for validating the correctness and quality of the code, but how do we establish the validity and quality of running software for its intended use? (No, I don’t believe ATDD solves that problem. I’ll dive into the differences in a bit).

Outcome Driven Software (ODS) represents a new way to create software. In short, it is a way to make software by first creating the demonstrative outcomes that will prove validity, then using those outcomes to drive the creation of the functional systems to support them.

Read the rest of this entry »

Last month I had the opportunity to experience the thrill ride of a five-minute Ignite talk at SAO's first-ever TechIgnite event. Here's the intro description and video:

Software Is Massless

Our approach to software creation is largely based on our experience of constructing and interacting with physical objects. Software is massless and moves at the speed of light, so why do we delay our high aspirations for ecstatic outcomes and allow them to fade as we labor on structure, mechanisms and infrastructure? The time has come to break these self-inflicted chains that hold us back from our true potential as wizards of creation.

Lifting the veil, I'll show you how to reconnect with the power to empathize and collaborate directly with customers and users to create the most important, meaningful parts of your software right from the start instead of falling down the rat hole of system design. Then see how this inversion of fulfillment simplifies and minimizes the creation of the system necessary to support the outcomes that have already been created.

 

The problem with ‘systems thinking’ is, well, thinking about the system. It is a trap to envision creation as a set of processes, steps, or mechanisms that need to be ‘figured out’. It typically results in either building predictable, yet uninspiring, solutions for known problems (clocks[1]), or ineffectively searching by trial and error for solutions to unknown problems (clouds[1]).

We should set aside "the system" and "thinking" entirely. Instead, focus on a moment in time after the system has already been used and ask what is required to bring about positive emotional outcomes for the participants. This is a different type of labor, one Seth Godin describes in his book Linchpin as "emotional labor" (you can read about it on Seth's blog or in his one-minute video). I refer to it more generally as just empathy.

It is effective because it allows us to directly envision what will delight and fulfill the emotional needs and wants that truly drive human behavior and satisfaction. It also liberates us creatively because we temporarily remove any focus on the constraints of what the mind believes is possible in the construction of any system.

If we start with this process of "empathetic visioning", then the subsequent use of systems thinking is done with clear knowledge of the set of outcomes that must be produced to support the vision. In my experience thus far (it forms the basis of Feature TDD), this reduces system problems either to trivial concerns or to problems well suited to traditional analytical creativity.

[1] From Karl Popper. I was recently introduced to the concept by my father, who sent me a link to Gaurav Mishra's interesting post on applying clocks and clouds to social media.

In the first post, I created the confirmational aspects of a simple event registration web site. Now I am well set up to use them to drive the functional aspects.

Implement the Functional Behavior


The Existing Event Spec

Let’s look again at the main part of event-spec.js:

var vows = require('vows'),
    assert = require('assert');

// The module under test (path assumed from the first post)
var Event = require('../lib/event');

vows.describe('Event').addBatch({
  'An Event': {
    'when asked for guests': {
      topic: function () {
        Event.getGuests(this.callback);
      },
      'should return the standard guests': function (err, guests) {
        assert.deepEqual(guests, ['Bob', 'Sally', 'Tim', 'Joe']);
      }
    },
    'when registering Bob': {
      topic: function () {
        Event.register('Bob', this.callback);
      },
      'should return Bob': function (err, guest) {
        assert.equal(guest, 'Bob');
      }
    }
  }
}).export(module);
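For reference, here is a minimal Event module that would satisfy this spec. It is only a sketch: the module path and the in-memory guest list are assumptions on my part, just enough to make the batch above pass; the implementation developed in the post itself may differ.

// lib/event.js -- a minimal sketch, just enough to satisfy event-spec.js
var guests = ['Bob', 'Sally', 'Tim', 'Joe'];

module.exports = {
  // Node-style async: the callback receives (err, result)
  getGuests: function (callback) {
    callback(null, guests.slice());
  },

  register: function (name, callback) {
    if (guests.indexOf(name) === -1) {
      guests.push(name);
    }
    callback(null, name);
  }
};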

Adding Event Registration

Read the rest of this entry »

I'd like to introduce a new software development technique that is reminiscent of TDD, but at a higher, feature-oriented level. It's based on the observation that features have both a confirmational and a functional aspect. In brief:

  • The functional aspect is the mechanism used to bring about a result.
  • The confirmational aspect is what lets the user know that their desires and goals for using the software have been met.

The key to Feature TDD is to build the confirmational aspect first, which then serves as the ‘test’ for the subsequently developed functional aspect. Here are the steps I’ll be using:

  1. Capture vision and intent through literal confirmational representation
  2. Pin the output of #1 via a test (see the sketch after this list)
  3. Make vision templates data-driven
  4. Extract data model and create functional stubs
  5. Implement functional behavior
  6. Use techniques #1 and #2 to create functional user interfaces
  7. Link #6 to functional behavior created in #5
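To make steps #1 and #2 concrete, here is a rough sketch in the vows style used later in this series. The file name confirmation.html and the confirmation wording are hypothetical: step #1 produces a literal, hand-crafted page showing the outcome the user should see, and step #2 pins that output with a test so it can't silently drift as the templates become data-driven in step #3.

// confirmation-spec.js -- hypothetical sketch of steps #1 and #2
var vows = require('vows'),
    assert = require('assert'),
    fs = require('fs');

vows.describe('Registration confirmation').addBatch({
  'The confirmation page': {
    // Step #1's literal, hand-written page is the topic under test
    topic: function () {
      fs.readFile('confirmation.html', 'utf8', this.callback);
    },
    // Step #2: pin the output so it cannot regress unnoticed
    'should tell the guest they are registered': function (err, html) {
      assert.ok(html.indexOf('You are registered') !== -1);
    }
  }
}).export(module);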

In this post, I'll cover steps #1-#4. Read the rest of this entry »

Once I realized that software features had a dual confirmational and functional aspect, I began to see that pattern at all levels. While it is much more apparent in transactional or workflow applications, it can be found anywhere in software.

Keep in mind that currently, because it is traditionally written after the functional part of the feature, the confirmational aspects may be missing or poorly implemented. Google does a pretty decent job on simple usability, so let's look at Gmail for a visible example of the confirmational versus functional aspect of features.

Sending an Email

When I send an email from Gmail by clicking the send button, I am exercising the functional aspect of sending an email. My button click causes browser code to execute, and a call gets made to the server, which uses SMTP to send that message to the recipient(s).
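As a hypothetical sketch (none of this is Gmail's actual code; the element ids, the sendViaServer stub, and the wording are invented for illustration), the two aspects of that one feature might separate like this in a click handler:

// Hypothetical sketch -- not Gmail's code. Assumes a page with a
// 'send' button and a 'status' element.
var draft = { to: 'sally@example.com', body: 'Hi Sally!' };

function sendViaServer(message, callback) {
  // Functional aspect: the mechanism that brings about the result
  // (for Gmail, a call to a server that relays the message via SMTP).
  // Stubbed out here so the sketch is self-contained.
  setTimeout(function () { callback(null); }, 0);
}

document.getElementById('send').addEventListener('click', function () {
  sendViaServer(draft, function (err) {
    if (err) { return alert('Send failed'); }
    // Confirmational aspect: lets the user know their goal was met.
    document.getElementById('status').textContent =
      'Your message has been sent.';
  });
});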

Read the rest of this entry »

I keep harping on the bad practice so prevalent in software development of focusing on functional requirements rather than how the system will prove value and make the customer happy (the “satisfaction of desires”). The first conversations we should have with our customers (or about our customers if we’re working with proxies) should start at the hypothetical point after the software has been used to bring about a result (what I call a “mechanism of fulfillment”).

In addition to providing a better understanding of what is going to make the customer happy, this technique also reveals the subtle complexities of different roles within the customer, leading to a map of the different "value constituents".

I introduced this technique in a work group meeting a few weeks back. It was a mix of product managers and developers, and the focus was on a high-level integration of two products. Beforehand, one of the product managers had attempted on his own to define some roles and tasks. Here's what he had come up with (specific technologies scrubbed and replaced with letters): Read the rest of this entry »

I've been chewing on Dan North's "Programming is not a craft" post and the subsequent reactions for the past couple of weeks. I have come to the conclusion that I agree with what I think Dan is asserting; however, the critical point isn't so much craft versus trade as it is utility versus construction. Let me take a stab at laying it out in a different manner.

Success in software depends on doing the right thing right. The SCM (Software Craftsmanship movement) has been focused on the second part of that equation. "Clean Code" ensures that whatever we are going to build, regardless of whether its utility has been validated, we build it well. In that regard, we can call the code (the implementation meant to fulfill a requirement) "well-crafted". In addition, part of that craftsmanship ensures that if there is volatility or evolution in the understanding of utility, the code can be modified in the most effective manner (usually measured in time), both in terms of delivering that change and in not breaking things that were not meant to change.

So I think we can put the gold star on “code craftsmanship” and get behind the work the SCM is doing in this regard.

The problem is with the first half of the "right thing done right" equation. Imagine for a minute that a plumber installed a toilet and all of the details of the work were executed with the expert hand of a true craftsman, except that the toilet was installed in the wrong place. Now, the house-building metaphor is problematic for software comparisons for (at least) two reasons: Read the rest of this entry »

Anywhere you turn these days, people discussing agile development are talking about "delivering value." The value mantra resonates with customers and product managers, who want more of it, and with developers, who would like to deliver more of it. But wanting to deliver more value and actually delivering value consistently are two separate things.

As an industry, we work very hard at finding ways to improve our processes.  But sometimes improving the process isn’t enough; sometimes new insights are needed that introduce fundamentally different processes.  For example, TDD (Test Driven Development) went beyond how developers wrote code and how testers then tested that code.  It evolved the process by asking how a developer could know that their code would pass the necessary tests and reordered the sequence of activities (test, then code).

Similarly, I'd like to propose a fundamental change in how we engage customers and product managers to define product requirements. If we shift from asking "What should we build?" to asking "What do we need to build to show that value was realized?", then we will start:

  • Driving the customer conversation, and hence the software, towards the realization of value
  • Creating more innovative software and providing clearer product vision

Read the rest of this entry »

Recently I was working on a project utilizing SVG/HTML5 and I wanted users to be able to move shapes around within the browser window.

My first reaction was to write a test that did something like the following:

  • Create shape
  • Simulate a mouse down on shape
  • Simulate a mouse move with controlled event clientX and clientY values
  • Test that the position of the shape had moved by those amounts (see the sketch below).
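In code, that first-reaction test might look roughly like the sketch below. makeDraggable is a hypothetical stand-in for the drag implementation under test, and the position check assumes the drag code moves the rect by updating its x/y attributes:

// Hypothetical sketch of that first-reaction test (browser-side).
var SVG_NS = 'http://www.w3.org/2000/svg';

function mouseEvent(type, x, y) {
  return new MouseEvent(type, { clientX: x, clientY: y, bubbles: true });
}

var shape = document.createElementNS(SVG_NS, 'rect');
shape.setAttribute('x', '0');
shape.setAttribute('y', '0');
document.querySelector('svg').appendChild(shape);
makeDraggable(shape); // assumed system under test

shape.dispatchEvent(mouseEvent('mousedown', 10, 10));
shape.dispatchEvent(mouseEvent('mousemove', 40, 25)); // delta: +30, +15

// Expect the drag code to have translated the shape by those deltas.
console.assert(shape.getAttribute('x') === '30', 'x should have moved by 30');
console.assert(shape.getAttribute('y') === '15', 'y should have moved by 15');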

What Is That Really Testing Anyway?

But then it occurred to me that while this would be valid for the System Under Test (SUT), I don’t have any information beyond my own use and environment.  How do I really know a user in their environment will generate those expected inputs I use to make things go green?

See, here’s the rub with the way most Agile testing starts:

  1. A team works with a PO/PM and collectively they decide on a set of functionality.  Depending on the organization, questions might range from “what do you want the application to do?” to “what value is the user wanting to get out of the application?”.
  2. They come to an agreement on a certain set of exhibited functionality characteristics. These represent the collective best guess at what will work when it is put in front of users.
  3. The team creates a system that exhibits those functionality characteristics.
  4. The work is delivered to the user, who may or may not achieve the success envisioned by the system creators.  They may be hampered by “usability” or “environment” issues.

When I write traditional TDD, I have validated that the system works as I expect it to work, but I have not validated that the user can use the system to bring about the expected result. One can see why those in the testing community, like Jon Bach, suggest that:

The Agilista’s main vehicle for testing always seems to be “checking”, not investigation. It’s weird that they are quick to say there are no silver bullets, but imply that checking-via-automation is the only way to kill werewolves.

Put simply, a functionally correct system (as determined by tests) may still turn out to be unusable, may not work in the user's environment, or may otherwise fail to achieve its intended goals. Read the rest of this entry »
