To Fail Fast, you need to know when to Fail
Brett Henderson, 2008-03-27

One of the basic principles of XP is to provide value. To achieve this, you do the stories that bring the most value first. Now if those stories are risky or big, you want to fail fast and move on. However, in software development most things are possible… it's just a matter of time. So how do you fail?

At Ephox we are extremely good at making the impossible possible; just take a look at EditLive!. The things we did in the early days were not possible within the limitations of the Java APIs, so we found ways around them. As a result, it is possible to keep working on a feature well beyond the point where the investment pays off.

What we need is a way of identifying when to stop investing in a given feature, mark it as "failed" and move on to the next most valuable feature. To do that, however, you need to know the conditions under which you consider something to have failed.

We recently moved to a tri-estimate approach[1] where all stories include a Best Case, Worst Case and Most Likely estimate. These estimates give us not only an indication of the risk associated with a feature, but also a basis for determining "failure".

My framework for "failing" a story is as follows. Once we hit the Most Likely estimate, we re-evaluate the Worst Case estimate with the knowledge gained so far. If the revised Worst Case is still an acceptable investment in the feature, the new value becomes the "line in the sand". Once we hit that revised Worst Case, we review again, asking how much longer completion will take. If the remaining time is acceptable, it becomes the "failure" time. Once that time has expired, the story fails.

So, for example, say figures of 20 and 40 hours are given for the Most Likely and Worst Case estimates respectively. At 20 hours, the team revises the Worst Case to 45 hours. It is accepted that the feature's value is worth 45 hours, so development continues. At 45 hours, the team says there are an additional 3 hours to go to completion. That final amount of effort is short enough that development continues for the 3 additional hours. If the story is not complete at the end of that time, we "fail" it.
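The checkpoint sequence and the worked example can be sketched as a small function. This is purely illustrative; the function name, parameters and status strings are assumptions, not from any real tool:

```python
def evaluate_story(hours_spent, most_likely, worst_case,
                   revised_worst_case=None, time_to_complete=None):
    """Status of a story under the checkpoint framework.

    revised_worst_case is set at the Most Likely checkpoint;
    time_to_complete is set at the (revised) Worst Case checkpoint.
    """
    if hours_spent < most_likely:
        return "in progress"
    # Most Likely checkpoint reached: the revised Worst Case (if any)
    # becomes the "line in the sand".
    line_in_sand = (revised_worst_case
                    if revised_worst_case is not None else worst_case)
    if hours_spent < line_in_sand:
        return "in progress"
    # Worst Case checkpoint reached: ask how much longer to completion.
    if time_to_complete is None:
        return "review"
    if hours_spent < line_in_sand + time_to_complete:
        return "final stretch"
    return "failed"

# The worked example above: 20/40 hr estimates, Worst Case revised to
# 45 hrs at the 20 hr mark, 3 hrs to go at the 45 hr mark.
print(evaluate_story(48, 20, 40, revised_worst_case=45, time_to_complete=3))
# prints "failed"
```

The key property is that each checkpoint narrows the decision: spending time is fine until a checkpoint, at which point someone has to explicitly agree to the next, smaller budget.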

Now of course, there are exceptions to this, but the aim is to identify when a feature is just going to keep going and stop it.

[1] – Doug discussed the idea in his article Estimations – Best, Expected and Worst.

XP Practice Champions
Brett Henderson, 2007-10-08

Ephox adopted eXtreme Programming (XP) at the beginning of 2006 when Doug joined the team. As part of our commitment to continuous improvement, we hold XP Adoption Reviews every few months. The aim of these is to review the XP practices and rate our success in adopting them. From this we choose 3 to focus on for the next few months.

In our first review, we identified Test Driven Development (TDD), Daily Stand-ups and Iteration Demos as the practices that would give us the most value if we focussed on them.

To inject some fun into our practice focus, we introduced the "Talking Car"[1] for Stand-ups. For TDD, one member of each team volunteered during the weekly retrospective to be the TDD representative for the following week; their job was to remind the team during the stand-up that we were focussed on TDD. Finally, we made it the responsibility of the "client" to hold Iteration Demos for the business.

In our most recent XP review we have identified Root Cause Analysis, Planning Game and Weekly Iterations as the practices to focus on. We will continue to work on improving the previously identified practices, but we felt that as a team, we would gain the most value through the new practices.

The question is, how do we remind the team of our commitment to improving these practices?

Atlassian recently posted about their Agile Process and one thing they mentioned caught my attention. Chris explained that they "have practice champions for many of the more challenging practices".

I'm really interested in how we could use Practice Champions to help focus on and improve the 3 practices we have chosen. I'm hoping these champions can bring some fun and energy to the adoption process and galvanize the team behind improving some fundamental XP practices.

[1] – I'll explain the "Talking Car" in a future post.

Retrospective Deltas
Brett Henderson, 2007-06-18

I was recently reading a post on InfoQ, "Frequent Retrospectives Accelerate Learning and Improvement". As its title suggests, the key message of the article is that holding frequent retrospectives aids the learning and improvement process.

When we adopted XP (eXtreme Programming), we undertook to hold a retrospective at the beginning of each development iteration, preceding the planning game. With weekly iterations, we get a chance to reflect on the previous week's pluses and to formulate some changes/improvements (deltas) identified from the week.

In the article the author made the following comment:

"In fact, looking back is only half of the retrospective pattern. Reflection is not learning. To bring about learning and improvement, it is necessary to identify areas for improvement and explicitly document a brief action plan for which the team becomes accountable."

We regularly come up with a number of deltas and then choose the highest-priority ones to tackle during the next iteration. We even assign someone to "own" each task. So we are on track to learn and improve; what we struggle with, however, is having more deltas than can be done in a single iteration.

Currently we review the previous retrospective's deltas at the beginning of each retrospective and copy any unaddressed ones onto the new delta list. The problem with this is that the list keeps getting bigger.
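The carry-over pattern described above can be sketched in a few lines. The per-iteration capacity, priority scheme and data shapes here are hypothetical, purely to show how the list grows whenever deltas arrive faster than they are tackled:

```python
def run_retrospective(carried_over, new_deltas, capacity=2):
    """Merge carried-over deltas with this week's new ones, tackle the
    highest-priority ones up to capacity, and carry the rest forward."""
    backlog = sorted(carried_over + new_deltas, key=lambda d: d["priority"])
    return backlog[:capacity], backlog[capacity:]

# Simulate three weekly retrospectives raising 3, 2 and 3 new deltas.
carried = []
for week, arrivals in enumerate([3, 2, 3], start=1):
    new_deltas = [{"priority": week, "name": f"wk{week}-delta{i}"}
                  for i in range(arrivals)]
    tackled, carried = run_retrospective(carried, new_deltas)
    print(f"week {week}: tackled {len(tackled)}, carried over {len(carried)}")
```

With a fixed capacity of 2 and a total inflow of 8 deltas over three weeks, the carried-over list ends up holding 2 items and will keep growing while inflow outpaces capacity, which is exactly the problem with the growing list.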

I'm not really sure what the solution to this is yet, so if you have any suggestions I'd love to hear them. For now, we'll continue to tackle the most pressing or productive deltas and keep reviewing the previous ones.
