Agile Mythbusting

Yesterday I ran a session for another group at my current client, introducing them to agile. The first part of the session helped me understand what they had thought about, heard about, or already applied from agile, and during it I noticed a few comments (ones I also hear regularly from other people) that seemed worth sharing. I have found that when people observe, or even practice, some of the agile practices, it can be easy for them to draw the wrong conclusions. Here are a few of them and my thoughts on each:

Being agile means we don’t have to write documentation
I have never heard any agile coach or practitioner say this, nor encourage this way of thinking. The agile manifesto states the value “Working Software over Comprehensive Documentation”, the thinking being that people would rather have something delivered than be given a document telling them what they could have had instead. You will find people trying to avoid writing comments in code because most of what people write is redundant (it tells someone what the code does, when they can read the code right there and then – and if the code can’t be read, then maybe you have a bigger problem at hand). There are situations where comments are invaluable (library APIs given to external parties, or explaining why something is done in a way that may not be obvious to a reader because of some bug, library or performance reason). I have found that some general diagrams with a few descriptive phrases and a brief narrative are often much more useful than any 300 page document. Consider that an hour spent in a conversation about what someone wants can be more productive than an hour spent writing, and then another hour spent reading, a hastily put together document.
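As a rough sketch of the difference (the class and scenario here are invented for illustration), this is the kind of comment that tends to be redundant next to one that earns its keep by explaining *why*:

```java
import java.util.ArrayList;
import java.util.List;

public class CommentExamples {

    private final List<String> names = new ArrayList<String>();

    // Redundant comment: it only repeats what the code already says.
    // Add the name to the list of names.
    public void addName(String name) {
        names.add(name);
    }

    // Useful comment: it records *why* the code is not written the obvious way.
    // Upstream systems send names padded with trailing spaces (a quirk of their
    // extract process), so we trim here rather than in every caller.
    public void addImportedName(String rawName) {
        names.add(rawName.trim());
    }
}
```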

Being agile means we don’t do design
Avoiding Big Design Up Front (BDUF) is a popular phrase, but so easily abused. Not practicing BDUF does not mean not doing *any* design, and it is an even worse justification for a *poor* design – it is very much a judgement call. We have found that a good discussion about an approach (whiteboarding might be involved) can be useful, but in reality the implementation will end up quite different from the original design. It is sometimes better to just start, and refactor to patterns or a better design (otherwise referred to as emergent design), because you cannot anticipate everything. UML and class diagrams may help for *communicating* ideas, but trying to generate code from them is not a great idea, as the code tends to move faster than the diagrams can keep up with.
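To make “refactor to patterns” a little more concrete, here is a minimal, invented sketch: a conditional that grew with each new case, and the small Strategy-style interface that emerged once the duplication became obvious, rather than being designed up front:

```java
// Before: a growing conditional that every new notification type has to touch.
public class NotifierBefore {
    public void send(String channel, String message) {
        if ("email".equals(channel)) {
            System.out.println("Emailing: " + message);
        } else if ("sms".equals(channel)) {
            System.out.println("Texting: " + message);
        }
    }
}

// After: the duplication pointed us towards a simple Strategy-style interface,
// discovered through refactoring rather than drawn on a diagram in advance.
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

class SmsNotifier implements Notifier {
    public void send(String message) {
        System.out.println("Texting: " + message);
    }
}
```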

Agile doesn’t work with old projects
There is no reason why inheriting someone else’s code automatically means you cannot apply agile principles. You might find that some of the practices are more difficult to implement – such as talking to the people who originally wrote the code, or testing a system that was not built with testing in mind – but it shouldn’t stop you from trying. It does not mean that when you get new requirements you cannot involve the customer more, it does not mean you cannot write your tests first, and it certainly does not prevent you from releasing changes early and getting more feedback. If you find there are problems, the old project is unlikely to be the cause; it is more likely to be other processes, such as the release procedures or the way your operations team works. Blaming agile or the project is not the answer.
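As a rough illustration of writing a test first against inherited code, here is a sketch in JUnit 3 style: pin down what the legacy class currently does before you change it, then build the new requirement on that safety net. LegacyPriceCalculator and its numbers are invented stand-ins for whatever you have inherited:

```java
import junit.framework.TestCase;

// A test written *before* touching inherited code: capture the current
// behaviour first, even if it looks odd, so changes sit on a known baseline.
public class LegacyPriceCalculatorTest extends TestCase {

    public void testExistingDiscountBehaviourIsPreserved() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();
        // 10% off a 100.00 order is what the system does today.
        assertEquals(90.0, calculator.priceWithDiscount(100.0, 10), 0.001);
    }
}
```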

Agile is no different from other processes, we do testing in RUP (etc)
I’ve never heard any agile practitioner claim that other processes do no testing. Testing in agile is different because there are different levels of testing and there is an expectation that testing starts as early as possible. By focusing on testable, releasable, value-adding units, the team can get better feedback and deliver more value earlier, rather than in one big bang at the end (which may or may not happen).

Scary Conversations On Agile

A conversation I had with someone today:

Me: So how is your project going?
Manager: We’re doing 12 iterations, we’re up to the eighth and we’re just getting a QA environment to run tests in.
Me: Do you have any form of continuous integration?
Manager: Well, we are only interested in the files that change, so we don’t need to run it against the entire system
Me: (Warning bells starting to ring…) Do you have any tests?
Manager: Our developers have too much trouble writing unit tests because the system is made of all web pages
Me: (Okay, keep calm, just run with it for now) Have they tried looking at HttpUnit or JWebUnit?
Manager: No, not yet.
Me: (Ask it anyway – maybe they’ll surprise you) So what sort of tests do your developers have?
Manager: Oh well, since they don’t have a remote environment, they just run tests on their machines.
Me: Do you mean they run them manually?
Manager: Yes
Me: (Sigh and so forth…)
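For what it’s worth, “it’s all web pages” does not have to mean “only manual tests”. Here is a minimal sketch in the JWebUnit 1.x WebTestCase style (the URL, page title and form fields are invented for illustration):

```java
import net.sourceforge.jwebunit.WebTestCase;

// A small automated test that drives a web page over HTTP, so page-heavy
// systems can still get repeatable, developer-run tests.
public class LoginPageTest extends WebTestCase {

    protected void setUp() throws Exception {
        super.setUp();
        getTestContext().setBaseUrl("http://localhost:8080/myapp");
    }

    public void testLoginShowsWelcomeMessage() {
        beginAt("/login");                      // fetch the page like a browser would
        assertTitleEquals("Login");             // fail fast if the page is broken
        setFormElement("username", "testuser"); // fill in the form as a user would
        setFormElement("password", "secret");
        submit();
        assertTextPresent("Welcome, testuser");
    }
}
```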

The Retrospective Starfish

Diagrams are always useful focal points for starting discussions, and that’s one reason I like using the starfish diagram for a retrospective. This particular technique helps people reflect on varying degrees of things they want to bring up, without forcing everything into the black or white categories of ‘What Went Well’ or ‘Not So Well’, so I think it scales a little better.

[Starfish diagram: a circle divided into five segments – Keep Doing, Less Of, More Of, Stop Doing, Start Doing]

A little bit about each category:

  • Keep Doing – A good starting point: team members focus on the good things they liked about the project. You might encourage people to think in terms of what they would miss if they didn’t have a particular practice, technique, technology, person, role, etc. A good example from a real session I’ve been in: ‘Running performance benchmarking and tuning during an iteration helps to identify regressions or slowdowns so we can address them earlier’.
  • Less Of – Helps to focus on practices that might need a bit more refining, or that were simply not helpful in the current circumstances. Perhaps they add value, but not as much as other practices could. An example: perhaps stand ups have become status meetings, so there should be less talking to one person (and more talking to each other) during them.
  • More Of – Another focus that helps highlight practices, technologies, etc that team members might want to try more of and are not yet taking full advantage of. A good example: maybe people are pair programming, but knowledge transfer and a better understanding of the changing code might be gained by swapping programming partners more often.
  • Stop Doing – Obviously for things that are not helping development or not adding much value. Perhaps it’s writing that status reporting email at the end of the day (because a simple one minute conversation can substitute for it instead).
  • Start Doing – A great opportunity for team members to suggest new things to try, either because of things that did not go so well or simply to keep things dynamic and fun. Perhaps you might want to try a burn up chart on the whiteboard, or a new open source tool for improving developer productivity.

Interpreting the Starfish
Getting people to write things up under the starfish in this manner gives you a scattergram of sorts, and is a great visual technique for estimating the overall health of your project. Most of the points on the starfish also nudge people towards creating action items, instead of simply saying that something was not good.

Starting A New Collection

My kit of development tools never seems to stop expanding, thanks to the sheer multitude of technologies, techniques and amazing people to learn from. In contrast, the other kit I am trying to build more awareness of – the process toolkit – grows much more slowly, because fewer people are as impassioned about, or even aware of, what is useful to try. I am lucky to have worked with some great people, I subscribe to some good blogs, and I am willing to learn from other people’s experiences.

The first step in learning about anything (thanks Dave and Ade) is to Expose Your Ignorance, and in doing so I hope that others can benefit too. As I find something that I really like, I’m going to blog about the practices I think are effective and worth adding to a toolkit, starting with a series of Retrospective Exercises. I look forward to any comments, and I hope to add more to these as time passes and apply them to my future projects.

Evolving An Agile Architecture

Last week I heard that the project I just rolled off was a huge success, both for our UK office (in terms of significant wins and growth) and for the development team (in the freedom of both process and technology choices). When I started work on the project, I was working with Joe Walnes, and one of the project’s foundation principles was making technology and process decisions that helped to reduce the build time (and therefore the delivery time). I have been on, and heard about, many other projects that end up with several staged builds, with the average developer build taking around an hour. When I left, we had a sub-minute build (including end to end testing) and a deployable application with a dependency on only Java 5 and Ant (though, to be fair, it’s not a huge system yet).

I’m posting this entry in the hope that someone learns something they can apply in optimising their own build (or shares their own strategies) to make their projects even more agile. What follows is a list of some of the more significant decisions we made and, more importantly, why we made them.

Rearchitecting the Architect

I’ve had this entry in draft mode for a while, and since I am likely to have limited access to the Internet next week, it’s probably best that I get it out there. After all, I believe that all constructive feedback (positive or negative) is useful.

The traditional software architect role is an interesting one. Developers who choose not to go down the management line, for one reason or another, typically take up this role. What an “architect” does in a traditional software environment seems to be pretty well understood. Trawling some job advertisements, you see role descriptions like “translate business requirements into a framework”, “high level analysis and design”, and “technological evangelization”. My observation of real ‘architects’ at work is that they generally do spike-like work, develop a little (but usually not so little) framework here or there, or suggest implementation patterns in the form of a tutorial or cookbook.

As a mad keen developer who is into Test Driven Development (TDD), constant refactoring, and avoiding Big Design Up Front (BDUF), I question where this role fits into newer software development processes. I respect these people, because they have been around for much longer than I have. They probably have a wider repository of patterns to draw upon, and the wisdom of learning from many different projects. I question the role because I think its typical responsibilities fail in the kind of environment I like to work in.

Architects in the traditional sense typically fail because they work leagues ahead of the development team. Sometimes they run with BDUF; other times they do spike work that sets a pattern for a simple use case but fails to address the needs of many others. Since they rarely look back at the things they have done, architects rarely get any feedback, and they miss one of the most useful kinds: eating your own dog food. It sometimes gets to the point where the architect starts dictating architecture with diagrams instead of actual code, failing to see how untestable or inflexible it might be when actually implemented!

Here is my revised list of descriptions of the software architect role in a modern software development team:

  • The architect should be an embedded member of the development team – Just as it is important to have a customer representative and a tester during an iteration, the architect must be someone a team member can go to for direction on a technical feature when unsure. Although it is okay for architects to work on technical spikes in advance, they should also be looking at the implemented result under real circumstances.
  • Identifying, extracting and naming patterns out of the system – Refactoring code is excellent at a microscopic, developer level, as you can draw out a better, more maintainable piece of code. Refactoring by developers working on stories during an iteration tends to have too narrow a focus to influence a large, entrenched system. Excellent test coverage and well-written tests give the architect the ability to do this wider refactoring, rather than having to leave the code in the state that it is in.
  • System-wide Odor Purification – Bad smells reside in code all over the place for one reason or another. Identifying bad smells throughout a system, highlighting them, developing a strategy for getting rid of them, or better yet, actually removing them is important. Once again, developers do this on a microscopic level, but similar smells may co-exist in several parts of the system (see the sketch after this list).
  • Mediating technical discussion – Although BDUF is never useful, it is sometimes useful for a team to talk about different problem solving strategies (a general approach) to take. Reality will always step in and cause something to deviate from the ‘plan’, so the architect should support brainstorming activities and be able to step in when discussions become ineffective or start turning into BDUF.
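Here is the small, invented sketch referred to above: the same defaulting logic copied across two modules, and the named helper an architect (or anyone looking across the system) might extract once the system-wide duplication is spotted:

```java
// Before: the same defaulting logic copied into several modules.
class CustomerReport {
    String displayName(String name) {
        return (name == null || name.trim().length() == 0) ? "Unknown" : name;
    }
}

class InvoicePrinter {
    String customerLabel(String name) {
        return (name == null || name.trim().length() == 0) ? "Unknown" : name;
    }
}

// After: the duplication is named and pulled into one place the whole team can see.
final class Defaults {
    private Defaults() {
    }

    static String nameOrUnknown(String name) {
        return (name == null || name.trim().length() == 0) ? "Unknown" : name;
    }
}
```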

Think Distance, Not Speed

Somehow I always seem to end up coding the user stories that have the most demanding time constraints. In a way, I feel flattered that someone trusts my ability to deliver when a critical deadline must be met (and real deadlines are very rare in software, despite what everyone tells you), but it is always interesting to see how people react under that pressure.

Over the numerous occasions I have worked on these “time critical” stories, the question you are normally asked is, “So when do you think we’re going to be done?” or better yet, “Is it ready yet?” As a developer, I find it is better to preempt these questions by delivering feedback earlier than people expect. Typically this means walking through significant, visible progress with the stakeholder, or bringing visibility to issues that are hindering your ability to deliver (e.g. database environments not being available). Customers are typically trained to ask “Is it ready?” because they are given so little feedback. The customer should never be surprised by when something will be delivered, and it is easy to forget this.

Another question you tend to get asked is, “Can you get it done any faster?” Speed is essential to any business, but it is important to highlight what you sacrifice for speedy delivery. Translated into software terms, this may mean fewer regression tests providing automated feedback when future changes break features; duplicated code, leading to confusion, additional maintenance and even developer shame; or an undesirable path that meets requirements but leads to an unacceptably sluggish solution as load increases. When time is critical, the business should ideally be prioritising which things are more important, but usually it is left to developers (for better or worse).

Achieving your goals as fast as you possibly can is good, but keep in mind that developers are more like runners than they are computers and do get “tired” (for want of a better term). If you want to run a marathon, you certainly don’t run at the same speed as you would the 100m. Instead of asking how fast you can run, the question that should be first asked is how far do you want to go?