
The ThoughtWorks Anthology, Volume 2 Released

I’m pretty excited to announce the release of The ThoughtWorks Anthology, Volume 2, a varied collection of essays in the field of software development.


Alistair Jones and I contributed a chapter titled “Extreme Performance Testing”, which I have talked about in the past. In this chapter, we discuss how we apply decades of agile experience (particularly Extreme Programming) to the field of performance testing. We describe techniques and approaches for faster feedback that you can immediately apply to performance testing.

Why we moved off Heroku

We recently moved off Heroku to a more basic box because we ran into a couple of issues. Although I don’t think these are issues you will face in many other places, for us it proved to be worth moving.

First, it’s worth noting a bit of context for the application we were building. Although it is web-based, it is a prototype application connecting a pre-designed user interface to a number of backend systems. We do not own the backend systems; we simply integrate with them. Our goal is to quickly see how difficult the implementation of an interface such as the one envisaged would be, and also to discover gaps or contradictions in the data structures we get back by working through a real example.

Our First Problem
The first problem we had was a hard 30-second timeout enforced by Heroku’s routing layer. They document it very well here. As they mention in their documentation, “the timeout value is not configurable”, and it is a reasonable constraint on normal web applications that you want to scale. Given our current design and architecture, however, moving away from a synchronous, stateless application to an asynchronous, polling model would certainly add more complexity to our application code.
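To make that trade-off concrete, below is a minimal sketch of the asynchronous, polling model that the router timeout pushes you towards. This is not our actual code; all class and method names are hypothetical, and a real implementation would also need job expiry and error handling.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: the web tier hands slow backend calls to this
// service and returns a job id immediately, well inside the 30-second
// router limit. The client then polls until the result is ready.
public class PollingJobService {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // e.g. POST /jobs: start the slow backend call and return a job id.
    public String submit(Callable<String> slowBackendCall) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, executor.submit(slowBackendCall));
        return jobId;
    }

    // e.g. GET /jobs/{id}: null means "not done yet, poll again".
    public String poll(String jobId) throws Exception {
        Future<String> result = jobs.get(jobId);
        if (result == null || !result.isDone()) {
            return null;
        }
        return result.get();
    }
}
```

Even in this stripped-down form, you can see the extra state, endpoints and failure modes a polling model introduces compared with a plain synchronous request.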

Our Second Problem
The second problem we had was strange behaviour with a deployed Java application. When connected to other external systems, the Java application seemed to keep growing in its memory use. Restarts and fresh deploys didn’t seem to fix it. We tried the labs feature log-runtime-metrics to get more insight, but could only see memory go up and up. We tried setting the max heap size to a small amount, but that didn’t seem to do anything. Concerned we had a real memory leak, we ran the code locally with the same production settings and a very small heap size, and could not replicate it.
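For anyone wanting to reproduce this kind of investigation, here is a minimal sketch (assumed, not our actual code) of logging heap usage from inside the JVM using the standard Runtime API, run locally with a capped heap:

```java
// Minimal heap-usage logging via the standard JDK Runtime API.
// Run with a small, fixed heap (e.g. java -Xmx64m HeapLogger) to see
// whether used heap really climbs towards the cap, or whether the growth
// lies elsewhere (native memory, threads), which these numbers won't show.
public class HeapLogger {
    public static void logHeap(String label) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.printf("%s: heap used=%dMB max=%dMB%n", label, usedMb, maxMb);
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            logHeap("heartbeat");
            Thread.sleep(10_000); // log every ten seconds
        }
    }
}
```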

Our conclusion
Given where we were, we wanted to move fast without the infrastructure taking up more development time, and without unnecessary complexity in our application code. So we simply moved off. We spun up a new virtual box and were up and running in about half a day. It probably took another day and a half to move all of our continuous deployment tasks to our new platform too.

Book review: “This is service design thinking”

A couple of years ago, a very kind Product Manager gave me a book called “This is Service Design Thinking.” It was still shrink-wrapped from when they had received it on a training course. I finally got around to reading it this weekend. The book is beautifully made: hard cover, thick pages and even little coloured bookmark ribbons strewn throughout.

I consider myself lucky, having worked with many different user experience folk who have helped shape my understanding of Service Design, and this book added a few more tools to my toolkit and a nice way of framing them. When we write software, we already incorporate a lot of design thinking concepts – really trying to understand the “touch points” a customer has with an organisation and how software fits into these different needs. We don’t always get to work at a high level of an organisation – something I believe is necessary if you are truly going to help shape or influence an organisation’s service offering to customers. Software is only one part of the puzzle. However, it is becoming more and more relevant as software (or hosted software) becomes a major, or the only, channel for service delivery to customers.


We already make use of many of the tools described, but a few new ones to me included:

  • Service safaris – A nice name for the technique of visiting people to observe them interacting with an existing service.
  • Cultural probes – A “probe” in the scientific sense: basically a kit given to customers that allows them to take snapshots of their own lives in the context of a service, to build greater awareness of what’s important to them. These probes stay with candidates for a while, but a researcher may send texts or emails to prompt a different insight. This requires constant attention to the information being submitted back.
  • Expectation maps – Building a visualisation of what a customer expects when they interact with a service. Useful for comparing different expectations across different touchpoints, or offerings.
  • Desktop walkthrough – I haven’t seen this technique, probably because it seems to demand more preparation than the others. Basically, this is a small-scale 3D model of a service environment that people can interact with. I can see this being highly engaging.
  • Service Roleplay – A scenario where staff members are asked to enact several situations where they might come into contact with a customer. Video is often used to provide feedback and act as a basis for discussion.
  • Customer lifecycle maps – A holistic view of a customer’s relationship with a service provider. Their example maps out loyalty over time. I can see the map being annotated with events to trigger insight.

I really enjoyed the book. There are some nice case studies at the end. I did protest at the simplified description of “Agile software development”, but it’s a small detail in the larger scheme of things. My only gripe is that the beauty of the book comes at the price of it being significantly heavy to lug around.

Retrospectives as tools for change

Jutta Eckstein recently asked me to write a few sentences about my perception of “Retrospectives as tools for change.” After writing my response, I thought it would be worth publishing here.

I see retrospectives as an effective seed for starting change within a group of people whose ultimate outcome depends on each other. I find they can build a shared context in a place where each person holds different “parts of the puzzle” in their head. When executed well, retrospectives also establish opportunities for the group to progress the way they work, improve their environment and build better relationships between group members.

I do not see retrospectives as the only opportunity for change, nor do I believe they guarantee change. But I believe they certainly help, and I am definitely in support of that.

Showdown: Systems Thinking vs Root Cause Analysis

I gave a presentation in Recife about Systems Thinking and got a great question about where root cause analysis fits in, given that systems thinking describes emergent behaviour and suggests there may be no single cause of the system behaviour.

(Image: “Fight”, courtesy of tamboku under a Creative Commons licence)

Firstly, I like the quote from statistician George E. P. Box: “essentially, all models are wrong, but some are useful.”

What I like about root cause analysis is how it teaches you not to react to symptoms. It encourages you to look at the relationships between observations and move deeper. All of this is subjective interpretation and, like systems thinking, depends on how a person draws the relationships. From this perspective, the two are similar.

Many people describe the five whys as a root cause analysis technique, and it is one I draw upon often. However, I prefer the fishbone method because it encourages you to consider that there may be more than one cause for an effect you see.

When you take the results of root cause analysis and look for cyclic relationships, you may identify more effective leverage points, where breaking, accelerating or dampening a reinforcing loop with a small effort can have a significant impact on the system.

Having studied complexity theory, I find an interesting approach to these models is to never think of them in a mode of conflict. Instead, look at where each has value and try to apply it where you can realise that value. Never look at models as competing (OR-mode thinking); view them as complementary (AND-mode thinking).

Book Review: The Human Side of Agile

Most books in the agile space relate to the practices described in the various methodologies, if not to the methodologies themselves. Even though the Agile Manifesto values Individuals and interactions over Processes and Tools, our community seems to have missed a bit about how you go about building better interactions between individuals.


Fortunately a community member, Gil Broza, wrote a book called The Human Side of Agile: How to Help Your Team Deliver. Gil was kind enough to send me a copy a while back, but it was only on this trip to Agile Brazil that I managed to find the time to read it and reflect upon what I learned.

My first impression of the book is that it covers a solid range of topics. It addresses the role of leadership, strategies for getting to the ideal “self-empowered team”, and useful advice on practical topics such as communication, meeting facilitation and how to go about constantly improving. These are all topics that are often skipped or assumed to be easy, and they are also the topics many people ask about at conferences. Fortunately, Gil has been able to pack in a lot of practical advice, peppered with some great stories about the impact some of these ideas might have on a team.

The book is laid out as a series of questions, so I can imagine people finding it particularly useful in a “What do I do here?” situation. He covers topics some might avoid, such as how to deal with behaviour potentially destructive to a team’s interactions, as well as the fact that people in organisations change, with advice on how to handle it.

This book provides a much-needed guide for our industry, filling what was previously a big gap. The writing is clear, easy to digest and quite approachable. Definitely one to add to any essential agile reading list.

XP2013 Open Space: What is an Effective Tech Lead

Open Space is a usual tradition at XP2013 (and the XP20xx series of conferences). I proposed a session on what makes an Effective Tech Lead because we had a great discussion last year, and it was a great opportunity to explore the space with other people either trying to understand the role more or simply interested in the discussion.

We tried to capture the results on the board below as much as we could.

Implementing Continuous Delivery Workshop Prework

At XP2013, Christian Trabold and I will be giving a tutorial on Implementing Continuous Delivery. To prepare, we ask you to bring a computer and do the following before the workshop:

  1. Install VirtualBox (https://www.virtualbox.org/wiki/Downloads)
  2. Install Vagrant, minimum version v1.2.1 (http://downloads.vagrantup.com/tags/v1.2.1)
  3. Ensure that you have at least 2.5GB of free space on your hard disk
  4. Ensure that your USB port is working so that we can provide the image via a portable hard drive/stick.
  5. Make sure nothing is using local ports 8080 and 8153 (we need these ports for accessing the apps on the virtual machine; see the sketch after this list)
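For reference, here is a minimal sketch of the kind of Vagrantfile the workshop relies on, showing why those two ports must be free (illustrative only – the real box and configuration come on the portable drive, and the box name here is hypothetical):

```ruby
# Illustrative Vagrantfile sketch; "workshop-box" is a placeholder name.
Vagrant.configure("2") do |config|
  config.vm.box = "workshop-box"
  # These forwarded ports are why 8080 and 8153 must be free locally.
  config.vm.network :forwarded_port, guest: 8080, host: 8080
  config.vm.network :forwarded_port, guest: 8153, host: 8153
end
```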

Being agile instead of doing agile

My last delivery gig was a great reminder to keep the principles and values of the Agile Manifesto in mind. The way we worked was unlike any agile method you would probably recognise.

We were building a prototype system to demonstrate the capabilities of a technical standard and the data that it contained. Our main stakeholders, both relatively non-technical, knew the type of audience the prototype was intended to influence, and our remit was to make a technical standard come to life with an interactive prototype. We had free rein over how we worked and the technology we used, but our approach consistently remained true to the agile values and principles – focused on value through the demonstration of working software in a collaborative and adaptive manner.

When I think about how we worked at the end of the project, it looked nothing like how we worked at the start.

Communication was key

Our small team was highly distributed: our two stakeholders were located in one part of Switzerland, two of our team members were (mostly) working in Germany, one team member moved between London, Germany and the client, and I was, during one week, in London, Manchester and then Germany. We knew communication was going to be tough. Our first step was ensuring we kicked off the work as a team together in the same place. We spent the first week working with a large variety of industry stakeholders, understanding their concerns and the issues the new standard was going to address, and trying to get into the heads of our own stakeholders about what they were trying to do. This time turned out to be essential in guiding our future work and helped us connect the “what” to the “why”.

With our high level of distribution, we knew that we would need multiple channels of communication. We started off with daily stand-ups with our stakeholders using video conferencing software, which at some point turned into daily showcases to help us plan further work. When one of our team members went remote, we even added daily afternoon stand-ups to ensure that we stayed in sync with any major shifts during the day.

We experimented with many forms of chat software early on. We tried Google Hangouts (but it seemed to turn one of our machines into a furnace), we tried group Skype, we tried Google Chat, and we ended up settling on Chatzy as a way of having a persistent chat. We evolved a protocol of holding conversations in the chat room, so that when our remote member came back from meetings with stakeholders, they could easily catch up and respond.

We experimented (probably halfway into the project) with leaving our video conferencing software (GoToMeeting) on when one of our members was remote, so that it felt like they were in the room and they could see us in the room. A spare laptop (our build server), propped up on a simple crate so they could see us, worked well.

We gave up using a physical wall in favour of an online card wall (Trello) as a mechanism for sharing work. Being able to concurrently edit and update it turned out to be a great way of capturing and sharing notes.

Daily planning and story-writing sessions

Our “stories” were small. Our technical team of two developers and a user experience/front-end designer could churn through a lot in this context (moving between 20 and 40 stories per week), and we worked hard to ensure that we identified and split stories based on new needs and feedback from our stakeholders.

When we first started, we ran a story-writing session to tackle our initial user journeys. I remember being remote for that first session, and we were still able to “sit down” as a group and brainstorm. We set aside fifteen minutes and, split across two sites (me remote, the rest of the team in one place), we all got index cards out and brainstormed potential user stories. I used the built-in video camera in my laptop to share my index cards and they did the reverse, as we talked through each one via video conferencing software. Further into the prototype, we moved to a daily planning session where we captured ideas directly onto our online card wall.

Continuous Deployment

We still had a small build pipeline into production, with a handful of unit tests to sanity-check our final product, that would automatically deploy the latest build into our “production” environment. I’m not sure of the exact number, but I would guess that we deployed into production at least twenty times a day (as a team), and our stakeholders were surprised to see the system evolve even as they were interacting with it.

Even during some showcases, we were able to fix small issues (such as wording, placement) and have it appear a few minutes later.

Automated unit tests for quick feedback and design

Given it was a prototype, we were clear we didn’t need as much automated testing for a system that would have a very short shelf life. For us, though, testing goes beyond validating the behaviour you expect; it is a mechanism to drive design. If something is hard to test, there is probably something about the design that needs to change. Our tests provided two things: fast feedback about when to address code smells, and validation of the simulated data model that needed to represent “realistic data.”

Given the domain our prototype worked in, our clients needed a balance: they didn’t want any real examples, but they wanted “realistic” samples. This meant we modelled our customer’s domain in a way that let us generate new datasets easily. As an example, when we needed to extrapolate a “points-based system” from the constraints someone searched by, we only needed to touch one class to generate data that end users of the system judged to be quite realistic. Prices had a relationship between the brand of the product and the class of service, and this was just as easily modelled. Our unit tests allowed us to evolve characteristics of our model before plugging in new inputs to generate different outputs.
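As a flavour of what this looked like, here is a minimal sketch with hypothetical names (our real model was richer and domain-specific) of a seeded generator where price derives from the relationship between brand and class of service, so a single class controls the shape of every generated dataset:

```java
import java.math.BigDecimal;
import java.util.Random;

// Hypothetical sketch of a "realistic but not real" data generator:
// prices derive from brand and class of service, with a little jitter
// so no two generated datasets look identical.
public class RealisticPriceGenerator {
    public enum Brand {
        BUDGET(40), STANDARD(70), PREMIUM(120);
        final double basePrice;
        Brand(double basePrice) { this.basePrice = basePrice; }
    }

    public enum ServiceClass {
        ECONOMY(1.0), BUSINESS(2.5), FIRST(4.0);
        final double multiplier;
        ServiceClass(double multiplier) { this.multiplier = multiplier; }
    }

    private final Random random;

    public RealisticPriceGenerator(long seed) {
        this.random = new Random(seed); // seeded, so unit tests are repeatable
    }

    public BigDecimal priceFor(Brand brand, ServiceClass serviceClass) {
        double base = brand.basePrice * serviceClass.multiplier;
        double jitter = 1.0 + (random.nextDouble() - 0.5) * 0.1; // +/- 5%
        return BigDecimal.valueOf(Math.round(base * jitter));
    }
}
```

Seeding the generator keeps unit tests repeatable while still producing varied, realistic-looking samples, and tests over a method like priceFor are where relationships such as “business class always costs more than economy” get pinned down.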

Poly-skilling, or anyone could fulfil any responsibility

We all took part in story-writing. We started off with our analyst taking care of tracking progress, but we quickly moved that responsibility to a developer so that the analyst could make better use of their time testing the system with end users and distilling the feedback from a large number of sources.

Our user experience designer was fantastically competent as a front-end developer, and even let us lowly developers have a go at reworking some icons that needed adapting based on user feedback, because his time was better spent looking at the overall flow.

We took time out as a group to take part in exploratory testing, trying to find weird user nuances and edge-case bugs, compiling them into a Google spreadsheet before working with our stakeholders to feed them back into the work.

Everyone on the team had a go at facilitating retrospectives and organising the feedback sessions that gave us a steer on which direction to move in next.

As a team, we made sure that everything that needed to happen happened without being precious about who did it.

Continuous reflection and continual improvement

Although we held only a small handful of retrospectives, they weren’t our only opportunity to improve. If someone wanted to try something, we gave it a go for a day and then worked out whether we wanted to continue doing it. Some things died off when someone felt we weren’t getting value from them. Other things just continued because they seemed to work for us.

Developing social connections in the team

One activity we used to kick off our team was being explicit about some of our natural preferences – such as what sort of chronotype each of us was.

I think one thing that worked really well for us was having lunch together when we were in the same location. Mostly this was in Germany, which has a culture of sit-down lunches rather than the grab-and-go culture of the UK. This helped us bond as a team: we grew to know each other better and got a sense of each other’s working preferences.

What is Failure Demand in Software Development

The idea of “Failure Demand” comes from systems thinker John Seddon, who describes it as “unnecessary burden on the system.” By removing failure demand from a system, you free up more capacity to focus on value-added work. Much of failure demand also maps to the lean concept of “waste”, although not all “waste” is the same as failure demand.

Some classic examples (and tell-tale signs) I see with companies include:

  • Poor-quality work – Features that are not well tested or well designed end up generating bugs. A smell to look for is lots of issues reported by end users. Lots of errors in production logs are another great smell for detecting this.
  • Features designed without thinking about User Experience – Without keeping the end user of a system in mind, many organisations build functionality without exploring how or why an end user will actually use it. Working with an effective user experience capability means simpler, clearer interfaces that help end users get the job done. Smells to look out for include interfaces with too many additions or bolted-on features.
  • Requirements solely driven by a Product Manager – Many organisations rely solely on the HiPPO (the Highest Paid Person’s Opinion) to drive requirements. Although the Product Manager role is still useful for other reasons, faster experimentation and collecting data to test hypotheses are more valuable. Look out for smells like long release cycles, date-driven requirements, or large backlogs of detailed “user requirements” specified by the Product Manager without real involvement or feedback from end users.
  • Misunderstandings – As a software organisation grows, the number of communication channels increases significantly. When people do not validate their understanding with each other, they end up doing more rework than necessary. Depending on how complex the problem space is, visual models, workshops that explore a particular approach, and simply showing progress constantly (on a daily or weekly basis) all help to resolve this.

What other examples of failure demand do you see? Please leave a comment.
