patkua@work

The intersection of technology and leadership

Category: Analysis

Number of commits in git

I have been working on a project that required me to work out the number of commits in a git repository. I thought GitHub would show this in its display, but I could find no such statistic once a repository grows beyond a certain size. The only number I could find simply indicated it was over 1,000.

Git commit count on github

Searching around turned up a few solutions, most recently:

git rev-list HEAD --count

or

git shortlog | grep -E '^[ ]+\w+' | wc -l
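The first form is the simpler of the two, and it has a few useful variants. A quick sketch (assuming you run these inside a git repository):

```shell
# Count commits reachable from the current branch
git rev-list --count HEAD

# Count commits across all branches, not just the current one
git rev-list --count --all
```

The `shortlog` pipeline gets to roughly the same number by counting the indented commit summary lines, but `rev-list --count` asks git directly and avoids depending on the output layout.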

Data on Estimation vs Number of Stories

Last year, I worked on an inception trying to work out how long a three-way product merge and rebuild would take. The business wanted to know what they could have by the start of summer this year.

During this intense four week inception we identified a huge number of stories – way more than I had ever identified in previous inceptions. Almost 500 stories by the end. I can’t recommend anyone go through this experience, though we had drivers that meant we couldn’t avoid it this time.

My previous experience and gut feel tells me 100-ish stories (when I’m working with people to break it down) is probably enough work for a small dev team (3 dev pairs) for about 3 months. This was definitely a whopping year-long programme of work (if done right).

We also had a lot of pressure to estimate them all. Up front. Obviously, attempting to estimate a year’s worth of work accurately is going to be pretty inaccurate. The longer the piece of work, the more assumptions will change, the more estimates made on those assumptions will be wrong. I know. However people still wanted numbers to understand how large this programme of work would be.

Some statistics
We ran incremental estimation sessions using relative story sizing, following Fibonacci planning poker and estimating in points. Our maximum point size was 8 points. In practice, 5 was generally the highest, though only about 1 in 30 cards was that size.

We even iterated over a few estimates at random intervals to see if our relative sizing of stories changed significantly.

Interestingly enough, we saved spreadsheets at various times during our estimation, and I’ve pulled out some statistics from them, laid out in the table below:

Spreadsheet Version | # Stories Identified | # Stories Estimated | Total Estimate in Points | Average Points / Story
0.22 | 135 | 129 | 340 | 2.63
0.26 | 529 | 395 | 1037 | 2.62
0.30 | 494 | 488 | 1346 | 2.75

What can we learn from this?
Firstly, one can see that the average story size isn’t significantly different over this large spread of stories. One could argue that given the dataset, it could be enough to extrapolate further estimates.
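As a rough sketch of that extrapolation, using the v0.26 snapshot from the table above: if the average holds, the stories not yet estimated can simply be priced at the running average.

```shell
# Project the total backlog size from the stable average points-per-story
# (figures from the v0.26 spreadsheet snapshot above)
awk 'BEGIN {
  identified = 529          # stories identified so far
  estimated  = 395          # stories actually estimated
  points     = 1037         # total points across estimated stories
  avg   = points / estimated    # running average, ~2.63 points/story
  total = identified * avg      # extrapolated size of the whole backlog
  printf "projected total: %d points\n", total
}'
```

On these numbers the projection comes out just under 1,400 points, in the same ballpark as the 1,346 points eventually estimated in the v0.30 snapshot.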

The next thing to consider is why the numbers tend to average out. One could argue that the story breakdown process for this project leads to stories of roughly the same size, though it would be dangerous to assume all projects have a similar story breakdown process.

Alternatively, one could argue that the estimation process helped us break down stories to be approximately the same size. Nevertheless, it’s an interesting observation and one I’ll continue to explore.

Putting the “user” into user stories

I’m helping to kick start a project where we’re identifying enough user stories to help build a solution towards a tight deadline for legal compliance. We’ve been making a great effort at identifying and talking through the users of the system, particularly exploring how different users have different needs. We’ve managed to talk to existing users of a similar system, and to people who will manage the new users. The hard bit is that our real system users don’t even exist yet.

My work colleague, Alex McNeil, will be proud that I’m flying the UX flag, and we’ve finally got a UX person onboard, which I’m delighted with. I think it’s worth fighting for guerrilla UX whenever and however you can get it. In this case, more is always better for end users. I want to be proud that users actually use all the features we build and enjoy using them at the same time. Without properly understanding who they are and what they are trying to accomplish, I’m confident we would have ended up with a naff system.

Also, unlike some developers who want to jump in and start coding, getting into the mindset of the users you’re targeting is a key part of the proper use of user stories. Focusing first on understanding what problem your users are trying to solve is the key to building what is actually needed. I’m not the only one in the agile community to feel this way, with many people inverting the classic and now dated “As a … I want … so that …” story format made popular in User Stories Applied.

We’ve made plenty of headway in the right direction, and although I know we could take the UX much further, we’re constrained by client expectations and conscious of a pending legal deadline. Don’t worry though. We’ll be trying to live out another agile principle of getting fast feedback, even if it’s done using other guerrilla user experience methods.

I care about emphasising the user in user stories because I don’t want to be responsible for one of those systems people loathe to use. I hope you will care about it too.

What they don’t tell you about user stories

Agile methods have a gap in how to deal with requirements that often leaves business analysts confused about what to do. From an analyst’s point of view, user stories seem to be the only technique agile methods prescribe. When people are new to agile, they take this as agile’s “only” way of modelling requirements. What isn’t made explicit often gets forgotten, so I want to make this clear: Agile methods don’t discourage thorough analysis or modelling. They don’t discourage the use of diagrams and other visualisations that help people better understand and refine their domain representation. They want everyone to focus on value and communication by using the right tools for the job.

When coaching analysts, I often find myself telling them that all the tools they draw upon to question, to challenge, to refine and to capture their understanding are important. Agile methods focus on better understanding of value, not better note taking. Writing things down does not automatically translate into understanding between two people. Cockburn has already demonstrated a model that ranks the richness of communication methods, with written documentation being one of the worst.

The best business analysts I’ve worked with have little need to write things down, having absorbed what needs to be done, understood the real requirements, and made themselves constantly available to help clarify them. Conversely, the worst business analysts see their job only as writing “requests” and “demands” down, doing little to question and challenge what is really needed versus what is merely articulated. These I call overpaid scribes.

Question what value your tools are adding and whether they help you either better understand or better communicate with other people. If you find modelling in UML helps you better see relationships, do so, but avoid spending all your time polishing the model and getting the syntax correct. Be wary of investing yourself too much in one particular model or document, as it makes you potentially more resistant to changing it. Neal Ford describes this as Irrational Artefact Attachment. If you find yourself writing something down to remind yourself of what is important to get across, do so, but don’t use it as a way to avoid helping others understand it.

Agile methods appear nebulous to many people because “user stories” simply appear. They’re not prescriptive about how you get there because there are simply too many different ways to get there. Whilst being prescriptive would help some projects, it would no doubt hurt many others.

Image of Shh! taken from Anthony Gattine’s flickr stream under the Creative Commons Licence

Guide to using Design Comics

I’ve been using the Design Comics site recently for putting together some better presentations. Here’s a system that worked well for me:

  1. Think about what you want to do first
  2. Write the script; then
  3. Animate it via Design Comics

Systems Constrain Thought

One of the most interesting observations Ajit and I made when we finished our inception a while back was that defining a system too early puts constraints around the way you work and potentially hinders learning.

We set out putting together a mental model of what we thought the system should be and what the scope of the project entailed. We did lots of brainstorming, gathered tons of input, asked lots of questions and inevitably, had fair amounts of discussion as we tried to understand it from different points of view. We tried using some software based systems like a spreadsheet, or some modelling software to capture the information we had, and eventually got too frustrated as we struggled to deal with both what we needed to model and how we were going to model it.

We found it’s much easier to work out how to model things once we understood what sort of things we needed to model. That didn’t mean we didn’t try modelling at all. Rather, we used cheap techniques so we could quickly change the way we represented things. In the end, we used colour coded papers, index cards and broad categories on flip charts to represent different types of information, allowing us to group, categorise and reclassify bits of information quickly and easily.

Our system let us deal with larger concepts when we needed to, with the ability to drill down into enough detail to have better conversations with people closer to the project. We ended up distributing the information we modelled into more common formats – a simple spreadsheet for stories, another for a risk log, as well as some high level diagrams representing the system.

It felt much more satisfying to uncover the natural groupings of information instead of trying to cram information into the system we happened to pick.

What Analysis Skills Are Useful

I’ve worked with some analysts who think their job is to scribe, writing down what users say into documentation. Sure, an analyst’s job might require some scribing, though it is rarely the entire job. Here’s a list of things I’ve observed excellent analysts focus on.

  • Focus on differences, look for patterns, and highlight these differences to the stakeholders. Determine where these differences originate. Is there a real reason, or is it an inconsistency because the domain vocabulary is a little too loose? Are the differences driven by something that adds value, or do they come from coincidence?
  • Involve more than just the stakeholders. Talk to developers or operations people and involve them in meetings. Understand the potential costs and brainstorm options with them to weigh up each option’s costs and benefits. Different points of view (à la Wisdom of Crowds) lead to higher quality outcomes.
  • Go beyond what people say they want. Don’t follow the “customer is always right” saying blindly. They may know what they want deep down; they just may not be able to express it. Also be sure that you’re talking to the right customer. Different groups of end users have interests that are separate from those of stakeholders. A solution needs to balance all of their needs. Use scenarios and personas to draw these out.
  • Clarify the vocabulary. Look for synonyms. Encourage people to use the same word all the time as much as possible. Use clarifying techniques, “Oh, do you mean X?” or contrasting techniques, “So you don’t mean X, you mean Y”, or “Do you mean X or Y or Z”.
  • Drive for as much consistency as possible. Drive it through everything where possible beyond just vocabulary. Think about how features complement each other, and how the behaviours work.
  • Customer time is important, so prepare well for meetings. Ensure people know about the agenda, the questions (specific, directed or open) and the priorities.

How much detail do you put into a story card?

Most of the teams I work with prefer using story cards to capture and manage requirements. For analysts who write story cards, it’s important to remember that they are a “placeholder for a conversation”, and it’s useful to know about the three C’s and understand the INVEST principles that make for more ideal story cards. In this post I’m going to answer the frequently asked question: “How much detail should go into a story card?” Of course, please remember that this is just my answer to the question and unlikely to be a definitive one.

Before I begin to answer, I guess it’s important to understand why we even bother with story cards, something definitely worth a whole other post (and I’m sure there’s plenty of great ones out there). I’m going to distil this to just the essentials and skip the why (you can read it in other people’s posts). For the purpose of this post, I’m going to assume that story cards:

  • Exist to help people have conversations about a small set of requirements
  • Have an associated cost (an estimate)
  • Offer some business value to people such as stakeholders or end users
  • Help stakeholders prioritise requirements
  • At some point may be implemented

The Short Answer
You want enough detail to meet the objectives above, balanced against the cost of capturing and maintaining that detail, in order to support as much change as possible.

The Long Answer

I like to draw the diagram below to visualise how much effort I’d put into collecting detail.

For stories that need to be implemented now, you want enough precision to allow developers and testers to be clear about what needs to be achieved. The waste of not having enough detail here is essentially rework in many of the downstream activities. If critical detail in a story is missing, it will lead to misunderstandings that, in turn, lead to bugs, which then lead to additional coding and testing time, delaying delivery of the business value. What is important at this stage is that everyone involved (the business, analysts, developers, testers, etc) shares the same detailed understanding of exactly what is being delivered and what is exactly not being delivered.

Story Detail

For stories that need to be implemented in the distant future, you don’t need the same level of detail. The waste of capturing too much detail too early is essentially rework at the analysis level. Depending on how requirements are managed, this can be costly. Conditions change, and that may change requirements not yet in production; I’ve seen many analysts who’ve written down so much detail and invested so much of their own time that they refuse to deal with the change. They want to avoid the change because they would now have to rewrite reams and reams of documentation, or drop the last month or two of work to develop a different and better solution. What you need for stories in the distant future is enough detail to allow people to have that conversation about the same thing without confusion, whilst minimising the cost of capturing detail unless it impacts things like estimates, priorities or business value.

The Summary
Although it sounds like I’m saying don’t worry about future requirements, my emphasis is all about balance. You want to balance the costs associated with collecting and maintaining details for requirements that may or may not end up being implemented. Precision has a cost associated with it, and this cost always needs to be weighed up against its value. Excessive precision too early may increase cost via additional rework at the analysis level. Lack of precision too late may increase cost via additional rework in downstream activities such as development and testing.

Further References
James Shore writes more about the life cycle of a story and when you might want to start creating them. Read about it here.

© 2017 patkua@work
