
Disrupt yourself, or someone else will

A couple of days ago, Apple announced their new range of iPads including a mini with a retina display and a lighter, thinner full-sized iPad. Notice anything strange about their prices?

[Image: iPad 2 pricing]

As you can see, the new retina mini is priced at exactly the same point as the (old) iPad 2. Seems strange? You’re not the only one to think so, as outlined by a comment from this AppleInsider article.

This isn’t the first time Apple have priced a new product at the same price as an older one (and it probably won’t be the last).

The Innovator’s Dilemma
In the book, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Clayton Christensen shows how organisations fail to embrace new technologies because they focus on the success of a single product or technology.

By offering both lines (new and old), Apple effectively segments its market: early adopters of technology take the new products, while the (older and probably less appealing) options serve everyone else. A conflict of features (a retina display in a smaller form factor) makes users decide what is more important to them.

By eventually removing the older iPad 2, they effectively force consumers to move to the new platform, even though some of their customers may be happy with the existing product. Apple is effectively disrupting itself before its competition can.

Using our cognitive biases

Having the same price point for a newer technology also taps into the anchoring cognitive bias (sometimes called the relativity trap).

Without the older product in the lineup, the newer products would not appear so appealing. The “anchor” of the older product is effectively pushing people to the newer products.

A person would ask:

Why would I buy older technology for the same price?

and then either make the trade-off for the smaller size or, if they want the larger size, pay a further premium for it.

Book Review: Rethinking the Future

I recently finished the book “Rethinking the Future” and I have to say how impressed I was by it. It is structured as a collection of essays from well-known leaders and authors in different fields. I knew many, but not all, of the contributors and, as a result, the book offers a wide variety of perspectives: some complement, and others contrast with, each author’s very opinionated view of the “future.” Bearing in mind this edition was published in 1998, I find it interesting how relevant many of the writings still are today.

[Image: Rethinking the Future]

Definitely positioned as a business book, the contents are divided into chapters that try to envisage the future from many different angles, including the way that businesses work, competition, control and complexity, leadership, markets and the world view. The book resonates very strongly with some recently published works, such as truly understanding what motivates people (i.e. Dan Pink’s Drive), or the need for management to balance ever more states of paradox (e.g. Jim Highsmith’s Adaptive Leadership).

I don’t necessarily agree with all of the contributions in the book, particularly the idea of focusing on a single thing as described in the chapter, “Focused in a Fuzzy World.” I agree some focus is important, but I also believe that, in order to innovate, you sometimes have to unfocus. I see this as the problem often described by the Innovator’s Dilemma.

Showdown: Systems Thinking vs Root Cause Analysis

I gave a presentation in Recife about Systems Thinking and received a great question: where does root cause analysis fit in, given that systems thinking describes emergent behaviour and suggests there may be no single cause for a system’s behaviour?

[Image: “Fight”, courtesy of tamboku under the Creative Commons licence]

Firstly, I like the quote from statistician George E. P. Box: “Essentially, all models are wrong, but some are useful.”

What I like about root cause analysis is how it teaches you not to react to symptoms. It encourages you to look at the relationships between observations and move deeper. All of this is subjective interpretation and, like systems thinking, depends on how a person draws the relationships. From this perspective, the two are similar.

Many people describe the five whys technique; the one I draw upon more often, though, is the fishbone method of root cause analysis, because it encourages you to consider that there may be more than one cause for an effect you see.

When you take the results of root cause analysis and look for cyclic relationships, you might identify more effective leverage points, where breaking, accelerating or dampening a reinforcing loop with a small effort can have a significant impact on the system.
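
To make that concrete, here is a minimal sketch in Python of hunting for cyclic relationships in the output of a root cause analysis session. The cause-effect map below is purely illustrative, not from a real analysis; any cycle it reports is a candidate reinforcing loop, and therefore a candidate leverage point:

```python
# Minimal sketch: find cyclic relationships in a cause-effect map.
# The map below is illustrative; keys are causes, values their effects.
cause_effect = {
    "pressure to deliver": ["shortcuts in testing"],
    "shortcuts in testing": ["defects in production"],
    "defects in production": ["unplanned rework"],
    "unplanned rework": ["pressure to deliver"],  # closes a reinforcing loop
    "unclear requirements": ["defects in production"],
}

def find_loops(graph):
    """Depth-first walk that reports each distinct cycle once."""
    seen = set()
    loops = []

    def visit(node, path):
        if node in path:
            cycle = path[path.index(node):]
            i = cycle.index(min(cycle))  # canonical rotation for dedup
            key = tuple(cycle[i:] + cycle[:i])
            if key not in seen:
                seen.add(key)
                loops.append(cycle + [node])
            return
        for effect in graph.get(node, ()):
            visit(effect, path + [node])

    for start in graph:
        visit(start, [])
    return loops

for loop in find_loops(cause_effect):
    print(" -> ".join(loop))
```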

Having studied complexity theory, I find an interesting approach is to never think about these models in a mode of conflict. Instead, look at where each has value and apply them where you can realise that value. Never look at models as competing (OR-mode thinking); view them as complementary (AND-mode thinking).

Systems Diagramming Tools

Just a quick reminder to myself about a number of tools available to people interested in Systems Thinking:

  • Flying Logic (Commercial) – My favourite so far, with nice looks and an emphasis on building the diagram collaboratively instead of simply focusing on output. It automatically adjusts the layout when adding nodes to minimise line cross-overs, though the constant re-layout can be a bit nauseating sometimes.
  • Graphviz (Free) – Simple looping diagrams that are easy to automate (see the sketch after this list). Not as good a representation for causal loops if you want to distinguish amplifying/dampening cycles, but good bang for buck.
  • Omnigraffle (Commercial) – A diagramming tool that makes very snazzy diagrams. Less powerful on automatic layout than Flying Logic; mostly manual.
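
On the Graphviz point, here’s a minimal sketch of the kind of automation I mean: a few lines of Python that emit DOT for a causal loop diagram, using edge labels to distinguish amplifying (“+”) from dampening (“-”) links. The loop itself is just an illustrative example; render the output with the standard dot command:

```python
# Minimal sketch: emit a causal loop diagram as Graphviz DOT.
# Each edge is (cause, effect, polarity): "+" amplifying, "-" dampening.
edges = [
    ("work in progress", "delivery pressure", "+"),
    ("delivery pressure", "shortcuts", "+"),
    ("shortcuts", "defects", "+"),
    ("defects", "work in progress", "+"),
    ("code review", "defects", "-"),
]

lines = ["digraph causal_loop {", "  rankdir=LR;"]
for cause, effect, polarity in edges:
    lines.append(f'  "{cause}" -> "{effect}" [label="{polarity}"];')
lines.append("}")

with open("causal_loop.dot", "w") as out:
    out.write("\n".join(lines))

# Render with: dot -Tpng causal_loop.dot -o causal_loop.png
```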

Summary of XP2011

The first full day of XP2011 had a pretty full schedule, as I had to prepare for two lightning talks on different subjects. Fortunately both topics were very close to my heart: one about learning using the Dreyfus Model (slides) and the other about Systems Thinking (slides). The second day started off with a great breakfast selection at the conference hotel before kicking into the keynote by Esther Derby. Clearly jetlagged, Derby used a set of hand-drawn slides to explain her topic, “No Silver Bullets”.

Her presentation style was very conversational, and I can’t say the crowd responded very well to it. Perhaps it was their jetlag as well, or the way the room had been set up. Nevertheless, through many of her stories, I still saw many heads nodding and a really great response on Twitter to the things she was saying.

I’ve followed Derby’s writing for years and can only wish more people were exposed to it. As a result, I found many of the topics and opinions I find interesting reinforced, such as how failing to address the management layer inevitably means agile adoption hits a hard ceiling, or the oscillating behaviour that results when managers attempt to react to a system with long delays in its feedback cycle. I appreciated the very vivid term “Bang! Bang!” management, describing managers who seem to have only two distinct and opposing reactions to a system, unable to moderate their use and wait for the system to find a new equilibrium. If you imagine these two opposing reactions as the result of a huge iron lever being flipped, hopefully you can imagine where the noise comes from.

Derby covered lots of different areas, quoting people like Donella H. Meadows (“The original purpose of hierarchies was to serve the subsystems, not the other way around”) and the work George Lakoff does on word association and the metaphors in our everyday use. Raising self-awareness of your own in-built biases and metaphors was another key thing she emphasised, focusing on the judgements, habits, feelings, thoughts, mental models, beliefs, rules and values we tend to be intrinsically governed by. I particularly liked the phrase she uses to help people uncover their own and others’ mental models: “In what world would this make sense?”

She told one great story about the dangers of measurements as targets, using the example of the manager who decided to “grade developer estimates”: A’s to those who estimated on time, B’s to those who estimated over time, and C’s to those who estimated under time. Of course, you can imagine what magically happened as people’s grades mysteriously improved.

She also reminded me of the work of Ackoff, whom I need to revisit, and the great material he has written about Systems Thinking. I have only ever been able to refer to The Fifth Discipline as a Systems Thinking book, so I really need to read his other ones to see whether they would be of use, or are more accessible.

The rest of the day was a bit of a blur. A couple of highlights included seeing Marcus Ahnve take the work Luca Grulla and Brian Blignaut did with TDDing JavaScript to the next level and do a demo of BDD.

David J. Anderson also reminded me of the importance of thinking in the language executives speak in order to better get our message across. He reminded me of all the great things Ross Pettit has to say, although I think Anderson’s analysis of accounting for software development costs doesn’t quite match some of the data I’ve heard from Pettit.

There was so much more to the conference: as always, great conversations emerged, and the wonderful atmosphere of the hotel added to the uniqueness of this event.

Data on Estimation vs Number of Stories

Last year, I worked on an inception trying to work out how big a three-way product merge and rebuild would be. The business wanted to know what they could have by the start of summer this year.

During this intense four-week inception we identified a huge number of stories – way more than I had ever identified in previous inceptions, almost 500 by the end. I can’t recommend anyone go through this experience, though we had drivers which meant we couldn’t avoid it this time.

My previous experience and gut feel tell me that 100-ish stories (when I’m working with people to break the work down) is probably enough work for a small dev team (3 dev pairs) for about 3 months. This was definitely a whopping year-long programme of work (if done right).

We also had a lot of pressure to estimate them all. Up front. Obviously, attempting to estimate a year’s worth of work is going to be pretty inaccurate: the longer the piece of work, the more the assumptions will change, and the more the estimates made on those assumptions will be wrong. I know. However, people still wanted numbers to understand how large this programme of work would be.

Some statistics
We ran incremental estimation sessions using relative story sizing, playing Fibonacci planning poker and estimating in points. Our maximum size was 8 points; 5 was generally the highest used, though only about 1 in 30 cards was that size.

We even iterated over a few estimates at random intervals to see if our relative sizing of stories changed significantly.

Interestingly enough, we saved spreadsheets at various times during our estimation, and I’ve pulled some statistics out of them, laid out in the table below:

Spreadsheet Version | # Stories Identified | # Stories Estimated | Total Estimate (Points) | Average Points per Story
0.22                | 135                  | 129                 | 340                     | 2.63
0.26                | 529                  | 395                 | 1037                    | 2.62
0.30                | 494                  | 488                 | 1346                    | 2.75

What can we learn from this?
Firstly, one can see that the average story size isn’t significantly different over this large spread of stories. One could argue that, given the dataset, this could be enough to extrapolate further estimates.
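
As a back-of-the-envelope sketch of that extrapolation (using the v0.30 numbers from the table above, and bearing in mind all the caveats about up-front estimation), the stable average lets you put a rough point total on the stories that were identified but never estimated:

```python
# Back-of-the-envelope: use the observed average to size unestimated stories.
# Numbers taken from spreadsheet version 0.30 above.
identified_stories = 494
estimated_stories = 488
total_points = 1346

avg_points = total_points / estimated_stories          # ~2.76 points/story
unestimated = identified_stories - estimated_stories   # 6 stories
projected_total = total_points + unestimated * avg_points

print(f"average: {avg_points:.2f} points/story")
print(f"projected programme size: {projected_total:.0f} points")  # ~1363
```

Given the caveats above, this is only useful as a sizing signal, never a commitment.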

The next thing to consider is why the numbers tend to average out. One could argue that the story breakdown process for this project leads to stories of roughly the same size, though it would be dangerous to assume all projects have a similar story breakdown process.

Alternatively, one could argue that the estimation process helped us break stories down to be approximately the same size. Either way, it’s an interesting observation and one I’ll continue to explore.

Book Review: Object-oriented Software Metrics

Working for a client in Berlin, plane time is where I normally catch up on some reading. Services like Read It Later make bookmarking online pages for offline reading a pleasure. On this morning’s trip, I finished a book Michael Feathers tweeted about. Titled “Object-oriented Software Metrics” and published 15 years ago, it was easiest to find from an online second-hand book store, and I have to say I enjoyed many aspects of it.

For anyone wondering how interesting a metrics book could be, the author does well to keep the short book punchy and brief. I enjoyed the conversational style of the writing, and the pragmatic nature of his recommendations, such as, “I put the threshold at zero so that we explicitly consider any violations.” He starts the book by explaining that metrics should be used for a real purpose, not just randomly collected, something I’m pleased to say resonates very well with a chapter I’m contributing to a book. It’s obvious he comes from applying metrics with real purpose in the real world, talking through examples where various metrics might drive design choices or further investigation.

The author divides the metrics into two sections: the first focuses on metrics related to estimating and sizing, or project planning; the second on design metrics related to code. The estimation metrics piqued my interest as a reflection on how estimation methods used to be run, or maybe in some places still are, such as Developer Days per Line of Code, or his suggestion of Developer Days per Public Responsibility. The second set proved more relevant to me, though.

The author shares some of his preferred metric thresholds and they, too, resonate strongly with my own views on the size of methods, the number of instance variables in classes, and so on. In fact, I’d say some of his are more extreme, such as 6 message sends per method, against my preferred number of between 5 and 10 depending on the team I’m working with. Part of this, as the author emphasises, is heavily influenced by the programming language of choice.
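
To make the idea of a threshold check concrete, here is a minimal sketch in Python. This is my rough translation, not the book’s tooling: the book predates this kind of thing, and counting call sites as “message sends” is my approximation of its metric:

```python
import ast
import sys

THRESHOLD = 6  # message sends per method, echoing the book's suggested limit

def calls_per_function(source):
    """Count call sites ("message sends") per function in a module."""
    counts = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Calls inside nested functions count toward the outer one
            # as well; acceptable for a sketch.
            counts[node.name] = sum(
                isinstance(child, ast.Call) for child in ast.walk(node)
            )
    return counts

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for name, count in sorted(calls_per_function(f.read()).items()):
            if count > THRESHOLD:
                print(f"{name}: {count} calls (threshold {THRESHOLD})")
```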

Few of the metrics discussed were new to me, having made use of tools like Checkstyle and PMD, although he used several I’ve not really tracked, such as the number of classes thrown away, the number of times a class is reused, and the number of times a class is touched, all of which I’d like to ponder a lot more. One metric I’ve never considered collecting or tracking is the number of problems or errors reported per class/module, though I suspect the overhead of tracking this may outweigh the benefit it brings, because it’s much harder to automate.

His emphasis on the factors that influence code metrics also got me reflecting on my own experiences, which once again resonated strongly. His mention of key classes resonates with the domain model described in Eric Evans’ Domain-Driven Design. I would also anecdotally agree that the type of UI heavily influences the number of supporting classes, with technical interfaces (i.e. APIs) requiring fewer classes than rich GUIs. I like a lot of the distribution graphs he uses and will definitely consider using them in the future.

I’d highly recommend this book if you’ve never really sat down and thought about using code metrics on your projects. It’s got me thinking about a number of interesting side projects around visualisation and further feedback loops.

Performance is Emergent Behaviour

Mark’s tweets got me thinking recently, and I tweeted a handful of short responses back at him. Unfortunately Twitter isn’t a great place to have a long and well-thought-out conversation, so I figured I would blog about it like Mark did.

The gist of the conversation revolved around two positions that seem to be in conflict.

One position states: someone’s behaviour is determined by their environment (or system). This is certainly the view that John Seddon writes about a lot in his book, Freedom from Command and Control.

Most people read this position and (incorrectly) deduce that someone’s behaviour is solely determined by their environment (or system) and that, therefore, it is best to focus only on the environment.

Mark makes a great observation that people perform “differently” even given the same environment. In an environment, sometimes those “differences” may be “better”.

To which I responded in the brevity required by 140 characters, “different strengths and interests at play. Emergent behaviour based on individual and environment.”

Emergent behaviour in this case can be seen as much more than just strengths and interests at play. People are complex systems and highly unpredictable. Each individual goes through many different experiences. Just as an example, everyone’s commute around London is different. Train failure? Tube strike? Traffic on the street? Walking to work? Everyone has different social structures – sleep deprivation from young kids, happiness from a child’s graduation, hard night out celebrating a friend’s send off. Even the weather has a huge impact on people (though everyone responds differently).

It’s rare that I think we are all in the “same” environment all the time.

Add in the topics you’re currently interested in, the events unfolding around you, and the skills, strengths and ambitions at play and, regardless of the system, I hope you start to understand why everyone behaves differently. Of course, some of those elements in the system have only a minor impact on the end result, yet they might explain why some perform “better” (i.e. differently) than others.
