The intersection of technology and leadership


What hypergrowth is like at N26

It’s an understatement to say that N26 is experiencing hypergrowth. In August 2017, we had 450,000 customers. We now have more than 2.3 million customers, with many more joining every day. We’ve almost quadrupled the number of people in tech over that same period.

This is my first experience working in a hypergrowth company as a permanent employee. Although the US has its fair share of hypergrowth companies, Europe has very few. In this post, I want to share some lessons learned and insights.

What is hypergrowth?

You can find several definitions of hypergrowth on the internet. I like to describe it as a company experiencing a doubling effect in growth. Some refer to this as the snowball effect.

The Snowball Effect
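To make the doubling effect concrete, here is a minimal sketch (the starting figure and periods are illustrative, not N26’s actual numbers) comparing linear growth with growth that doubles each period:

```python
# Illustrative only: compare linear growth with the "snowball" of doubling.
def linear(start, increment, periods):
    """Add a fixed increment each period."""
    return [start + increment * p for p in range(periods + 1)]

def doubling(start, periods):
    """Double each period: the snowball effect."""
    return [start * 2 ** p for p in range(periods + 1)]

print(linear(100, 100, 6))   # [100, 200, 300, 400, 500, 600, 700]
print(doubling(100, 6))      # [100, 200, 400, 800, 1600, 3200, 6400]
```

After six periods the doubling series is almost ten times the linear one, which is why the game changes really, really fast.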

Or simply put:

“The game changes really, really, really fast.”

Patrick Kua

What does it feel like?

I’m sure I picked up this analogy elsewhere, but I can’t find the reference. Regardless, the imagery stuck with me.

“Working in hypergrowth feels like you’re building the spaceship as it’s flying”

Unknown source
Riding the rocketship of hypergrowth

Hypergrowth can feel chaotic. The organisation doesn’t grow at the same rate everywhere, so you feel bottlenecks emerge. A few weeks later, the constraint is often resolved, only to move to a different area. Hypergrowth means constant but rapid change. Upon returning from a week’s holiday, many people ask, “What’s changed?” They know that somewhere a team structure, process, or decision has changed.

Hypergrowth means uncertainty. I am comfortable stating what I think may happen in three or six months’ time. I’m also used to being wrong. I try to put statements about the future into context, explaining possible alternatives.

Hypergrowth demands new capabilities and skills. I have read how an organisation grows exponentially faster than its individuals, and I’ve seen this first hand. Skills take time to master, yet the organisation requires those skills now. Rather than simply waiting for people to learn everything required, I’ve learned you need to do both: allow people to grow, but also bring in expertise for them to learn from. This is why I am a big fan of pair programming (or pairing in general). It’s a great way of transferring experience across people.

Why work in an environment like this?

You may be reading this and wonder, “Why on earth would I want to work in an environment like this?” Here are five reasons why it’s worth it:

  1. New problems to solve – Engineers love tackling new problems. Our product changes all the time. We improve the security built into our product. We look at ways to scale our systems and improve our ways of working.
  2. New skills to develop – New problems and a changing environment force people to build new skills. I have seen so many people grow in many different ways.
  3. See a business “grow up” – Every six months, it’s like working with a new company. At the same time, you have personal relationships across the business. This means you’re not always starting from scratch. What started out as a single person may now be a whole team. What was once a whole team may now be an entire department.
  4. Ability to have a big impact – Our founders have a broad mission. It’s exciting to work on something that millions of people use. It’s also great to be a B2C product where you get to use your own product too!
  5. Everyone can be a leader – Some people may get hung up on titles. Like I wrote in a previous blog post, everyone can be a leader. There are always opportunities to show acts of leadership and have immediate impact.

How we support people

Although everyone owns their personal career path, I’ve tried to support people as much as I can. I run an explicit Tech Lead Development Program. This gives people in, or heading towards, a Tech Lead role explicit expectations and tools for improving their impact. Leaders build other leaders. We’ve been deliberate in how we structure our Product and Tech teams. I introduced a Target Operating Model: a written-down mental model of how we’d like to work. It often incorporates new roles and structures, and explains the why and the what. Although we experience hypergrowth, it doesn’t mean we do so without trying to shape it.

We listen for feedback throughout the organisation. The leadership team takes a company-wide pulse on a quarterly basis. Tech teams use retrospectives to take on improvements. Organisational smells outside a team’s influence get escalated, and we deal with them as best we can.

I try to create as much transparency as possible. We have a shared product roadmap. Company-wide updates about events are announced at the start of each week. We end the week with company-wide celebrations.

Is this for you?

I will admit that this environment is not for everyone. Our environment may be suitable if:

  • You are looking for new and interesting challenges; or
  • You love an environment with constant change; or
  • You want to work in a place which tries to manage this intentionally; or
  • You are looking to rapidly grow and show acts of leadership.

We are always looking for new talent to add to our culture. Join me on our mission to build the bank the world loves to use. Look at our open roles and apply now.

Learning about More with LeSS

Background

I took part in a three-day course before Christmas to better understand Large Scale Scrum (LeSS). LeSS’ tagline is “More with LeSS.” I’m pessimistic about most “Scaling Agile Frameworks”: many give organisations an excuse to relabel their existing practices as “agile” rather than fundamentally change them. Bas Vodde (one of the founders of LeSS) invited me to take part in a course just before Christmas. I took him up on the offer to hear it from the horse’s mouth.

This article summarises my notes, learnings and reflections from the three-day course. There may be errors, and I would encourage you to read about it yourself on the LeSS website, or post a comment at the end of this article.

About the Trainer

I met Bas Vodde about a decade ago at one of the Retrospective Facilitators’ gatherings. He is someone who, I believe, lives the agile values and principles and has been in the community for a long time. He still writes code, pair programming with the teams he works with. He has had a long and successful coaching history with many companies, including huge organisations where many people build a single product together (think of a telecommunications product, for example). Through the experiences he shared with his co-founder, Craig Larman, they distilled these ideas into what is now called LeSS.

What I understood about LeSS

LeSS evolved from using basic Scrum in a context with many, many teams. I took away that there are three common uses of the term LeSS.

  • LeSS (The Complete Picture) – The overview of LeSS including the experimental mindset, guides, rules/framework, and principles. See the main website, LeSS.
  • LeSS (The Rules/Framework) – The specifics of how you operate LeSS. See LeSS Rules (April 2018).
  • LeSS (for 2-8 teams) – Basic LeSS, abbreviated to LeSS, is optimised for 2-8 teams. LeSS Huge covers 8+ teams, with modifications to the rules. See LeSS Huge.

Practices & Rituals in LeSS

LeSS has a number of practices and rituals as part of its starting set of rules. Some of these include:

  • A single prioritised Backlog – All teams share a single backlog with a priority managed by the Product Owner.
  • Sprint Planning 1 – At the end of this, teams have picked which Backlog Items they work on during a sprint.
  • Sprint Planning 2 – All teams do this separately. Like in Scrum, Sprint Planning 2 focuses on the design and creation of tasks for their Sprint.
  • Daily Scrum – Each team runs their own Daily scrum as per standard Scrum.
  • Backlog Refinement – Teams clarify what customers/stakeholders need. Good outcomes include Backlog Items refined into sizes where teams can take 4-5 into a Sprint. LeSS encourages groups, made up of different team members, to refine Backlog Items. This maximises knowledge sharing, learning and opportunities to collaborate.
  • Sprint Review – Teams showcase their work to customers/stakeholders for feedback. The Product Owner works to gather feedback and reflect this in the overall Backlog. Sprint Reviews should not be treated as an approval gate. It’s about getting more input or ideas.
  • Sprint Retrospective – Each team runs their own retrospective, as per standard Scrum.
  • Overall Retrospective – Members from every team plus management hold a retrospective. This retrospective focuses on the system and improving the overall system.
  • Shared Definition of Done – All teams share an overall Definition of Done, which they can also update. Teams can build on the basis of the shared Definition of Done.
  • Sprint – There is only one sprint in LeSS, so by definition all teams synchronise on the same sprint cadence.

Roles in LeSS

  • Scrum Master – Like in Scrum, LeSS has the Scrum Master, whose goal is to coach, enable and help LeSS run effectively. The Scrum Master is a full-time role serving up to three teams.
  • Product Owner – The Product Owner is the role responsible for the overall Backlog prioritisation.
  • Area Product Owner – In LeSS (Huge), Area Product Owners manage the priority of a subsection of the Backlog. They also align with the Product Owner on overall priorities.
  • Team – There are no explicit specialist roles in LeSS, other than the team (and its members).

Principles of LeSS

A key part of LeSS is the principles that guide decisions and behaviours in the organisation. People can make better decisions when taking these principles into account. You can read more about LeSS’ principles here. Like many other agile ways of working, Transparency is a key principle. Unlike other agile methods, LeSS calls upon both Systems Thinking and Queuing Theory as principles. Both are useful bodies of knowledge for creating more effective organisations.
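LeSS itself doesn’t prescribe formulas, but a small illustration of why Queuing Theory earns its place as a principle: Little’s Law relates work in progress, throughput and lead time, and shows why limiting queues shortens waiting. A sketch with made-up numbers:

```python
# Little's Law: avg items in system (L) = arrival rate (lambda) x avg time in system (W).
# Rearranged for teams: lead_time = work_in_progress / throughput.
def lead_time(work_in_progress, throughput_per_week):
    """Average weeks a Backlog Item spends in the system, assuming a stable system."""
    return work_in_progress / throughput_per_week

# A group of teams finishing 5 items/week with 30 items in flight:
print(lead_time(30, 5))   # 6.0 weeks on average
# Halving the queue halves the wait, without anyone working faster:
print(lead_time(15, 5))   # 3.0 weeks
```

This is why a single, short, prioritised Backlog matters more than keeping everyone busy.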

Another explicit difference is the principle of the Whole Product Focus. This reminds me very much of Lean Software Development’s Optimise the Whole principle. I also very much like the description of the More with LeSS principle, which challenges adding more roles, rules and artefacts. So think carefully about these!

Overall observations

  • In LeSS, having LeSS specialisations is a good thing. This encourages more distributed knowledge sharing.
  • LeSS explicitly prioritises feature teams over component teams to maximise the delivery of end-to-end value. Both have trade-offs.
  • LeSS doesn’t explicitly include technical practices in its rules. It assumes organisations adopt modern engineering practices. To quote their website, “Organizational Agility is constrained by Technical Agility.”
  • A lot of LeSS has big implications for organisational design. Agile teams showed how cross-functional teams reduce waste by removing hand-offs. LeSS is even more demanding on organisations and their structure.

LeSS Huge

The creators of LeSS made LeSS Huge because they found the Product Owner was often a constraint. Since Product Owners focus on prioritisation, it’s hard to keep an overview and manage the priority of 100+ Backlog Items. (Note that teams still do the clarification, not the Product Owner.) With 8+ teams, they found even good Product Owners could not keep on top of the ~100+ refined Backlog Items (which normally cover the next 3+ sprints).

LeSS Huge addresses this by introducing Categories (aka an Area). Each Backlog Item has its own category, and each category then has an Area Product Owner to manage the overview and prioritisation of Backlog Items in that category.

Guidelines for creating an area:

  • Areas should be purely customer-centric
  • Often grouped by stakeholder, or certain processes
  • Could be organised by a certain market or product variant
  • No area in LeSS Huge should have fewer than 4 teams

Conclusions

After taking the course, I have a much stronger understanding of LeSS’ origins and how it works. It now feels much LeSS complex than when I first read about it on their website. It includes many of the principles I run software teams by, and I can see many parallels between LeSS and what I have done with larger organisations. I can also see how LeSS is a challenging framework for many organisations. I would definitely recommend larger product organisations draw inspiration from LeSS. I know I will after this course.

Just Published: Building Evolutionary Architectures

I’m very proud to announce the release of a new book that I co-authored with Neal Ford and Rebecca Parsons whilst I was at ThoughtWorks. Martin Fowler wrote the Foreword (snippet below):

While I’m sure we have much to learn about doing software architecture in an evolutionary style, this book marks an essential road map on the current state of understanding.

Building Evolutionary Architectures

It marks the end of a very long project that I hope will have a positive impact on the way developers and architects consider building and designing software. We will also post related news to our accompanying website, evolutionaryarchitecture.com.

You can find the book at the usual online retailers.

Enjoy!

Disrupt yourself, or someone else will

A couple of days ago, Apple announced their new range of iPads including a mini with a retina display and a lighter, thinner full-sized iPad. Notice anything strange about their prices?

iPad2 Pricing

As you can see, the new retina mini is priced at exactly the same price point as the (old) iPad 2. Seem strange? You’re not the only one to think so, as outlined by a comment from this AppleInsider article.


This isn’t the first time Apple has priced a new product the same as an older product (and it probably won’t be the last).

The Innovator’s Dilemma
In the book, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Clayton Christensen shows how organisations fail to embrace new technologies because they focus on the success of a single product or technology.

By offering both lines (new and old), Apple effectively segments its market into the early adopters of technology while providing (older and probably less appealing) options to everyone else. A conflict of features (retina display in a smaller form factor) makes users decide what is more important to them.

By eventually removing the older iPad 2, they force consumers to move to the new platform, even though some of their customers may be happy with the existing product. Apple is effectively disrupting itself before the competition can.

Using our cognitive biases

Having the same price point for a newer technology also taps into the anchoring cognitive bias (sometimes called the relativity trap).

Without the older product in the lineup, the newer products would not appear so appealing. The “anchor” of the older product is effectively pushing people to the newer products.

A person would ask:

Why would I buy older technology for the same price?

and then make the trade-off for a smaller size or, if they want the larger size, pay another premium for it.

Book Review: Rethinking the Future

I recently finished the book, “Rethinking the Future,” and I have to say how impressed I was by it. The book is structured as a collection of essays from well-known leaders and authors in different fields. I knew many, but not all, of the contributors and, as a result, the book offers a wide variety of perspectives: some complement, and others contrast with, each author’s very opinionated view of the “future.” Bearing in mind this edition was published in 1998, I find it interesting to see how relevant many of the writings still are today.

Rethinking the Future

Definitely focused as a business book, the contents are divided into chapters trying to envisage the future from many different angles, including the way that businesses work, competition, control and complexity, leadership, markets and the world view. The book resonates very strongly with some more recently published works, such as understanding what truly motivates people (i.e. Dan Pink’s Drive), or the need for management to balance more and more states of paradox (e.g. Jim Highsmith’s Adaptive Leadership).

I don’t necessarily agree with all of the contributions in the book, particularly the idea of being focused on a single thing as described in the chapter, “Focused in a Fuzzy World.” I agree some focus is important, but I also believe in order to innovate, you sometimes have to unfocus. I see this as the problem often described by the Innovator’s Dilemma.

Showdown: Systems Thinking vs Root Cause Analysis

I gave a presentation in Recife about Systems Thinking and got a great question about where root cause analysis fits in versus systems thinking, which describes emergent behaviour and suggests there may be no single cause of a system’s behaviour.

Fight
Image courtesy of tamboku under the Creative Commons licence

Firstly, I like the quote from statistician George E. P. Box: “essentially, all models are wrong, but some are useful.”

What I like about root cause analysis is how it teaches you not to react to symptoms. It encourages you to look at the relationships between observations and move deeper. All of this is subjective interpretation and, like systems thinking, depends on how a person draws the relationships. From this perspective, they are similar.

Many people describe the five whys as a root cause analysis technique, and it is one I used to draw upon often. I now prefer the fishbone method because it encourages you to consider that there may be more than one cause for an effect you see.

When you take the results of root cause analysis and try to see if there are any cyclic relationships, you might end up identifying more effective leverage points where breaking, accelerating or dampening a reinforcing loop with a small effort might have a significant impact on the system.

After studying complexity theory, I find an interesting approach is to never think about these models in a mode of conflict. Instead, look at where each offers value and apply them where you can realise that value. Never look at models as competing (OR-mode thinking); view them as complementary (AND-mode thinking).

Systems Diagramming Tools

Just a quick reminder to myself about a number of tools available to people interested in Systems Thinking:

  • Flying Logic (Commercial) – My favourite so far, with nice looks and an emphasis on building the diagram collaboratively instead of simply focusing on output. It automatically adjusts the layout when adding nodes for minimal line cross-overs. Can be a bit nauseating sometimes.
  • Graphviz (Free) – Simple looping diagrams that are easy to automate. Not as good a representation for causal loops if you want to distinguish amplifying/dampening cycles, but good bang for buck.
  • Omnigraffle (Commercial) – A diagramming tool that makes very snazzy diagrams. Less powerful on automatic layout than Flying Logic; mostly manual.
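You don’t strictly need any of these tools to get started: a causal loop diagram is just nodes and labelled edges, so you can emit one as Graphviz DOT text from plain Python and render it later with the dot command. A sketch (the loop and its node names are invented for illustration):

```python
# Emit a causal loop diagram in Graphviz DOT format.
# Edge labels mark link polarity: "+" (same direction) or "-" (opposite direction).
def causal_loop_dot(edges):
    lines = ["digraph causal_loop {", "  rankdir=LR;"]
    for src, dst, polarity in edges:
        lines.append(f'  "{src}" -> "{dst}" [label="{polarity}"];')
    lines.append("}")
    return "\n".join(lines)

# Example loop (invented): delivery pressure feeding technical debt.
loop = [
    ("Pressure to deliver", "Shortcuts taken", "+"),
    ("Shortcuts taken", "Technical debt", "+"),
    ("Technical debt", "Delivery speed", "-"),
    ("Delivery speed", "Pressure to deliver", "-"),
]
print(causal_loop_dot(loop))  # pipe into `dot -Tpng` to render
```

Generating the text rather than drawing by hand makes the diagram easy to version-control and regenerate, which is Graphviz’s main bang for buck.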

Summary of XP2011

The first full day of XP2011 had a pretty full schedule, as I had to prepare for two lightning talks on different subjects. Fortunately, both topics were very close to my heart: one about learning using the Dreyfus Model (slides) and the other about Systems Thinking (slides). The second day started off with a great breakfast selection at the conference hotel before kicking into the keynote by Esther Derby. Clearly jetlagged, Derby used a set of hand-drawn slides to explain her topic, “No Silver Bullets.”

Her presentation style was very conversational, and I can’t say the crowd responded well to it. Perhaps it was their jetlag as well, or the way the room had been set up. Nevertheless, through many of her stories, I still saw many heads nodding and a really great response on Twitter to the things she was saying.

I’ve followed Derby’s writing for years and could only wish more people were exposed to it. Many of the topics and opinions I find interesting were reinforced, such as how failing to address the management layer inevitably means agile adoption hits a hard ceiling, or the oscillating behaviour that results when managers attempt to react to a system with long delays in its feedback cycle. I appreciated the very vivid term “Bang! Bang!” management, describing the style of managers who seem to have only two distinct and opposing reactions to a system, unable to moderate their use and wait for the system to find a new equilibrium. If you imagine these two opposing reactions as the result of a huge iron lever being flipped, hopefully you can imagine where the noise comes from.

Derby covered lots of different areas, quoting people like Donella H. Meadows (“The original purpose of hierarchies was to serve the sub systems, not the other way around”) and the work George Lakoff does on word association and the metaphors in our everyday use. Raising self-awareness of your own in-built biases and metaphors is another key thing she emphasised, focusing on the judgements, habits, feelings, thoughts, mental models, beliefs, rules and values we tend to be intrinsically governed by. I particularly liked the phrase she uses to help people uncover their own and others’ mental models: “In what world would this make sense?”

She told one great story about the dangers of measurements as targets, using the example of the manager who decided to “Grade developer estimates”. This manager decided to give A’s to those who estimated on time, B’s to those who estimated over time, and C’s to those who estimated under time. Of course, you can imagine what magically happened as people’s grades mysteriously improved.

She also reminded me of the work of Ackoff, which I need to revisit, and the great writing he has done about Systems Thinking. I have only been able to refer to The Fifth Discipline as a Systems Thinking book, but I really need to read his other ones to see if they would be of use, or are more accessible.

The rest of the day was a bit of a blur. A couple of highlights included seeing Marcus Ahnve take the work Luca Grulla and Brian Blignaut did with TDDing JavaScript to the next level and do a demo of BDD.

David J. Anderson also reminded me of the importance of thinking in terms of the language executives speak in order to better get our message across. He reminded me of all the great things Ross Pettit has to say, although I think Anderson’s analysis of accounting for software development costs doesn’t seem to match some of the data I’ve heard from Pettit.

There was so much more to the conference: as always, great conversations emerged, and the wonderful atmosphere of the hotel added to the uniqueness of this event.

Data on Estimation vs Number of Stories

Last year, I worked on an inception trying to work out how long a three-way product merge and rebuild would take. The business wanted to know what they could have by the start of summer this year.

During this intense four-week inception, we identified a huge number of stories – way more than I had ever identified in previous inceptions. Almost 500 stories by the end. I can’t recommend anyone go through this experience, though we had drivers that meant we couldn’t avoid it this time.

My previous experience and gut feel tell me that 100-ish stories (when I’m working with people to break the work down) is probably enough for a small dev team (3 dev pairs) for about 3 months. This was definitely a whopping year-long programme of work (if done right).

We also had a lot of pressure to estimate them all. Up front. Obviously, attempting to estimate a year’s worth of work up front is going to be pretty inaccurate. The longer the piece of work, the more assumptions will change, and the more estimates made on those assumptions will be wrong. I know. However, people still wanted numbers to understand how large this programme of work would be.

Some statistics
We ran incremental estimation sessions using relative story sizing, with Fibonacci planning poker, estimating in points. Our maximum point size was 8 points, though 5 was generally the highest we used; only about 1 in 30 cards was that size.

We even iterated over a few estimates at random intervals to see if our relative sizing of stories changed significantly.

Interestingly enough, we saved spreadsheets at various times during our estimation, and I’ve pulled some statistics from them, laid out in the table below:

Spreadsheet version | Stories identified | Stories estimated | Total estimate (points) | Average points/story
0.22 | 135 | 129 | 340 | 2.63
0.26 | 529 | 395 | 1037 | 2.62
0.30 | 494 | 488 | 1346 | 2.75
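The per-story averages follow directly from dividing total points by stories estimated; a quick sketch recomputing them from the totals (minor rounding differences aside):

```python
# (version, stories_estimated, total_points) taken from the table above.
snapshots = [
    ("0.22", 129, 340),
    ("0.26", 395, 1037),
    ("0.30", 488, 1346),
]

for version, estimated, points in snapshots:
    avg = points / estimated
    print(f"v{version}: {avg:.2f} points/story")

# All three averages cluster between roughly 2.6 and 2.8 points/story,
# which is what makes extrapolating from a sample so tempting.
```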

What can we learn from this?
Firstly, one can see that the average story size isn’t significantly different across this large spread of stories. One could argue that, given the dataset, it could be enough to extrapolate further estimates.

The next thing to consider is why the numbers tend to average out. One could argue that the story breakdown process for this project leads to stories of roughly the same size. It would be dangerous to assume all projects have a similar story breakdown process.

Alternatively, one could argue that the estimation process helped us break stories down to approximately the same size. Nevertheless, it’s an interesting observation and one I’ll continue to explore.

Book Review: Object-oriented Software Metrics

Working for a client in Berlin, I find plane time is where I normally catch up on some reading. Services like Read It Later make bookmarking online pages for offline reading a pleasure. On this morning’s trip, I finished reading a book Michael Feathers tweeted about. Titled “Object-Oriented Software Metrics” and published 15 years ago, I found this book most easily from an online second-hand book store, and I have to say I enjoyed many aspects of it.

I wondered how interesting a metrics book could be, but the author did well to keep the short book punchy and brief. I enjoyed the conversational style of the writing and the pragmatic nature of his recommendations, such as “I put the threshold at zero so that we explicitly consider any violations.” He starts the book by describing metrics and arguing that they should be used for a real purpose, not just randomly collected – something I’m pleased resonates very well with a chapter I’m contributing to a book. It’s obvious he comes from applying metrics with real purpose in the real world, talking about examples where various metrics might drive various design choices, or further investigation.

The author divides the metrics into two sections: the first focuses on metrics related to estimating and sizing, or project planning; the second focuses on design metrics related to code. The estimation metrics piqued my interest as a reflection on how estimation methods used to be run, or maybe in some places still are, such as Developer Days per Line of Code, or his suggestion of Developer Days per Public Responsibility. The second set proved more relevant to me.

The author shares some of his preferred metric thresholds and they, too, resonate strongly with my own views on the size of methods, number of instance variables in classes, etc. In fact, some were even more extreme than mine, such as 6 message sends per method, where my preferred number is between 5-10 depending on the team I’m working with. Part of this, as the author emphasises, is heavily influenced by the programming language of choice.
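Thresholds like these are easy to automate in most languages. As a rough Python analogue of “message sends per method” (counting call expressions per function with the standard ast module; a sketch of the idea only, not the book’s original tooling):

```python
import ast

def calls_per_function(source):
    """Count call expressions (a rough proxy for message sends) per function."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Count every Call node inside this function's body.
            counts[node.name] = sum(isinstance(n, ast.Call) for n in ast.walk(node))
    return counts

sample = """
def report(items):
    cleaned = [str(i).strip() for i in items]
    print(len(cleaned))
"""
print(calls_per_function(sample))  # {'report': 4}
```

Wiring a check like this into the build, with the threshold set where the team agrees, turns the metric from a one-off report into a continuous feedback loop.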

Few of the metrics were new to me, having made use of tools like Checkstyle and PMD, although he used several I’ve not really tracked, such as the number of classes thrown away, the number of times a class is reused, and the number of times a class is touched – something I’d like to ponder a lot more. One metric I’ve never considered collecting is the number of problems or errors reported per class/module, though I suspect the overhead of tracking this may outweigh the benefit because it’s much harder to automate.

His emphasis on the factors influencing code metrics also got me reflecting, once again resonating strongly with my own experiences. His mention of key classes resonates with the domain model described in Eric Evans’ Domain-Driven Design. I would also anecdotally agree that the type of UI heavily influences the number of supporting classes, with technical interfaces (i.e. APIs) requiring fewer classes than rich GUIs. I like a lot of the distribution graphs he uses and will definitely consider using them in the future.

I’d highly recommend this book if you’ve never really sat down and thought about using code metrics on your projects. It’s got me thinking about a number of other interesting side projects about visualisation and further feedback loops on projects.


© 2024 patkua@work
