The intersection of technology and leadership

Category: Philosophy

QCon London 2010 Day 2

Keynote: Ralph Johnson on Living and working with aging software
The final keynote of the conference kicked off the second day of QCon. It was presented by Ralph Johnson, of Gang of Four fame, whose students contributed to Fowler’s timeless Refactoring book.

Ralph asked how many people work on systems they created (versus systems that others created) and then questioned the label “maintenance programmer”, arguing for “software evolutionists” instead. Whilst I agree that no one likes to be labelled a maintenance programmer, changing the name doesn’t really matter.

I like the different perspectives that Ralph brought. He talked about Software Capital as being about truly understanding how the software works. Code is one example of this, but many other things go into understanding how the software works. He argues that the right sort of documentation (and other artefacts) that helps build this understanding is another part of Software Capital. As a result, you also need the people around the software to help keep the story alive.

My observations of various organisations also back up some of what he says: the larger a system gets, the more likely you are to need some form of documentation and social structure around the software.

I think a lot of people didn’t like Ralph’s keynote because he talked about refactoring as if it were a new idea and, unfortunately, this audience has been exposed to these ideas for a long time.

Track: Beyond Tools and Technologies
I spent the rest of my time on the track that I was speaking at later that day. The first talk of the day was Martin Fowler’s “What are we programming for?”, which brought the tiny room to a standstill. The room was so full that they had to turn people away.


Martin used the talk to highlight the importance of going beyond acting like hired guns and actually thinking about how to use our skills for something good. He asked people to consider the essence of the firms they work for and whether or not those firms are benefiting humanity.

Rebecca Parsons, ThoughtWorks’ CTO, then talked about the importance of team diversity in her talk, “How to avoid ‘We never even thought of that’!” She talked about the role that diversity played in creating innovations in science and how too much “sameness” leads to too many shared assumptions. I like the idea of balancing the tradeoffs: not enough diversity leads to problems in teams, while too much diversity means the team doesn’t share the same vocabulary.

After the lunch break, I ran my session, “Building the next generation of Technical Leaders”. I was very happy with the turnout and had some great feedback from the participants. It’s an issue I think is sorely neglected in our industry and one that has a huge impact on team effectiveness.

The next session was run by our Director of Social Engagement (Jeff Wishnie) and Merrick Schaefer of Unicef, discussing the importance of technology in developing countries and the significant impact it can have on people’s lives. They talked about the Rapid FTR project, which is being developed to help capture information about children who’ve been separated from their parents, with the hope of quickly reconnecting them or, at worst, giving them legal status by registering them with an agency such as Unicef.

The final session of the day rounded things out with Mick Blowfield’s presentation, “IT in a Low Carbon Future”.

Whilst there were many other potentially interesting sessions, I was really happy to hear about the issues that go beyond just Tools and Technology.

A Community of Thinking Practitioners

I first read “A Community of Thinkers”, which Liz, Jean and Eric published late last year. I remember thinking that I felt strongly aligned to it, yet slightly uncomfortable with the exact wording. I toyed around with some words and now, a couple of months later, I am much more comfortable with a slightly abridged version.

It isn’t enough to be a member of a community of thinkers. We can philosophise and think ourselves to death. The world continues to operate in complex ways (yes, as in this, and this, sort of complexity). It is not enough to sit back and only think. We need to experiment. We need to apply. We need to practise, then reflect and feed those learnings back into our thinking. This is the essence I respect most about certain people in the agile community. This is what I want to keep alive. Remind yourself that Do is just as important a part of PDCA as Check (Reflect) is.

I am not just a member of a community of thinkers; I am a member of a community of thinking practitioners. If you’re not practising and actively thinking, you’re not part of my community.

A Community of Thinking Practitioners
I am a member of a community of thinking practitioners.

I believe that communities exist as homes for professionals to learn, teach, and reflect on their work.

I challenge each community in the software industry to:

  • reflect, honor and respect the practitioners who make its existence possible;
  • provide an excellent experience for its members;
  • support the excellent experience its members provide for their clients and colleagues in all aspects of their professional interactions;
  • exemplify, as a body, the professional and humane behavior of its members;
  • engage and collaborate within and across communities through respectful exploration of diverse and divergent insights;
  • embrace newcomers to the community openly and celebrate ongoing journeys; and
  • thrive on the sustained health of the community and its members through continual practice, reflection and improvement.

I believe that leaders in each community have a responsibility to exhibit these behaviors, and that people who exhibit these behaviors will become leaders.

I am a member of a community of thinking practitioners. If I should happen to be a catalyst more than others, I consider that a tribute to those who have inspired me.

This one is based upon the original posted on Liz Keogh’s blog here. It is licensed under the Creative Commons Share Alike License. All modifications/addendums I made are emphasised in italics.

Shu Ha Ri as the flow of Energy

Andy wrote a great blog post trying to relate Shu Ha Ri to the Dreyfus Model of Skills Acquisition. When I posted my thoughts, he suggested I blog about my story, so here it is.

In its simplest form, Shu -> Ha -> Ri roughly translates to Follow -> Detach -> Transcend. Thinking back to the days when I studied Aikido (where I believe these concepts originate), I came to see Shu Ha Ri as the flow of energy, or where you focus the majority of your effort.

Flow

Photo of energy taken from HocusFocusClick’s Flickr stream under the creative commons licence

A Shu person, for example, focuses their energy on simply executing a very basic move. They repeat the kata over and over, with the weight of their conscious thought mostly on, “I move my arm up to block”.

The Ha person no longer follows the rote kata, “detaching” from the original conscious thought and focusing instead on its application. They spend their time thinking, “An arm is coming my way, I’d better block”.

The Ri person is certainly spectacular to witness, with energy flowing from move to move, something the dojo sensei demonstrated during a yearly open house. During this event, lasting a good twenty minutes, five black belt students attacked the sensei from all sides. They attacked with their hands and a small assortment of weapons. The sensei defended by turning, locking and throwing each student back in return. What I still remember vividly is the contrast between the black belts, completely drenched in sweat, and the sensei, who had barely broken a sweat.

I see this same flow of energy and focus of effort when watching people learn development skills. At one end of the spectrum, the Shu developer spends enormous effort thinking about how to execute a particular practice. At the other end, for the Ri developer, the right practices simply occur and great quality code (and tests) appear.

Support multiple models

As I gather more experience (i.e. get older), I’ve discovered that every model has a breaking point. What does that mean and why should you care? Accepting that models break is the first step to understanding and identifying their limitations. More importantly, because models have a breaking point, you should be actively discovering other models that help you better communicate and grasp new concepts.

Sounds easy, right? Unfortunately, my experience proves the opposite, with most people wanting to hold only a single “valid” model that manages to explain and justify everything. I see this as a consequence of western education guided by Socratic thought and a Platonic ideal, but that is a post for another time. In real life, this desire to hold onto “one valid model” translates into arguments over the merits of a particular model, often as the basis for justifying a position. Note that I have no problem with arguing for the sake of testing and discovering the boundaries of a particular model.

What do you do about it?

Accept that models are simplifications of sometimes complicated, sometimes complex systems. Be open to exploring the boundaries of a particular model, uncovering where it excels at explaining certain characteristics of a system. Seek out and invent new models that provide a different point of view, or that emphasise and highlight different aspects of that system.

You can’t measure everything effectively

I definitely agree with this, from “10 Things I Wish Lean Practitioners Wouldn’t Say in 2010” (via the Lean Blog):

7. “If you can’t measure it, you can’t manage it.”

I don’t believe that statement. I think this phrase should be avoided and it certainly shouldn’t be attributed to Dr. W. Edwards Deming, as it often mistakenly is. Dr. Deming never said this and he, in fact, meant quite the opposite. Some of the most important factors in a system are very difficult, even impossible, to measure. That doesn’t mean you can’t try to manage them. John Hunter, friend and fellow blogger, has the definitive take on this here…

Don’t get me wrong. Metrics are important, but they aren’t always the most important thing.

Is the term ‘Agile’ still useful?

This year proved interesting for those in the agile community, or at least for me. The most commercialised agile methodology lost one of its largest figureheads, and a new community of thinkers emerged, focused on prioritising a practice often implied or described as optional in other methodologies. What I found fascinating about a new community forming is that, in finding its own identity, it naturally ends up denouncing ties with existing communities (to a certain degree) and inviting comparisons about whether or not it is “agile”.

This led me to ask myself, “What is wrong with agile?” To better understand it, I went back to find out how “agile” first started. From my understanding, a community of thinking practitioners (not just thinkers) coined the term “agile” to unite people working in different ways and to help them identify with each other. The actual methods for working in an “agile” fashion existed well before they coined the term.

I respect and heartily thank this group of thinking practitioners for coining the term “agile”: a simple word acting as an umbrella that encourages people to swap ideas between communities. I also recognise that this same “vagueness” that draws people together makes it difficult for newcomers to identify what it means.

Look at how other communities relentlessly protect their set of practices, branding and declaring that you are or are not doing methodology X or approach Y, particularly when reinforced with certification programs. I’ve realised this protectionist attitude acts as a wall that stops potentially beneficial new ideas from spreading between people and organisations.

I maintain that the term “agile” remains useful. Let us not forget the original intent behind the word. Let us embrace and create opportunities to continue welcoming all thinking practitioners from different backgrounds to connect and swap ideas and experiences, and so become more effective and successful.

XP2009 Day 1

General impressions about the conference
I really enjoyed this year’s conference: the combination of a remote island in Italy and the small numbers (100+) gave many great opportunities for chatting with other experienced practitioners, and a handful of academics, about lots of different topics. I found it refreshing that there seemed to be significantly more experienced practitioners this year, so I could chat about shared experiences rather than give the kind of one-directional advice I find myself offering when there is a higher proportion of beginners.

Photo of the conference pool

Who wouldn’t want to gather around this place for some great conversations?

The quality of sessions was better than at the last two conferences, once again focused less on introductory material and more on specific aspects. Of course, I had recommendations about how to improve the conference, particularly its organisational aspects, and I at least had an opportunity to give that feedback, having shared a return train with one of the organisers of next year’s conference.

Thoughts about the first day
The first part of the day was a keynote delivered by lean evangelist Mary Poppendieck. Titled “The Cultural Assumptions behind Agile Software Development”, it proposed that there are several (American-style) cultural assumptions behind many of the agile practices, assumptions that make them all the more difficult to implement. She referenced heavily the work of Geert Hofstede on cultural dimensions.

I didn’t find the keynote particularly inspiring, nor particularly challenging. Country-based cultural dimensions are just one of the factors that shape the way people behave. As an agile consultant, you end up fighting corporate culture and the systems that encourage and maintain that corporate culture, and I see country-based cultural dimensions as yet another contributing systemic effect. This does not mean that just because a country has a high degree of individualism, working in pairs or working collaboratively in a team will be impossible (perhaps just all the more difficult). As much as I enjoy hearing Mary speak, I also found her presentation a little too heavy on PowerPoint, with far too much text and outdated clipart.

I also ran my workshop in the morning, titled “Climbing the Dreyfus Ladder of Agile Practices”, and want to thank all the experienced people who attended, as it resulted in some really rich discussions. We managed to discuss seven different agile practices in detail, brainstorming a large set of behaviours for each and classifying them into the different levels described by the Dreyfus Model of Skills Acquisition. The results from the workshop can be found here (photos to be updated).

In the afternoon, I helped out with Emily Bache’s coding dojo, focused on Test Driven Development. We saw four different pairs tackling four different coding problems using different tools and languages. I learned about the tool JDave (it’s still a little too verbose for my liking), and saw different styles in the way people executed the code kata. I was hoping to demonstrate more of the pair programming side of test driven development, and I had a lot of fun even though I felt a little out of my depth with the tools. Thanks to Danilo for compensating for my lack of experience with tools like Cucumber. 🙂

More to come…

Repeatability distracts from the real goal

Most organisations emphasise repeating the “process”, which distracts from the real goal: repeating “success”. You need a certain amount of flexibility and adaptability in your process because you never work in the same environment twice.

This does not mean you do not use things that have worked well for you in the past. It means you should be prepared to change them if you can do it better.

The world mocks too much

One of my biggest annoyances when it comes to testing is when anyone reaches for a mock out of habit. The “purists”, whom I prefer to call “zealots”, are overwhelming in number, particularly in the TDD community (you do realise you can test drive your code without mocks?). Often I see teams use excuses like, “I only want to test a single responsibility at a time.” It sounds reasonable, yet it’s generally the start of something more insidious, where suddenly mocks become the hammer and people want to apply them to everything. I sense it through signals like people saying “I need to mock a class”, or “I need to mock a call to a superclass”. Alternatively, it’s quite obvious when “stub” or “ignored” calls outnumber “mocked” calls by an order of magnitude.
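
To make that last signal concrete, here is a minimal, hypothetical sketch using JUnit 4 and Mockito – every class and method name below is invented purely for illustration – of a test where the stubbing noise needed to get the object under test running dwarfs the single interaction actually being verified:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OverMockedBillerTest {

    // Invented collaborators, purely for illustration.
    interface PriceLookup { int priceOf(String sku); }
    interface TaxRule { int taxOn(int amount); }
    interface AuditLog { void record(String entry); }

    // Invented class under test.
    static class Biller {
        private final PriceLookup prices;
        private final TaxRule tax;
        private final AuditLog audit;

        Biller(PriceLookup prices, TaxRule tax, AuditLog audit) {
            this.prices = prices;
            this.tax = tax;
            this.audit = audit;
        }

        void bill(String sku) {
            int net = prices.priceOf(sku);
            int total = net + tax.taxOn(net);
            audit.record("billed " + sku + " for " + total);
        }
    }

    @Test
    public void stubbingNoiseDwarfsTheOneInteractionUnderTest() {
        PriceLookup prices = mock(PriceLookup.class);
        TaxRule tax = mock(TaxRule.class);
        AuditLog audit = mock(AuditLog.class);

        // Lines of stubbing just to get the object under test running...
        when(prices.priceOf("book")).thenReturn(100);
        when(tax.taxOn(100)).thenReturn(20);

        new Biller(prices, tax, audit).bill("book");

        // ...for a single verified interaction.
        verify(audit).record("billed book for 120");
    }
}
```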

Please don’t confuse my stance with that of “classical state based testing purists” who don’t believe in mocking at all. Though Martin Fowler describes two apparently opposing styles of testing, Classical and Mockist Testing, I don’t see them as mutually exclusive options but rather as two ends of a sliding scale. I typically gauge risk as a factor in determining which approach to take. I believe in using the right tool for the right job (easy to say, harder to do right), and in using mocking to give me the best balance of feedback when things break, enough confidence to refactor with safety, and a tool for driving out a better design.
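
As a rough illustration of that sliding scale, here is a small sketch using JUnit 4 and Mockito; the ShoppingCart and PriceList types are invented for this example rather than taken from any real codebase. The same object is tested once in the classical, state-based style (hand-rolled stub, assert on state) and once in the mockist, interaction-based style (verify the conversation):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class SlidingScaleTest {

    // Invented types, purely for illustration.
    interface PriceList { int priceOf(String sku); }

    static class ShoppingCart {
        private final PriceList prices;
        private int total;

        ShoppingCart(PriceList prices) { this.prices = prices; }

        void add(String sku) { total += prices.priceOf(sku); }

        int total() { return total; }
    }

    // Classical, state-based end of the scale: a simple hand-rolled stub,
    // and an assertion on the resulting state of the object under test.
    @Test
    public void classicalStateBasedStyle() {
        PriceList tenForEverything = new PriceList() {
            public int priceOf(String sku) { return 10; }
        };
        ShoppingCart cart = new ShoppingCart(tenForEverything);

        cart.add("book");
        cart.add("pen");

        assertEquals(20, cart.total());
    }

    // Mockist, interaction-based end of the scale: verify that the cart
    // asked its collaborator the right question.
    @Test
    public void mockistInteractionStyle() {
        PriceList prices = mock(PriceList.class);
        when(prices.priceOf("book")).thenReturn(10);
        ShoppingCart cart = new ShoppingCart(prices);

        cart.add("book");

        verify(prices).priceOf("book");
    }
}
```

Neither test is “right” on its own; which end of the scale I lean towards depends on the risk I’m trying to manage.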

Broken Glass

Image of broken glass taken from Bern@t’s flickr stream under the creative commons licence

Even though I’ve never worked with Steve or Nat, the maintainers of JMock, I believe my views align quite strongly with theirs. When I used to be on the JMock mailing list, it fascinated me to see how many of their responses focused on better programming techniques rather than caving in to demands for new features. JMock is highly opinionated software, and I agree with them that you don’t want to make some things too easy, particularly those tasks that lend themselves to poor design.

Tests that are difficult to write, maintain, or understand are often a huge symptom of code that is equally poorly designed. That’s why, even though Mockito is a welcome addition to the testing toolkit, I’m equally frightened by its ability to silence the screams of test code exercising poorly designed production code. The dogma of testing a single aspect of every class in pure isolation often leads to excessively brittle, noisy and hard-to-maintain test suites. Worse yet, because the interactions between objects have effectively been declared frozen by handfuls of micro-tests, any redesign incurs the additional effort of rewriting all the “mock” tests. Thank goodness we sometimes have acceptance tests. Since the first version of something is often not the best possible design, writing excessively fine-grained tests puts a much larger burden on the developers who need to refine it in the future.

So what is a better approach?
Interaction testing is a great technique, and mocks are a welcome addition to a developer’s toolkit. Unfortunately, it’s difficult to master all its aspects, including the syntax of a mocking framework, listening to the tests, and responding appropriately with refactoring or redesign. I reserve the use of mocks for three distinct purposes:

  1. Exploring potential interactions – I like to use mocks to help me understand what my consumers are trying to do, and what their ideal interactions are. I try to experiment with a couple of different approaches, different names and different signatures to understand what this object should be. Mocks aren’t the only way to do this, though they’re my current preferred tool for the task.
  2. Isolating well-known boundaries – If I need to depend on external systems, it’s best to isolate them from the rest of the application under a well-defined contract. For some dependencies, this contract may take some time to develop, establish, stabilise and verify (for which I prefer to use classical state-based testing). Once I am confident that the interface is unlikely to change, I’m happy to move to interaction testing for these external systems.
  3. Testing the boundaries of a group of collaborators – It often takes a group of closely collaborating objects to provide any useful behaviour in a system. I err towards using classical testing for the objects within that closely collaborating set, and defer to mocking at the boundary of these collaborators (see the sketch after this list).
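
To ground that third point, here is a minimal sketch (again JUnit 4 and Mockito, with Checkout, PricingPolicy and PaymentGateway all invented for illustration): the closely collaborating objects are exercised as real objects and checked classically, while only the external payment system at the boundary is mocked.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class CheckoutBoundaryTest {

    // Invented external boundary: the only thing mocked in this test.
    interface PaymentGateway { void charge(int amountInCents); }

    // Invented collaborators exercised as real objects.
    static class PricingPolicy {
        int discounted(int amountInCents) { return amountInCents * 90 / 100; }
    }

    static class Checkout {
        private final PricingPolicy policy;
        private final PaymentGateway gateway;

        Checkout(PricingPolicy policy, PaymentGateway gateway) {
            this.policy = policy;
            this.gateway = gateway;
        }

        int pay(int amountInCents) {
            int toCharge = policy.discounted(amountInCents);
            gateway.charge(toCharge);
            return toCharge;
        }
    }

    @Test
    public void chargesTheDiscountedAmountThroughTheBoundary() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        Checkout checkout = new Checkout(new PricingPolicy(), gateway);

        // Checkout and PricingPolicy collaborate as real objects (classical style)...
        int charged = checkout.pay(1000);
        assertEquals(900, charged);

        // ...and only the external payment system is verified as an interaction.
        verify(gateway).charge(900);
    }
}
```

Because only the boundary interaction is pinned down, the internal collaboration between Checkout and PricingPolicy can be redesigned later without rewriting this test, as long as the behaviour at the boundary stays the same.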