The intersection of technology and leadership


Managing Ruby Development Environments

One of the principles I value is being able to set up new development environments very quickly. The Java space offers many tools for managing your environment so that each project works in its own separate space. In contrast, the .NET space suffers from an anti-pattern that often requires many installs into your GAC (Global Assembly Cache), often through a “mouse-driven”-only installer.

Fortunately the Ruby community offers a number of tools for managing both the versions of Ruby and the libraries that you use. The ones that I now reach for most often include:

  • RVM – Ruby Version Manager. Lets you install different versions of Ruby and quickly switch between them
  • Bundler – Manages the gems (libraries) that each project depends on

The ultimate acceptance test for this is whether developers can simply “check out” and go. The lead time to set up a new development environment should be very short.

Note that there is now a competing tool for managing Ruby versions called rbenv, although integrated tool support (such as in RubyMine) is only starting to come through.
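
In practice, “check out and go” means the repository carries its own environment description. A minimal sketch of what gets checked in (the project name and versions here are invented for illustration, not from a real project):

```shell
# .rvmrc – checked into the repo so RVM selects the right Ruby and gemset on cd
rvm use 1.9.3@myproject --create

# Gemfile – Bundler pins the libraries (Gemfile.lock pins exact versions)
#   source "https://rubygems.org"
#   gem "rails"

# A new developer then only needs:
#   git clone <repo> && cd myproject && bundle install
```

With both files under source control, the only machine-wide installs are RVM itself and a compiler toolchain; everything else is per-project.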

Taming the Hippo (CMS) Beast

I alluded in a previous post to our struggles dealing with the Hippo CMS platform. It wasn’t our preferred path, but a choice handed down from above. Enough said about that. It’s useful to understand a little bit about the environment in which we were using it.

I believe the pressure to choose a CMS came from a deadline that required platform decisions to be made in the organisation. At that time, the extent of what the actual product could do was unknown. Our experience working with other clients is that you should generally work out what you want to do before you pick a product, or the platform will often dictate and limit your ability to do things. My colleague Erik Dörnenburg has been writing more about this recently.

The premise of a CMS is alluring for organisations. We have content… therefore we need a content management system. The thought ensues: “Surely we can just buy one off the shelf.” Whether or not you should use a CMS is a topic for another blog post, and you can read some of Martin Fowler’s thoughts on the subject here.

We wanted to protect our client’s ability to evolve their website beyond the restrictions of their CMS, so we architected a system where content managed in a CMS would sit behind a content service, and a separate part of the stack focused on the rendering side. It looks a little like this:

[Diagram: content managed in the CMS sits behind a content service, with the rendering stack consuming that service]

The issues that we faced with HippoCMS included:

A small community based on the Java Content Repository

Hippo is based on the Java Content Repository (JCR) API, a specification for standardising the storage and access of content. Even as I write this blog, I am forced to link “JCR” and “Java Content Repository” to the Wikipedia page, because I spent three minutes trying to find the official Java site (it looks like the official site is hosted by Adobe here). If the community around the standard is small, the communities surrounding the products are naturally going to be smaller still. Unlike users of Spring, who can put a stack trace into Google and generally find how someone else got past the problem, here a search will generally only show the source code of the file. I’d be happy living on the bleeding edge… if the technology were actually pretty decent.

Unfortunately a lot of the gripes I write about stem from the fact that the product itself is based on the JCR specification. Some simple examples include:

  • A proprietary query syntax – You query the JCR with an XPath-like query language. It’s actually less useful than XPath: it doesn’t implement all the functions available in XPath and it has some weird quirks
  • Connecting to the repository via only two mechanisms – Either RMI (yuck! and inefficient) or in-memory. This automatically limits your deployment options to the application container model. Forget fast feedback loops; every change means restarting a Java process and then retesting.
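
For illustration, the XPath-like dialect looks something like this (a hedged sketch; the node type and property names are invented, not from our actual project):

```
//element(*, example:article)[jcr:contains(@example:title, 'hippo')]
    order by @example:date descending
```

It reads like XPath, but only a subset of the function library is available, which is where the quirks bite.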

Hippo CMS UI generates a huge number of exceptions
One reason Hippo was selected was for the perceived separability of the CMS editor and the website component (referred to as the Hippo Site Toolkit). We didn’t want to tightly couple the publishing/rendering side to the same technology stack as the underlying CMS. Hippo allows you to do this by having separately deployed artefacts in the application container. Unfortunately, the Wicket-based UI (maybe because we used it without the Hippo Site Toolkit) generates exceptions like nobody’s business. We spent some effort trying to understand the exceptions and fix them, but there were frankly too many to mention.

Poor taxonomy plugin implementation
One of the reasons Hippo was allegedly picked was for the taxonomy plugin. Unfortunately this gave us a world of pain, both in usability and in terms of maintaining it. The specific maintenance issues we faced included multi-language support (it didn’t allow that) and simply getting it deployed without issues.

CMS UI lack of responsiveness
Our client’s usage of the site wasn’t very big: fewer than 300 articles and, at the peak, about 10 concurrent users. Let’s just say that even with three people, the UI was sluggish and unresponsive. We tried some of the suggestions on this page, but it’s a bit of a worry that, with the standard configuration, it can’t responsively support more than one user out of the box.

Configuration inside the JCR
Most of our projects take a pretty standard approach to implementing Continuous Delivery. We want to source-control configuration easily, and script deployments so that releases into different environments are repeatable, reliable, rapid and consistent. Unfortunately a lot of the configuration for a new document type involves “switching a flag to capture changes”, playing around with the UI for the new document type, and then exporting a bunch of XML that you must then load with some very proprietary APIs.

After several iterations, we were able to streamline this process as best we could, but that took some time (I’m guessing about two weeks of one developer working full time).

Lack of testability
We spent quite a bit of effort trying to work out the best automated testing strategy. Some of the developers first tried replicating the JCR structure the UI would create, but I pointed out that this would give us no feedback if Hippo changed the way it did its mapping. We ended up with some integration tests that drove the Wicket-based UI (with a wonderfully consistent but horrid set of generated IDs) and then poked our content service for expected results.

A pair of developers worked out a great strategy for dealing with this, working out the dynamically generated APIs and driving the UI via Selenium Webdriver to generate the data we would query inside the proprietary XML-based data store.

Lack of real clustering
In “enterprise” mode, you can opt to pay for clustering support, although it’s a little bit strange because you aren’t recommended to upgrade a single node within a cluster while other nodes are connected to the same datastore (in case the shared state is corrupted). This makes seamless upgrades really difficult without a complicated DB mirror/restore switcheroo. We ended up architecting the system for degraded service, using caches on the content service as a compromise to the “clustered” CMS.
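
The degraded-service compromise can be sketched as a cache-backed read path (a minimal illustration; the class and method names are invented, not our actual code): serve fresh content while the CMS responds, and fall back to the last cached copy when it does not.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a content service that degrades gracefully: reads go to the
// CMS, but the last good copy of each document is kept in a cache so the
// service can keep serving content while a CMS node is down or upgrading.
class ContentService {
    interface Repository {
        String fetch(String id) throws Exception;
    }

    private final Repository cms;
    private final Map<String, String> cache = new HashMap<>();

    ContentService(Repository cms) {
        this.cms = cms;
    }

    String content(String id) {
        try {
            String fresh = cms.fetch(id);
            cache.put(id, fresh); // refresh the fallback copy on every good read
            return fresh;
        } catch (Exception cmsUnavailable) {
            return cache.get(id); // degraded: serve the last known good copy
        }
    }
}
```

The trade-off is staleness during an outage rather than an outage of your own, which for mostly-read content is usually the right call.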

Summary
As much as I wish the Hippo group success, I think many of the problems stem from its inherent basis on the JCR. I do think there are a couple more things that could be done to make life easier for developers, including increasing the amount of documentation and thinking about how to better streamline automated, frequent deployments around the CMS.

Cheat Sheet for Javascript Testing with Jasmine

Jasmine is the default unit testing framework that I use when writing JavaScript; however, my poor brain can’t always remember all the different ways of getting things to work. There are quite a number of cheat sheets out on the internet, but they don’t quite cover all the examples I need. Here are my contributions demonstrating some of the common uses.

describe("jasmine", function () {

    describe("basic invocations", function () {

        var SampleDependency = function () {
            return {
                usefulMethod:function (firstParameter, secondParameter) {
                },
                anotherUsefulMethod:function () {
                }
            };
        };

        var Consumer = function (dependency) {
            return {
                run:function () {
                    dependency.usefulMethod("first", "second");
                },
                runSecondMethod:function () {
                    dependency.anotherUsefulMethod();
                },
                runWithRequiredCallback:function(callback) {
                    callback("an argument");
                }
            };
        };

        it("should spy on an existing function", function () {
            // given
            var dependency = new SampleDependency();
            spyOn(dependency, "usefulMethod");
            var consumer = new Consumer(dependency);

            // when
            consumer.run();

            // then
            expect(dependency.usefulMethod).toHaveBeenCalled();
            expect(dependency.usefulMethod).toHaveBeenCalledWith("first", "second");
            expect(dependency.usefulMethod).toHaveBeenCalledWith(jasmine.any(String), jasmine.any(String));
            expect(dependency.usefulMethod.callCount).toEqual(1);
            expect(dependency.usefulMethod.mostRecentCall.args).toEqual(["first", "second"]);
        });

        it("should demonstrate resetting of the spy", function () {
            // given
            var dependency = new SampleDependency();
            spyOn(dependency, "usefulMethod");
            dependency.usefulMethod();
            dependency.usefulMethod.reset();

            // when
            dependency.usefulMethod();

            // then
            expect(dependency.usefulMethod).toHaveBeenCalled();
            expect(dependency.usefulMethod.callCount).toEqual(1);
        });

        it("should demonstrate creating a spy object with prepopulated methods", function () {
            // given
            var dependency = jasmine.createSpyObj("dependency", ["usefulMethod", "anotherUsefulMethod"]);
            var consumer = new Consumer(dependency);

            // when
            consumer.run();
            consumer.runSecondMethod();

            // then
            expect(dependency.usefulMethod).toHaveBeenCalled();
            expect(dependency.anotherUsefulMethod).toHaveBeenCalled();
        });

        it("should demonstrate creating a stub object", function () {
            // given
            var dependency = jasmine.createSpyObj("dependency", ["usefulMethod", "anotherUsefulMethod"]);
            var consumer = new Consumer(dependency);
            var stubbedCallback = jasmine.createSpy("stub callback");


            // when
            consumer.runWithRequiredCallback(stubbedCallback);

            // then
            expect(stubbedCallback).toHaveBeenCalled();
            expect(stubbedCallback).toHaveBeenCalledWith("an argument");
        });
    });


    describe("returning a value", function () {
        var Dependency = function () {
            return {
                getMultiplier:function () {
                    return 10;
                }
            };
        };

        var Consumer = function (dependency) {
            return {
                calculateSomethingWithMultiplier:function (number) {
                    return number * dependency.getMultiplier();
                }
            };
        };

        it('should be correct', function () {
            // given
            var dependency = new Dependency();
            spyOn(dependency, "getMultiplier").andReturn(40);
            var consumer = new Consumer(dependency);

            // when
            var result = consumer.calculateSomethingWithMultiplier(3);

            // then
            expect(result).toEqual(120);
        });
    });


    it("should demonstrate creating a stub that returns a value", function () {
        // given
        var stub = jasmine.createSpy("stub").andReturn(42);

        // when
        var result = stub();

        // then
        expect(result).toEqual(42);
    });

    describe("creating a stub that calls a fake", function () {
        var Dependency = function () {
            return {
                request:function (callback) {
                }
            };
        };
        var Consumer = function (dependency) {
            var capturedValue = "";
            return {
                hardAtWork:function () {
                    dependency.request(function (value) {
                        capturedValue = value;
                    });
                },
                getCapturedValue:function () {
                    return capturedValue;
                }
            };
        };

        it('should demonstrate creating a stub function that does something interesting', function () {
            // given
            var dependency = new Dependency();
            spyOn(dependency, "request").andCallFake(function (callback) {
                callback("Controlled return value from callback");
            });
            var consumer = new Consumer(dependency);

            // when
            consumer.hardAtWork();

            // then
            expect(consumer.getCapturedValue()).toEqual("Controlled return value from callback");
        });

    });
});

RequireJS is the Spring Framework of Javascript

I’ve been working on setting up the infrastructure for a mostly JavaScript-based project, and we’ve been putting RequireJS into the codebase to help us manage file dependencies instead of having to declare them within the page that uses them. As a concept, RequireJS helps us keep different JavaScript modules apart in different files and lets us assemble them.

RequireJS works by declaring dependencies and having the framework pull them in when you need them.

define(["aDependency"], function(theDependency) {
  // now I can do something with theDependency
  theDependency.aMethodOnIt();
})

This is pretty much how Spring works, but the issue I have is that RequireJS manages the lifecycle of the JavaScript objects, so when you want to pass in a substitute for a test, you end up in a dilemma.

define(["aDependency"], function(theDependency) { // how do I inject a different instance?
  // now I can do something with theDependency
  theDependency.aMethodOnIt();
})

Unsurprisingly, a number of people have written libraries such as testr which allow you to override RequireJS to inject different versions. Although these are very reasonable approaches, I find them a little bit smelly, as you’re effectively patching a library you don’t own. The Ruby community knows the dangers of monkey patching too much, particularly in the parts of a codebase you cannot control, and the potential issues you face when you try to upgrade.

Our current approach involves using RequireJS to manage the file/name dependencies, but for us to write javascript that allows us to control the instances of the objects that we want. Here’s an example:

dependency.js

define([], function () {
    return function () {
        return {
            doSomeWork:function () {
            }
        };
    };
});

consumer.js

define([], function () {
    return function (aDependency) {
        var dependency = aDependency;
        return {
            start:function () {
                dependency.doSomeWork();
            }
        };
    };
});

And then we control the lifecycle of the components and instances in the application using the following code.

main.js

define(["consumer", "dependency"], function (Consumer, Dependency) {
    var dependency = Dependency();
    var consumer = Consumer(dependency);
    consumer.start();
});

And our jasmine tests get to look like this:

requirejs = require('requirejs');

describe("consumer", function() {
    it("should ensure the dependency does some work", function() {
        // given
        var dependency = jasmine.createSpyObj("dependency", ["doSomeWork"]);
        var consumer = requirejs("consumer")(dependency);

        // when
        consumer.start();

        // then
        expect(dependency.doSomeWork).toHaveBeenCalled();
    });
});

This approach has been working out well, forcing us to manage our dependencies instead of the global hell that JavaScript global functions can quickly become. Thoughts? Please leave a comment.

Looking back at a year with a client

Over the last twelve months, I’ve worked with a client to rebuild a digital platform and the team to deliver it. It’s now time for us to leave, and a perfect time to reflect on how things went. The digital platform rebuild was part of a greater transformation programme that also involved the entire business changing alongside it, at almost all levels of the organisation. The programme had also outlined, before we arrived, a complete change in all technology platforms (CRM, CMS, website), to be rebuilt for a more integrated and holistic service offering.

Our part in this programme turned into building the new web digital platform, working against a very high-level roadmap and a hard marketing deadline. We ended up building the site using Ruby on Rails, serving content driven by a third-party decisioning platform (much like Amazon recommendations), guided by the business vision of better-tailored content for end users. We didn’t have much input into the final choice of several products. I’m very proud of the end result, particularly given the tense and short time-framed environment in which we worked. Here are some examples of constraints we worked with:

  • 4 Product Owners over the span of 11 months – From January this year through to the end of October, the business was onto its fourth Product Owner for the digital platform. Building a consistent product is nigh impossible when the product keeps changing hands, and trying to bridge work from one Product Owner to the next was definitely a challenge.
  • Constant churn in the business – The 4 Product Owners is one instance, but we would often be working with people in the business to work out what should be done, only to find that the following week they were no longer with the business.
  • 3 design agencies engaged, resulting in “reskinning” approved by the board before the 6-month public launch – We saw several “design changes” done by firms well stocked with people capable of generating beautifully rendered PDFs that were then signed off. However, these would often imply new functional work, or be impractical for the web medium.
  • Marketing deadlines announced well before the development team had been engaged – A common pattern in the business was marketing launching a press release announcing a date, well before the people involved in delivering it were made aware, or even consulted on it.
  • PM explosion – At one point, it felt like we had ever more Project Managers in the group planning out work, with budgets and timelines that would be approved well before the development team had been approached.

Even with these constraints, we’ve been able to deploy to production 37 times in the last three months, and more since the original MVP go-live in July. Part of what I’m particularly proud of is the team, where we were able to achieve the following:

  • Building an Evolvable Architecture – We questioned the original choice of, and need for, a CMS, but with the constraint that a decision had already been made on buying these tools, we architected a solution that would hide the implementation details of the CMS behind a content service. With our ThoughtWorks experience of the pain of CMSes that fall short of business needs, we wanted something that would not constrain what the business could achieve (hence the decoupling). We even had a chance to prove this out when the business requirements quickly hit the limit of the CMS’s built-in categorisation module.
  • Responding to Change – The business roadmap seemed to change on a daily basis, and our team was able to quickly tack to accommodate these changes. We changed the team structure as the team size increased, changed it again as we went live, and again as people in the business changed. Whilst our process felt consistent to us, it would look nothing like textbook XP, Scrum or Kanban.
  • Improving the Process – Our team constantly tried to change the process, not only internally to the development team, but also by helping people in the business find ways of improving their own way of working. Progress was slow, as change that had started would falter as people left. Retrospectives were a key tool, not least for the ability they gave the team to feel empowered to recommend and pursue the improvements they saw fit.
  • Setting an example of transparency – Showcases are key to the business, and we offered fortnightly showcases of the features built, open to the entire organisation. Huge numbers of people came along, and I found it fascinating that it was one place where people had an opportunity to talk across silos. This sometimes slowed down our ability to show what we had actually done, but I felt it exposed missing communication structures that people still needed.

At a technical level, I’m really proud of some of the principles I wanted to achieve at the start and that the team lived throughout (I’d love to hear what their experience is). Some of these include:

  • Fast developer setup – Getting started on each new machine should be fast without complicated installation processes
  • Developers rotating through operations – There’s nothing like supporting the code you wrote to help developers understand the importance of logging, test cases that are missed and just experiencing what production support is like
  • DevOps culture – Everyone on the team is capable of navigating puppet, knowing where to look for configuration changes and ensuring that applications are configurable enough to be deployed without special builds across environments.
  • Continuous Delivery – Our second Product Owner (the first transitioned out the day we went live) actually asked us to release less often (i.e. making go-live a business decision) so that they could work with the rest of the business to ensure their non-IT dependencies were in place.
  • Devolved Authority to Feature Leads – I blogged previously about Feature Leads who could help shape the technical solution and drive the knowledge for the project.
  • Metrics Driven Requirements – Though not completely successful, we were able to stop the business from implementing some features by showing them production metrics. In one case, we avoided building a complex search algorithm by showing that we could achieve the same result by adding to a list of search synonyms.
  • Everyone grows – If I look back at the project, I think everyone on the team has experienced and grown a significant amount in different ways. I think we struck a good balance between being able to work towards individuals goals and find ways they could help the project at the same time.

Other things I’m particularly proud of the team for:

  • Taming the Hippo – Worthy of its own post, Hippo CMS has been one of the least developer-friendly tools I’ve had to deal with for some time. The team managed to work out how to run effective functional tests around its poor UI, as well as how to deploy and upgrade the beast in different environments without the 12-step manual process outlined on their wiki.
  • Rapid team ramp-up – Management wanted the entire team to start at the same time. Even after pushing back, we ended up with a very aggressive ramp-up, and we still managed to deliver well.
  • Diverse but co-operative – We had something like 17 people of 14 different nationalities, and it’s one of the strongest teams I’ve seen, able to work through their differences and forge ahead.

Things that I would like to be different:

  • Find a way to code a lot more – Due to the circumstances, many elements drew me away from coding. At best, I can remember pairing with someone for at most two days a week (for a short time) and I would like to find a way to do that more.
  • Implement more validated learning – Although dependent on a Product Owner willing to do this, I would have liked to build, measure and experiment a lot more.
  • Have a stronger relationship with decision-makers with authority – I believe development teams work best when they are connected to people who can make decisions, not just organisational proxies who provide answers. Unfortunately most decision-making cascaded very far up the organisation, and due to the constant flux and organisational change, this wasn’t possible in the timeframe. I’m hopeful that as the business matures and more permanent people find their place, this will become more possible.

Goto Aarhus 2012

This year was my first time both attending and presenting at Goto Aarhus. Over the years, many of my colleagues have said that it’s one of the best conferences, covering topics in lots of different areas. This year focused on topics such as NoSQL, Big Data, Humans at Work, Javascript, Continuous Delivery, Cloud and many more.

Two of the best presentations I attended, both for content and delivery, were by Sam Newman and Jez Humble, author of Continuous Delivery (disclaimer: they are my colleagues, after all). What I enjoyed about their talks was the real-world examples and important advice, as well as the delivery itself. Getting that balance right is really difficult to do.

I also really liked the keynote from Dirk Duellmann of CERN, who talked about the big-data challenges they have storing information. Although it took a while to get to the meaty data-storage details, I think they have a very interesting outlook, with architectural choices such as the view that they cannot design for today’s hardware or devices, as these will be obsolete as time goes on. Being able to retrieve historical information is important, as is the ability to store all of the data in a format others can read. They have realised the scale of the work they are doing, so they focus on doing one thing well (storing data and making it available) and work with other groups to do the analysis.

There were loads of highlights such as meeting many new people and connecting with old ones as well as some interesting side conversations.

I gave my talk (above) and was very happy with the results. The Trifork team behind the conference are awesome at getting feedback to presenters quickly. The conference uses a simple voting system for feedback (red, yellow, green) and they keep track of the number of walk-outs. I ended up with 90 green, 26 yellow, 1 red and only 2 walk-outs. I have no idea how that compares with other speakers, but I’m pretty happy with it. What I also appreciated were the people who came up afterwards to talk to me about how the topic is really important and what they got out of it (affirmation they are doing the right thing, new ideas to take back, new books to read, more things to focus on, or a good idea of how to prepare as they step into the role).

Spike Showcase

A key concern of mine as a Technical Leader is ensuring that knowledge is distributed across a team. Working on a large team makes that a challenge, because so many changes happen at the same time, and you’re also dealing with multiple learning and communication styles, so no single technique works for everyone. Due to my passion for learning, I try to keep this in mind and ensure we use multiple channels for getting information to everyone.

One practice we’ve been experimenting with on our project is one we call “The Spike Showcase”. Spikes come from the Extreme Programming methodology and represent an investigation into an area the team doesn’t have knowledge of. We create one when we need to generate options for the business, or when we are dealing with a new integration point and want to assess its quality, testability, or the best designs. That knowledge is normally taken on by a pair, and remains dangerously siloed on a fairly large team.



The pair normally writes up their outcome on the wiki (for long-term reference), and they have an area where they can check in their code. Yet documentation is not very engaging, and I know that most people on the team won’t look at the code unless they are going to work in that area, because they are busy doing other things. Pair programming solves this problem to a degree, but on a large team it would take a long time to distribute the information.

Our solution has been to hold a “Spike Showcase”, where the pair who completed the spike holds a session with the entire development team, talking about what the problem space is, what they tried, and running through the design and solution. Depending on the type of problem being solved, the pair will use a whiteboard to diagram the logical interactions, or show some screenshots of what they were trying to achieve in a business sense, and then they will demonstrate the spike solution in action before finally showing the code and how it all works. We then run some questions and answers with the team (allowing people to pull knowledge) before finishing up.

We have run several “Spike Showcases” now and I feel they are invaluable to ensuring a large team keeps abreast of various solutions going on.

Reflections on Agile 2012

Another year, another agile conference. It’s time for reflecting on the conference and uncovering lessons learned. Dallas, Texas hosted this year’s Agile Conference. More accurately, the Gaylord Texan Resort in Grapevine hosted this year’s Agile Conference. Loved by many at the conference (notably less so by the Europeans), the resort reminds me of the Eden Project crossed with a weird biosphere (see picture below), self-contained and fully air-conditioned. Although maybe that wasn’t such a bad thing, given the West Nile virus outbreak in Dallas.

Needless to say that I stepped out quite a bit to try to get some fresh, if not, refreshingly humid air.

Onto the conference. It was very well organised, very well run, and the organisers even responded rapidly to feedback (such as moving rooms when demand proved too much for some of the anticipated sessions). Food came out very promptly in the different breaks; we didn’t have to queue too long and the variety was pretty good. The only breakdown was probably the Tuesday lunchtime, where it wasn’t clear we had to get our own food, and the limited number of on-site restaurants in our self-enclosed bubble world proved a bit of a tight squeeze for the schedule.

The people at the conference seemed to be a bit of a mix. Mainly lots of consultants like myself sharing their experiences, but as one person noted, an extraordinary number of agile coaches all apparently looking for work. On the other extreme there seemed to be lots of companies adopting agile and lots of people selling tools and training to help them.

Lots of parallel tracks meant lots of choice for many people, but I often found it hard to find things that worked for me. I’m less interested in “enterprise agile adoption”, and more interested in the practices pushing the boundaries, or the deep insight offered by people. The few technical sessions I went to seemed to be aimed at a more introductory audience. I particularly avoided any of the “do this with Scrum” or “do this with Kanban” sessions, as these appeared to be pushing a particular method.

In terms of keynotes, I thought they did a great job of assembling some diverse and interesting sessions. Although Bob Sutton (the No A**hole Rule author) felt like he hadn’t done much preparation for his keynote, judging from the text-heavy slides that jumped along at different paces, he had some good anecdotes and stories to share. My biggest takeaway from that session was thinking about removing practices just as much as adding them, something that I think I do implicitly but should try to do more explicitly. The other keynotes were pretty inspiring as well, with Dr. Sunita Maheshwari behind Telerad talking about her accidental experiment moving into remote radiology to support the night-shift needs of hospitals in the US, and the interesting growth of their business. The other really inspirational keynote was by Joe Justice, the guy behind the amazing Wikispeed project, taking sets of agile practices and principles into the car-making industry. I felt he really knew his stuff; it’s amazing how you can tell when someone really understands the values and is trying to live them in different ways, translating them into a different world. Very cool stuff that you should check out.

In terms of other workshop sessions, I left halfway through many of them, as the ideas were either too slow-moving or not at all interesting (such as one on Agile Enterprise Architecture that spent 30 minutes trying to go back to the age-old debate of defining Enterprise Architecture).

Two of my favourite sessions: one was by Linda Rising, who gave a very heartfelt and personal Q&A session that left many people in tears. Her stories are always very personal, and I really admire her ability to look beyond someone’s words and uncover the true question they are asking, usually with an insightful answer as well! The other was listening to the great work that Luke Hohmann of Innovation Games has been doing with the San Jose government, changing the way they make decisions about where the money goes through the use of games and play. Very awesome stuff.

I had my session in the last possible slot on the Thursday, with a number of well-known people in competing slots, such as Jeff Sutherland, Esther Derby and Diana Larsen. I’m very happy with the turnout, as we had a lot of fun playing games from the Systems Thinking Playbook, including a number of insightful conversations about systems thinking concepts and how they apply to our working life. One of my favourite exercises (Harvest), which demonstrates the Tragedy of the Commons archetype, played out its course, and we finished in just three years (iterations), only due to a constraint I added early in the game. I love this exercise for its potential for variation and for the insightful conversations about how it applies to agile teams, organisations and functions.

You often can’t come away from conferences without new references, so here’s the list of books and web resources I noted down (though my summary is without actually having read them, so YMMV):


© 2024 patkua@work
