Category: Testing

Automated Tests for Asynchronous Processes

It’s been a while since I’ve worked on a server-side application that had asynchronous behaviour that wasn’t already an event-driven system. Asynchronous behaviour is always an interesting challenge to design and test. In general, asynchronous behaviour should not be hard to unit test – after all, the behaviour of an action shouldn’t necessarily be coupled temporally (see forms of coupling).

TIP: If you are finding the need for async testing in your unit tests, you’re probably doing something wrong and need to redesign your code to decouple these concerns.

If your testing strategy only includes unit testing, you will miss a whole bunch of behaviours that are often only caught at higher levels of testing such as integration, functional or system tests – which is where I need asynchronous testing.

Asynchronous testing, conceptually, is actually pretty easy. Like synchronous testing, you take an action and then look for a desired result. However, unlike synchronous testing, your test cannot guarantee that the action has completed before you check for the side effect or result.

There are generally two approaches to testing asynchronous behaviour:

  1. Remove the asynchronous behaviour
  2. Poll until you have the desired state

Remove the asynchronous behaviour

I used this approach when TDD-ing a thick client application many years ago, when writing applications in Swing was still a common approach. Doing this required isolating the action-invoking behaviour into a single place so that, instead of occurring in a different thread, it would, during the testing process, occur in the same thread as the test. I even gave a presentation on it in 2006, and wrote this cheatsheet describing the process.

This approach demands discipline in the design, with the toggling of this behaviour isolated in a single place.
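
A minimal sketch of the idea, assuming a hypothetical TaskRunner seam (the interface and class names are illustrative): production wiring hands work off to a background thread, while the test wiring runs it in the calling thread so any side effect is visible as soon as the call returns.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Seam that isolates where asynchronous work is kicked off.
interface TaskRunner {
    void run(Runnable task);
}

// Production implementation: hands the work off to another thread.
class BackgroundTaskRunner implements TaskRunner {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void run(Runnable task) {
        executor.execute(task);
    }
}

// Test implementation: runs the task in the test's own thread,
// so the side effect has already happened by the time run() returns.
class SameThreadTaskRunner implements TaskRunner {
    public void run(Runnable task) {
        task.run();
    }
}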

Poll until you have the desired state

Polling is a much more common approach to this problem, however it brings with it the usual problems of waiting and timeouts. Waiting too long increases your overall test time and extends the feedback loop. Polling too frequently might also be quite costly depending on the operation involved (e.g. hammering some integration point unnecessarily).

Timeouts are another curse of asynchronous behaviour, because you don’t really know when an action is going to complete, yet you don’t want a test running forever.

The last time I had to do this, we ended up writing our own polling and timeout hook which, while relatively simple, is now available as a library. Fortunately other people in Java-land have also encountered this problem and contributed a library, Awaitility, to help make testing this easier.
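
For comparison, a hand-rolled polling helper usually ends up looking something like this sketch (a hypothetical utility, not part of Awaitility):

import java.util.concurrent.Callable;

class Poller {
    // Checks the condition at the given interval until it returns true,
    // or fails once the timeout has elapsed.
    static void waitUntil(Callable<Boolean> condition, long timeoutMillis, long pollIntervalMillis)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.call()) {
                return;
            }
            Thread.sleep(pollIntervalMillis);
        }
        throw new AssertionError("Condition not met within " + timeoutMillis + "ms");
    }
}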

Here is a simple test that demonstrates how easy the library can make testing asynchronous behaviour:

package com.thekua.spikes.aysnc.testing;

import com.thekua.spikes.aysnc.testing.FileGenerator;
import org.junit.Before;
import org.junit.Test;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.Matchers.startsWith;
import static org.junit.Assert.assertThat;

public class FileGeneratorTest {

    private static final String RESULT_FILE = "target/test/resultFile.txt";
    private static final String STEP_1_LOG = "target/test/step1.log";
    private static final String STEP_2_LOG = "target/test/step2.log";
    private static final String STEP_3_LOG = "target/test/step3.log";

    private static final List<String> FILES_TO_CLEAN_UP = Arrays.asList(STEP_1_LOG, STEP_2_LOG, STEP_3_LOG, RESULT_FILE);


    @Before
    public void setUp() {
        for (String fileToCleanUp : FILES_TO_CLEAN_UP) {
            File file = new File(fileToCleanUp);
            if (file.exists()) {
                file.delete();
            }
        }
    }


    @Test
    public void shouldWaitForAFileToBeCreated() throws Exception {
        // Given I have an async process to run
        String expectedFile = RESULT_FILE;

        List<FileGenerator> fileGenerators = Arrays.asList(
                new FileGenerator(STEP_1_LOG, 1, "Step 1 is complete"),
                new FileGenerator(STEP_2_LOG, 3, "Step 2 is complete"),
                new FileGenerator(STEP_3_LOG, 4, "Step 3 is complete"),
                new FileGenerator(expectedFile, 7, "Process is now complete")
        );

        // when it is busy doing its work
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (final FileGenerator fileGenerator : fileGenerators) {
            executorService.execute(new Runnable() {
                public void run() {
                    fileGenerator.generate();
                }
            });
        }

        // then I get some log outputs
        await().atMost(2, SECONDS).until(testFileFound(STEP_1_LOG));
        await().until(testFileFound(STEP_2_LOG));
        await().until(testFileFound(STEP_3_LOG));

        // and I should have my final result with the output I expect
        await().atMost(10, SECONDS).until(testFileFound(expectedFile));
        String fileContents = readFile(expectedFile);
        assertThat(fileContents, startsWith("Process"));

        // Cleanup
        executorService.shutdown();
    }

    private String readFile(String expectedFile) throws IOException {
        return new String(Files.readAllBytes(Paths.get(expectedFile)));

    }


    private Callable<Boolean> testFileFound(final String file) {
        return new Callable<Boolean>() {
            public Boolean call() throws Exception {
                return new File(file).exists();
            }
        };
    }
}

You can explore the full demo code on this public git repository.

The ThoughtWorks Anthology, Volume 2 Released

I’m pretty excited to announce the release of The ThoughtWorks Anthology, Volume 2, a varied collection of essays in the field of software development.

The ThoughtWorks Anthology, Volume 2

Alistair Jones and I contributed a chapter titled “Extreme Performance Testing” that I have talked about in the past. In this chapter, we discuss how we apply our agile experience (particularly Extreme Programming) to the field of performance testing. We describe techniques and approaches for faster feedback that you can immediately apply to performance testing.

The Karma Test Runner’s dreaded Script Error

I have been using the Karma Test Runner for JavaScript testing and like using PhantomJS as the test browser, as it runs headless and is super fast. However, every so often I encounter the dreaded line:

Error: Script error

The error often looks like this in the console:

Error: Script Error

Since I often work in small cycles of test-code-commit (or just code-commit), my solution is usually to roll back and try again. This time I thought I’d spend some time working out how to get better diagnostics. A bit of digging around suggests the simplest way is to switch to a browser that gives you a view of the JavaScript problem.

You do this by changing your Karma configuration. In my karma.conf.js file I would tweak the browsers value as below:

Karma.conf.js for different browsers

I then run the tests again and look for errors in Chrome’s built-in JavaScript console:

Missing resource

Voila! An HTTP Not Found (404) for a resource I had declared in one of my require scripts. In this example I intentionally added it to cause that error, but the technique is useful for when you just can’t work out why something is not working.

Agile Testing Days Summary

My last speaking conference for the year was the Agile Testing Days, held in Potsdam on the outskirts of Berlin. This was its fourth year of running, and it attracts a lot of the big names in agile testing circles, as one would expect from the name. For me, as a test-infected developer, it was fascinating to see what concerns the testing audience, and I noticed many common themes: whether or not testing is important, the difference between an agile tester and a normal tester, and of course a focus on collaboration and working with other roles.

They had three keynotes a day, a pretty overwhelming number considering that there were also multiple tracks and sessions. I don’t think I would do much justice trying to summarise them all, but I’ll share a number of highlights that I took away. Jurgen Appelo spoke about the stories behind his books. He’s an entertaining speaker, very well prepared, and I very much agree with his “all models are broken but some are useful” approach of collecting different software methodologies and giving lots of different things a go. Just as the community continues to expand beyond software development, his focus is also pointing upwards into the upper hierarchies of management – to those with the money, budget and decision-making authority. He’s invented something he calls the “management workout”, although I shudder at this after seeing it associated with (read: abused by) a very hierarchical organisation very much into command and control. Ask me about it one day if you see me in person.

I also appreciated Gojko’s keynote, which challenged existing thinking about collecting information and metrics, arguing that all we are doing is improving the process of software development, not necessarily the product. His argument feels very similar to the lean startup movement: why bother putting quality around features we aren’t even sure anyone finds useful, or that haven’t been validated? He quoted a number of clients he’s seen who threw away their automated testing suites in favour of measuring impact on business metrics and trends – using those to work out regressions. It’s an interesting approach that requires a) businesses to actually identify their real business metrics (very rare in my opinion), b) linking features to those metrics, and c) measuring them very carefully for changes. I guess this also goes along with his new book on Impact Mapping for working out where to put the effort.

He also criticised the testing quadrants, argued that we’re collecting the wrong data, pointed out that exploratory testing is still testing the process (not the product) and that most organisations are missing feedback once they go live. He also came up with his own adaptation of Maslow’s hierarchy of needs in terms of software: it starts with building deployable/functional software, which should then meet performance/security needs, then be usable, followed by useful, and finally successful. He also recommended a book new to me, called “The Learning Alliance: Systems Thinking in Human Resource Development”.

I ran my session, TDDing Javascript Front Ends, shortly after a BDD session that complemented it very well. The previous session focused on the why/what and not the how, where mine went into good depth on how you do it from a hands-on practitioner’s point of view. I had a number of good questions, and people came up after the talk with some great feedback that encouraged me to do more. The only shame was that the session was limited to 45 minutes, which isn’t quite enough for the tutorial I normally run. I look forward to people taking the online tutorial (found here) and then passing on even more feedback.

Reflections on Agile 2012

Another year, another agile conference. It’s time for reflecting on the conference and uncovering lessons learned. Dallas, Texas hosted this year’s Agile Conference – or more accurately, the Gaylord Texan Resort in Grapevine did. Loved by many at the conference (notably less so by the Europeans), the resort reminds me of the Eden Project: a weird biosphere (see picture below) that is self-contained and fully air-conditioned. Although maybe this wasn’t such a bad thing with a West Nile virus outbreak in Dallas.

Needless to say, I stepped out quite a bit to try to get some fresh, if refreshingly humid, air.

Onto the conference. It was very well organised, very well run, and the organisers even responded rapidly to feedback (such as moving rooms when demand proved too much for some of the anticipated sessions). Food came out very promptly in the different breaks; we didn’t have to queue too long and the variety was pretty good. The only breakdown was probably the Tuesday lunchtime, where it wasn’t clear we had to get our own food, and with a limited number of on-site restaurants in our self-enclosed bubble world, it proved to be a bit of a tight squeeze in the schedule.

The people at the conference seemed to be a bit of a mix: mainly lots of consultants like myself sharing their experiences but, as one person noted, also an extraordinary number of agile coaches all apparently looking for work. At the other extreme there seemed to be lots of companies adopting agile and lots of people selling tools and training to help them.

Lots of parallel tracks meant lots of choice for many people, but I often found it hard to find things that worked for me. I’m less interested in “enterprise agile adoption” and more interested in the practices pushing the boundaries, or in the deep insight offered by practitioners. The few technical sessions I went to seemed to be aimed at a more introductory audience. I particularly avoided any of the “do this with Scrum” or “do this with Kanban” sessions, as these appeared to be pushing a particular process.

In terms of keynotes, I thought they did a great job of assembling some diverse and interesting sessions. Although Bob Sutton (The No A**hole Rule author) felt like he hadn’t done much preparation for his keynote, judging by the text-heavy slides that jumped along at different paces, he had some good anecdotes and stories to share. My biggest takeaway from that session was thinking about taking away practices just as much as adding practices, something that I think I do implicitly but should try to do more explicitly. The other keynotes were pretty inspiring as well, with Dr. Sunita Maheshwari, behind Telerad, talking about her accidental experiment of moving into remote radiology to support the night-shift needs of hospitals in the US and the interesting growth of their business. The other really inspirational keynote was by Joe Justice, the guy behind the amazing Wikispeed project, which takes sets of agile practices and principles back into the car-making industry. I felt he really knew his stuff, and it’s amazing how you can tell when someone really understands the values, tries to live them in different ways and then translates them into a different world. Very cool stuff that you should check out.

In terms of other workshop sessions, I left halfway through many of them as the ideas were either moving too slowly or not at all interesting (such as one on Agile Enterprise Architecture that spent 30 minutes trying to go back to the age-old debate of defining Enterprise Architecture).

Two of my favourite sessions: one was by Linda Rising, who gave a very heartfelt and personal Q&A session that left many people in tears. Her stories are always very personal, and I really admire her ability to look beyond someone’s words and uncover the true question they are asking, usually with an insightful answer as well! The other was listening to the great work that Luke Hohmann of Innovation Games has been doing with the San Jose government, changing the way they make decisions about where the money goes through the use of games and play. Very awesome stuff.

I had my session in the last possible slot on the Thursday, up against a number of well-known people in competing slots such as Jeff Sutherland, Esther Derby and Diana Larsen. I’m very happy with the turnout, as we had a lot of fun playing games from the Systems Thinking Playbook, including a number of insightful conversations about systems thinking concepts and how they apply to our working life. One of my favourite exercises (Harvest), which demonstrates the Tragedy of the Commons archetype, played out its course and we finished in just three years (iterations), only because of a constraint I added early in the game. I love this exercise for its potential for variation and the insightful conversations about how this applies to agile teams, organisations and functions.

You often can’t come away from conferences without new references, so here’s the list of books and web resources I noted down (though obviously I note these without having actually read them, so YMMV):

Testing logging with Logback

On my current project, we’re using the Logback framework (behind SLF4J) to do logging. For some parts of our system, it was particularly important that certain information made its way into the log files, so we wanted to test that the correct output was produced. Rather than do it with interaction-based tests, I followed the pattern that I described in a previous post.

Here’s a test I might write (note that I’m writing the test in a way that actually tests the appender behaviour in this case, because my domain class doesn’t do anything special):

package com.thekua.spikes;

import org.junit.After;
import org.junit.Test;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

public class LogbackCapturingAppenderTest {
    @After
    public void cleanUp() {
        LogbackCapturingAppender.Factory.cleanUp();
    }

    @Test
    public void shouldCaptureAGivenLog() throws Exception {
        // Given
        LogbackCapturingAppender capturing = LogbackCapturingAppender.Factory.weaveInto(OurDomainWithLogger.LOG);
        OurDomainWithLogger domainClass = new OurDomainWithLogger();

        // when
        domainClass.logThis("This should be logged");

        // then
        assertThat(capturing.getCapturedLogMessage(), is("This should be logged"));
    }

    @Test
    public void shouldNotCaptureAGiveLogAfterCleanUp() throws Exception {
        // Given
        LogbackCapturingAppender capturing = LogbackCapturingAppender.Factory.weaveInto(OurDomainWithLogger.LOG);
        OurDomainWithLogger domainClass = new OurDomainWithLogger();
        domainClass.logThis("This should be logged at info");
        LogbackCapturingAppender.Factory.cleanUp();

        // when
        domainClass.logThis("This should not be logged");

        // then
        assertThat(capturing.getCapturedLogMessage(), is("This should be logged at info"));
    }
}

And the corresponding Logback appender used in tests.

package com.thekua.spikes;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.AppenderBase;

import java.util.ArrayList;
import java.util.List;

public class LogbackCapturingAppender extends AppenderBase<ILoggingEvent> {
    public static class Factory {
        private static List<LogbackCapturingAppender> ALL = new ArrayList<LogbackCapturingAppender>();

        public static LogbackCapturingAppender weaveInto(org.slf4j.Logger sl4jLogger) {
            LogbackCapturingAppender appender = new LogbackCapturingAppender(sl4jLogger);
            ALL.add(appender);
            return appender;
        }

        public static void cleanUp() {
            for (LogbackCapturingAppender appender : ALL) {
                appender.cleanUp();
            }
        }
    }

    private final Logger logger;
    private ILoggingEvent captured;

    public LogbackCapturingAppender(org.slf4j.Logger sl4jLogger) {
        this.logger = (Logger) sl4jLogger;
        connect(logger);
        detachDefaultConsoleAppender();
    }

    private void detachDefaultConsoleAppender() {
        Logger rootLogger = getRootLogger();
        Appender<ILoggingEvent> consoleAppender = rootLogger.getAppender("console");
        rootLogger.detachAppender(consoleAppender);
    }

    private Logger getRootLogger() {
        return logger.getLoggerContext().getLogger("ROOT");
    }

    private void connect(Logger logger) {
        logger.setLevel(Level.ALL);
        logger.addAppender(this);
        this.start();
    }

    public String getCapturedLogMessage() {
        return captured.getMessage();
    }

    @Override
    protected void append(ILoggingEvent iLoggingEvent) {
        captured = iLoggingEvent;
    }

    private void cleanUp() {
        logger.detachAppender(this);

    }
}

I thought it’d be useful to share this and I’ve created a github project to host the code.

General testing approach for logging in Java

I’ve seen a number of different patterns for testing logging and we just ran into this again at work, so I thought it’d be worth writing up a couple of them.

Simple logging
Many of the Java-based logging frameworks allow you to log and then separate what you log from what is done with it by attaching appenders. The common pattern here is to declare a static log instance inside each class, typically named using the fully qualified class name. Most logging frameworks then treat this as a hierarchy of loggers, allowing you to configure different types of logging at different levels.
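
In SLF4J terms, that declaration looks roughly like this (the class is purely illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderProcessor {
    // Logger named after the fully qualified class name,
    // so it slots into the framework's logger hierarchy.
    private static final Logger LOG = LoggerFactory.getLogger(OrderProcessor.class);

    public void process() {
        LOG.info("Processing an order");
    }
}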

I find the best strategy for testing this style of logging is to add an in-memory appender that captures the output sent to the logging framework. I will post an example with a specific framework shortly, but here are a few different concerns you need to think about:

  • Getting a reference to the log – Most loggers are made private static. You can choose to break encapsulation slightly by weakening the access to package-local, just so that a test can get access to the log. I find that injecting the logger as a constructor dependency seems too complicated, and I dislike setter-based dependency injection as I try to keep fields immutable.
  • Creating an appender to capture the output – This is where you’ll have to go to whichever logging framework you use and find out how appenders work. Most have a console appender, or something similar that you can simply extend to capture a logging event.
  • Make the appender capture the output – This is pretty easy. You must choose whether you want only your appender to capture the log events, or whether you want them to go to other appenders as well.
  • Clean up – Adding an appender with state is not a problem in a single test, but when wired up across a long test suite you potentially increase the amount of memory being consumed to the point where you’ll get out-of-memory errors. I find it’s important to make sure that your appenders remove themselves from the logger at the end of each test, both to avoid side effects and to make sure they get garbage collected by the JVM. In a following post, you’ll see the pattern I tend to use.

More complex logging services
If your application requires more than just file-based logging, and other actions need to be performed, you may still consider injecting a logger of some kind into the class that uses it. At this point, testing it becomes like any other interaction-based test and you can use mocking or stubbing to check that the correct things are passed to it.
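
As a rough sketch of what I mean, assuming a hypothetical AuditLogger abstraction and using Mockito for the interaction-based test:

import org.junit.Test;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class PaymentServiceTest {

    // Hypothetical application-level logging service.
    interface AuditLogger {
        void record(String event);
    }

    // The class under test takes the logging service as a constructor dependency.
    static class PaymentService {
        private final AuditLogger auditLogger;

        PaymentService(AuditLogger auditLogger) {
            this.auditLogger = auditLogger;
        }

        void pay(String accountId) {
            // ... domain behaviour would go here ...
            auditLogger.record("Payment made for " + accountId);
        }
    }

    @Test
    public void shouldRecordAnAuditEventWhenPaying() {
        AuditLogger auditLogger = mock(AuditLogger.class);
        PaymentService service = new PaymentService(auditLogger);

        service.pay("account-42");

        verify(auditLogger).record("Payment made for account-42");
    }
}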

By default, I wouldn’t really go through the motions of setting up a logging service unless there was some real need for it. The standard style of loggers and configuration of behaviour gives you quite a lot already.

Running Scalatra Tests with JUnit (in a maven or ant build)

I’ve been playing around with Scalatra, and one of the things that wasn’t quite obvious was that when I ran it as part of a test build (using the JUnit runner), these tests weren’t getting picked up.

The key to this was ensuring you annotate the class with the @RunWith(classOf[JUnitRunner]) annotation so the tests are picked up as part of the build.

package com.thekua.scala.example

import org.scalatra.test.scalatest.ScalatraFunSuite
import org.scalatest.matchers.ShouldMatchers
import org.scalatest.junit.{JUnitRunner, JUnitSuite}
import org.junit.runner.RunWith

@RunWith(classOf[JUnitRunner])
class JsonScalatraTest extends ScalatraFunSuite with ShouldMatchers {
  val servletHolder = addServlet(classOf[JsonifiedServlet], "/*")
  servletHolder.setInitOrder(1) // force load on startup

  test("JSON support test") {
    get("/") {
      status should equal (200)
    }
  }
}

We do “TDD.” Oh really?

During Agile 2011 I tried to speak to people about what technical practices they used, and to what degree. Unfortunately I left the conference slightly disappointed, and I now understand why Uncle Bob champions the Software Craftsmanship movement as a way of redressing a balance the agile community at large has lost.

Don’t get me wrong. I appreciate that we need to focus on people and system issues in software development just as much. However, it’s laughable to think that software development is the easy part, and I’m not the only one who feels this way. To succeed at real agility, you need just as much discipline and attention to detail in the technical practices, tools and methods as you do in everything else. It seemed like people were trying to build a good sports team by finding the best people and hiring the best coaches, and then forgetting to schedule time for the drills and training.

One particular conversation stuck with me. We had it in the airport lounge at Salt Lake City before departing for Chicago. It went something like this:

Other: We only tend to TDD the happy path
Me: Oh? Out of interest, what sort of test coverage do you end up with?
Other: About 20 or 30%. Why, how about you?
Me: Depends on what sort of application. Adding up the different types of tests (unit, integration, acceptance), I’d probably guess in the high 80s to 90s.
Other: Wow. That’s really high.

This conversation got me thinking about whether the other person’s team really benefits from doing TDD. I’m guessing not. They are probably getting some value from the tests, but they’d probably get equal value from writing the tests afterwards. If you really wanted to be a pedant, you could ask: how do you do TDD without actually driving (most of) the code with tests? I’m not a TDD zealot – I’ve even blogged about when to skip TDD – however one of the biggest benefits of writing tests first is the feedback they give and the way they change how you design code. I’m guessing that by skipping the “unhappy paths”, there’s plenty of feedback they miss out on.

Ruby Script to Capture HTTP traffic

I spent some time early this week trying to debug why a test was failing. It was using RESTEasy and I wanted to find out exactly what HTTP packets it was sending. If I were on a Windows machine, I’d reach for Fiddler, but I couldn’t find anything as easy on the Mac, so I wrote a little Ruby script that I could point the client at to dump out all the headers and the request body.

It looks like this:

require 'webrick'
include WEBrick

class Simple < WEBrick::HTTPServlet::AbstractServlet
  def do_POST(request, response)
    puts "Body: " + request.body
    puts "Header: " + request.raw_header.to_s

    response.status = 200
  end
end

server = HTTPServer.new(:Port => 5899)
server.mount "/", Simple

['TERM', 'INT'].each do |signal|
trap(signal){ server.shutdown }
end

server.start

Update
Upon publishing this, I got a couple of alternatives worth looking into for next time.

  • Jim suggested using tcptrace; and
  • Gaz suggested using the following command: sudo tcpdump -vvv -A -s 0 -i en1 tcp 'port 5899'

This is just yet another example of why it’s important to have diversity when problem solving. I didn’t even know about these tools. Now I do, and you do too.
