
Automated Tests for Asynchronous Processes

It’s been a while since I’ve worked on a server-side application that had asynchronous behaviour that wasn’t already an event-driven system. Asynchronous behaviour is always an interesting challenge to design and test. In general, asynchronous behaviour should not be hard to unit test – after all, the behaviour of an action shouldn’t necessarily be coupled temporally (see forms of coupling).

TIP: If you are finding the need for async testing in your unit tests, you’re probably doing something wrong and need to redesign your code to decouple these concerns.

If your testing strategy only includes unit testing, you will miss a whole bunch of behaviour that is often caught at higher levels of testing like integration, functional or system tests – which is where I need asynchronous testing.

Asynchronous testing, conceptually, is actually pretty easy. Like synchronous testing, you take an action and then look for a desired result. However, unlike synchronous testing, your test cannot guarantee that the action has completed before you check for the side effect or result.

There are generally two approaches to testing asynchronous behaviour:

  1. Remove the asynchronous behaviour
  2. Poll until you have the desired state

Remove the asynchronous behaviour

I used this approach when TDD-ing a thick client application many years ago, when writing applications in Swing was still a common approach. Doing this required isolating the action-invoking behaviour in a single place so that, during testing, it would occur in the same thread as the test instead of in a different thread. I even gave a presentation on it in 2006, and wrote this cheatsheet describing the process.

This approach required design discipline, with the toggling of this behaviour isolated in a single place.
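
To make that concrete, here is a minimal sketch of the pattern with entirely hypothetical names: the thread-hopping is isolated behind an injected Executor, so production wiring passes a real thread pool while tests pass a same-thread executor.

import java.util.concurrent.Executor;

public class BackgroundAction {
    private final Executor executor;

    public BackgroundAction(Executor executor) {
        this.executor = executor;
    }

    public void run(Runnable action) {
        executor.execute(action);
    }

    public static void main(String[] args) {
        // Production wiring would pass a real pool, e.g.
        // new BackgroundAction(Executors.newSingleThreadExecutor());

        // Test wiring: a same-thread executor removes the asynchrony, so the
        // action has completed by the time the test makes its assertions
        BackgroundAction test = new BackgroundAction(new Executor() {
            public void execute(Runnable command) {
                command.run();
            }
        });
        test.run(new Runnable() {
            public void run() {
                System.out.println("ran synchronously in the caller's thread");
            }
        });
    }
}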

Poll until you have the desired state

Polling is a much more common approach to this problem; however, it brings the usual problems of wait intervals and timeouts. Waiting too long between polls increases your overall test time and extends the feedback loop. Waiting too little might also be quite costly depending on the operation (e.g. hammering some integration point unnecessarily).

Timeouts are another curse of asynchronous behaviour: you don’t really know when an action is going to take place, but you don’t want a test running forever.

The last time I had to do this, we ended up writing our own polling and timeout hooks – relatively simple code, but it’s now available as a very simple library. Fortunately other people in Java-land have encountered this problem too and contributed a library, Awaitility, to help make testing this easier.
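
Awaitility lets you tune both knobs explicitly – how often to poll and how long to wait before giving up. A trivial sketch (the interval and timeout values are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicBoolean;

import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;

public class PollingSketch {
    public static void main(String[] args) {
        final AtomicBoolean done = new AtomicBoolean(false);
        new Thread(new Runnable() {
            public void run() {
                done.set(true); // the "asynchronous" work
            }
        }).start();

        // Poll every 250ms (avoid hammering an integration point), but give
        // up after 5 seconds so a broken system cannot hang the test forever
        await().pollInterval(250, MILLISECONDS).atMost(5, SECONDS)
                .until(new Callable<Boolean>() {
                    public Boolean call() {
                        return done.get();
                    }
                });
    }
}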

Here is a simple test that demonstrates how easy the library can make testing asynchronous behaviour:

package com.thekua.spikes.aysnc.testing;

import org.junit.Before;
import org.junit.Test;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.Matchers.startsWith;
import static org.junit.Assert.assertThat;

public class FileGeneratorTest {

    private static final String RESULT_FILE = "target/test/resultFile.txt";
    private static final String STEP_1_LOG = "target/test/step1.log";
    private static final String STEP_2_LOG = "target/test/step2.log";
    private static final String STEP_3_LOG = "target/test/step3.log";

    private static final List<String> FILES_TO_CLEAN_UP = Arrays.asList(STEP_1_LOG, STEP_2_LOG, STEP_3_LOG, RESULT_FILE);


    @Before
    public void setUp() {
        for (String fileToCleanUp : FILES_TO_CLEAN_UP) {
            File file = new File(fileToCleanUp);
            if (file.exists()) {
                file.delete();
            }
        }
    }


    @Test
    public void shouldWaitForAFileToBeCreated() throws Exception {
        // Given I have an async process to run
        String expectedFile = RESULT_FILE;

        List<FileGenerator> fileGenerators = Arrays.asList(
                new FileGenerator(STEP_1_LOG, 1, "Step 1 is complete"),
                new FileGenerator(STEP_2_LOG, 3, "Step 2 is complete"),
                new FileGenerator(STEP_3_LOG, 4, "Step 3 is complete"),
                new FileGenerator(expectedFile, 7, "Process is now complete")
        );

        // when it is busy doing its work
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (final FileGenerator fileGenerator : fileGenerators) {
            executorService.execute(new Runnable() {
                public void run() {
                    fileGenerator.generate();
                }
            });
        }

        // then I get some log outputs
        await().atMost(2, SECONDS).until(testFileFound(STEP_1_LOG));
        await().until(testFileFound(STEP_2_LOG));
        await().until(testFileFound(STEP_3_LOG));

        // and I should have my final result with the output I expect
        await().atMost(10, SECONDS).until(testFileFound(expectedFile));
        String fileContents = readFile(expectedFile);
        assertThat(fileContents, startsWith("Process"));

        // Cleanup
        executorService.shutdown();
    }

    private String readFile(String expectedFile) throws IOException {
        return new String(Files.readAllBytes(Paths.get(expectedFile)));
    }


    private Callable<Boolean> testFileFound(final String file) {
        return new Callable<Boolean>() {
            public Boolean call() throws Exception {
                return new File(file).exists();
            }
        };
    }
}

You can explore the full demo code on this public git repository.

Crashplan on Mac OSX not compatible with Java 1.7

Last year, I decided to back up my data in the cloud. I liked the idea of Crashplan because it encrypts everything before shipping it off to the cloud. It runs in the background, and as long as you have a reasonable upload speed, backing up things like SLR photos isn’t so painful.

Unfortunately I rebooted my machine today and found that my backup service was no longer working. I scoured their Twitter stream and their website for a status update, but it all looked good. I figured something must have changed on my machine. I had forgotten that I’d installed JDK 7 earlier in the week, and didn’t immediately link the two events because I so rarely restart the Mac.

Fortunately this post told me how to reconfigure crashplan to run on Java 1.6 again. Thanks interwebs.

Taming the Hippo (CMS) Beast

I alluded in a previous post to our struggles dealing with the HippoCMS platform. It wasn’t our preferred path, but a choice handed down from above. Enough said about that. It’s useful to understand a little bit about the environment in which we were using it.

I believe the pressure to choose a CMS came from a deadline that required some platform choices to be made in the organisation. At the time, the extent of what the actual product needed to do was unknown. Our experience working with other clients is that you should generally work out what you want to do before you pick a product, or the platform will often dictate and limit your ability to do things. My colleague Erik Dörnenburg has been writing more about this recently.

The premise of a CMS is alluring for organisations. We have content… therefore we need a content management system. The thought ensues: “Surely we can just buy one off the shelf.” Whether or not you should use a CMS is for another blog post, and you can read some of Martin Fowler’s thoughts on the subject here.

We wanted to protect our client’s ability to evolve their website beyond the restrictions of their CMS, so we architected a system where content managed in a CMS would sit behind a content service, and a separate part of the stack focused on the rendering side. It looks a little like this:

[Diagram: a content service sitting between the CMS and the rendering stack]

The issues that we faced with HippoCMS included:

A small community based on the Java Content Repository

Hippo is based on the Java Content Repository (JCR) API, a specification for standardising the storage and access of content. Even as I write this blog, having typed “JCR” or “Java Content Repository”, I am forced to link to the Wikipedia page because I spent three minutes trying to find the official Java site (it looks like the official site is hosted by Adobe here). If the standard is small, the community surrounding its products is naturally going to be smaller. Unlike with Spring, putting a stack trace into Google will generally show the source code of the file rather than how someone got past the problem. I’d be happy living on the bleeding edge… if the technology was actually pretty decent.

Unfortunately a lot of the gripes I write about stem from the fact that the product itself is based on the JCR specification. Some simple examples include:

  • A proprietary query syntax – You query the JCR with an XPath-like query language. It’s actually less useful than XPath: it doesn’t implement all the functions available in XPath and has some weird quirks (see the sketch after this list).
  • Connecting to the repository via two mechanisms – either RMI (yuck! and inefficient) or in-memory. This automatically limits your deployment options to the application container model. Forget fast feedback loops of changing code, starting a Java process and then retesting.
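
For a flavour of that query syntax, here is a sketch using the standard JCR API; the node type, property and credentials are hypothetical:

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryResult;

public class JcrQuerySketch {
    public QueryResult findArticlesTitled(Repository repository, String title) throws Exception {
        Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
        // The XPath-like syntax: find all nodes of a (hypothetical) article
        // type with a matching title property
        Query query = session.getWorkspace().getQueryManager().createQuery(
                "//element(*, myproject:article)[@myproject:title = '" + title + "']",
                Query.XPATH);
        return query.execute();
    }
}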

Hippo CMS UI generates a huge number of exceptions
One reason Hippo was selected was for the perceived separability of the CMS editor and the website component (referred to as the Hippo Site Toolkit). We didn’t want to tightly couple the publishing/rendering side to the same technology stack as the underlying CMS. Hippo allows you to do this by having separately deployed artefacts in the application container. Unfortunately, the Wicket-based UI (maybe because we used it without the Hippo Site Toolkit) generates exceptions like nobody’s business. We spent some effort trying to understand the exceptions and fix them, but there were frankly too many to mention.

Poor taxonomy plugin implementation
One of the reasons Hippo was allegedly picked was for the taxonomy plugin. Unfortunately this gave us a world of pain, both in usability and in terms of maintaining it. The specific maintenance issues we faced included multi-language support (it didn’t allow it) and simply getting the plugin deployed without issues.

CMS UI lack of responsiveness
Our client’s usage of the site wasn’t very big – fewer than 300 articles and, at the peak, about 10 concurrent users. Let’s just say that even with three concurrent users, the UI was sluggish and unresponsive. We tried some of the suggestions on this page, but it’s a worry that it can’t responsively support more than a single user out of the box with standard configuration.

Configuration inside the JCR
Most of our projects take a pretty standard approach to implementing Continuous Delivery. We want to source-control configuration easily and script deployments so that releases into different environments are repeatable, reliable, rapid and consistent. Unfortunately a lot of the configuration for a new document type involves switching a flag to capture changes, playing around with the UI to define the document type, and then exporting a bunch of XML that you must load with some very proprietary APIs.

After several iterations, we were able to streamline this process as best we could, but that took some time (I’m guessing about two weeks of a developer’s time).

Lack of testability
We spent quite a bit of effort trying to work out the best automated testing strategy. Some of the developers first tried replicating the JCR structure the UI would create, but I pointed out that this would give us no feedback if Hippo changed the way it did its mapping. We ended up with some integration tests that drove the Wicket-based UI (with its wonderfully consistent but horrid set of generated IDs) and then poked our content service for expected results.

A pair of developers worked out a great strategy for dealing with this, working out the dynamically generated IDs and driving the UI via Selenium WebDriver to generate the data we would then query inside the proprietary XML-based data store.
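
A simplified, entirely hypothetical sketch of that approach (the real element IDs were Wicket-generated and far less readable):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CreateArticleViaCmsUi {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Drive the CMS editor UI to set up the test data...
            driver.get("http://localhost:8080/cms");
            driver.findElement(By.id("username")).sendKeys("admin");
            driver.findElement(By.id("password")).sendKeys("admin");
            driver.findElement(By.id("login")).click();
            // ... create and publish a document through the UI, then poll the
            // content service until the new article shows up in its responses
        } finally {
            driver.quit();
        }
    }
}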

Lack of real clustering
In “enterprise” mode, you can opt to pay for clustering support, although it’s a little strange because you aren’t recommended to upgrade a single node within a cluster while other nodes are connected to the same datastore (in case the shared state is corrupted). This makes seamless upgrades really difficult without a complicated DB mirror/restore switcheroo. We ended up architecting the system to offer degraded service, using caches on the content service as a compromise to the “clustered” CMS.

Summary
As much as I wish the Hippo group success, I think many of the problems stem from its inherent basis on the JCR. I do think there are a couple more things that could be done to make life easier for developers, including increasing the amount of documentation and thinking about how to better streamline automated, frequent deployments around the CMS.

Thoughts from Øredev 2011

Keynote 1: Alexis Ohanian

The first keynote, titled “Only your mom wants to use your website”, came from Alexis Ohanian, a geek who helped create Reddit, Hipmunk and a few other sites. He’s passionate about users, and you can really see how that manifests itself – very appropriate for a conference with the theme of Userverse. He told the audience, “As geeks, we’re at an advantage. There are so many bad websites out there, so if you focus on creating an awesome experience, it’s very easy to compete.” It all came back to treating your customers well and really delighting them.

He used some really great examples of how he engaged users with a couple of his websites. For example, with Reddit, there’s the mascot in the top corner of the page; he talked about doing a 30-day animation series that really connected with dedicated Reddit users, who were so concerned when the mascot went missing one day that they emailed in constantly to find out where he’d gone.

With Hipmunk, he recounted the story of personally stuffing envelopes with handmade Hipmunk travel goodies to send off to some of his users, for no reason other than to surprise them. In return, people sent photos of the Hipmunk in all sorts of places, travelling around. It’s the little things that really delight.

Keynote 2: Neal Ford

Neal’s a really awesome speaker, and I would highly recommend that any technical person watch his very strong presentations. Fortunately it looks like JFokus just published the same speech Neal gave at Øredev, so you can see it. The focus of his talk was Abstraction Distractions, a really important key message for us technical folks. It also relates well to this XKCD comic.

The whole premise of his talk is that users don’t really want to hear the technical reasons why something does or doesn’t work. You have to make them understand the impact it has. He seeds the presentation with lots of pro tips, including things like “Don’t confuse the abstraction with the real thing”, giving the example of wanting to store a recipe in a way that will outlast many technologies – even its representation isn’t quite the same thing as the recipe itself.

The ImageThink graphic facilitators had trouble keeping up with the pace Neal speaks at. He’s definitely the high-energy, many-ideas kind of guy.

Keynote 3: Dan North

Dan is a great and entertaining speaker whom everyone really enjoys. He spoke on “Embracing Uncertainty – the Hardest Pattern of All”. A lot of his entertaining anecdotes and stories focused on our human bias for closure.

Keynote 4: Jeff Atwood

I was glad to see Jeff present this keynote, “Social Software for the Anti-Social Part II: Electric Boogaloo”, as he had handed one of his speaking slots on a previous day to an employee, whose talk turned out to be a bit of a disappointment for many people. His keynote carried on from a previous talk, with lots of lessons learned, particularly about how they built Stack Overflow with game mechanics in mind.

It’ll probably be online soon, but it’s definitely worth watching as it strikes an interesting balance between democracy and openness, with some directed behaviour thrown in.

About the conference

I’m constantly impressed by the organisation and the quality of this conference, and I’m really surprised it doesn’t attract more people from Europe – it’s what I’d call a bit of a hidden gem. It has some really wonderful speakers, great topics, and the venue itself is pretty good (although there’s poor noise isolation between the different rooms). There are plenty of interesting events in the evenings, and it’s a great place to chat to people both during and after the conference, although I think the “unofficial” Øredev pub needs to grow a bit to accommodate so many geeks.

Other talks of significance

I went to quite a number of talks but will write up some of the more interesting ones (for me).

  • Copenhagen Suborbitals – This was a bit of a surprise talk, very late in the day (ending at about 9 or 10pm), from a guy based in Copenhagen who’s attempting to build his own spaceship to launch himself into suborbital flight. It’s a really amazing tale, and one I can appreciate from a guy who’s serious about following his passion. The talk started off quite entertainingly: building a spaceship was a bit ambitious, so he started by building a submarine! He’s a really engaging speaker and I don’t want to ruin too many of his good stories. I suggest going over to his blog (he’s still building his spaceship) and seeing where he is. He relies on donations to keep the project running, and I love that it’s run like an open-source project, with people offering their advice and expertise in many different areas. He’s got lots of great lessons to share that are completely relevant to everyone.
  • Aaron Parecki on his startup story for Geoloqi – I listened to Aaron talk about his startup. Along the same lines as the Suborbitals talk, he told the tale of what started as a hobby and eventually turned into a real startup opportunity, sharing a lot of his lessons along the way. It’s an interesting startup that you can read more about on Gizmodo here.
  • Jeff Patton – Jeff had a great session introducing people to the UX stage, setting the scene for many of the other speakers. Jeff has a wealth of wisdom and experience to share, and what was really powerful was him sharing images and stories about the different roles and techniques people use to build useful software and integrate it into agile processes. Really powerful stuff that I think every developer should go through.

Reflections on my talk
Titled “Collaboration by better understanding yourself”, my talk presented the idea that we have lots of in-built reactions as developers that hold us back from collaborating more effectively. My goal was for people to go away thinking more about the things that affect them and why they don’t collaborate as much as they should. I got some great feedback, and I was particularly nervous because not only did I have a good number of people attending, but also many presenters I really respect, including Portia Tung, Doc List, Johanna Rothman, Jean Tabaka, Jim Benson and more.

Although I’d practised, there are a few more tweaks I would make to this talk, but I was very happy with the people who came up to me throughout the conference saying they really connected with the topic and felt enthused to do something about it. Exactly what I wanted. 🙂

Testing logging with Logback

On my current project, we’re using the Logback framework (behind SLF4J) to do logging. For some parts of our system it was particularly important that certain information made its way into the log files, so we wanted to test for the correct output. Rather than do it with interaction-based tests, I followed the pattern I described in a previous post.

Here’s a test I might write (note that I’m writing the test in a way that actually tests the appender behaviour, because in this case my domain class does nothing special):

package com.thekua.spikes;

import org.junit.After;
import org.junit.Test;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

public class LogbackCapturingAppenderTest {
    @After
    public void cleanUp() {
        LogbackCapturingAppender.Factory.cleanUp();
    }

    @Test
    public void shouldCaptureAGivenLog() throws Exception {
        // Given
        LogbackCapturingAppender capturing = LogbackCapturingAppender.Factory.weaveInto(OurDomainWithLogger.LOG);
        OurDomainWithLogger domainClass = new OurDomainWithLogger();

        // when
        domainClass.logThis("This should be logged");

        // then
        assertThat(capturing.getCapturedLogMessage(), is("This should be logged"));
    }

    @Test
    public void shouldNotCaptureAGivenLogAfterCleanUp() throws Exception {
        // Given
        LogbackCapturingAppender capturing = LogbackCapturingAppender.Factory.weaveInto(OurDomainWithLogger.LOG);
        OurDomainWithLogger domainClass = new OurDomainWithLogger();
        domainClass.logThis("This should be logged at info");
        LogbackCapturingAppender.Factory.cleanUp();

        // when
        domainClass.logThis("This should not be logged");

        // then
        assertThat(capturing.getCapturedLogMessage(), is("This should be logged at info"));
    }
}

And the corresponding Logback appender used in tests.

package com.thekua.spikes;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.AppenderBase;

import java.util.ArrayList;
import java.util.List;

public class LogbackCapturingAppender extends AppenderBase<ILoggingEvent> {
    public static class Factory {
        private static List<LogbackCapturingAppender> ALL = new ArrayList<LogbackCapturingAppender>();

        public static LogbackCapturingAppender weaveInto(org.slf4j.Logger sl4jLogger) {
            LogbackCapturingAppender appender = new LogbackCapturingAppender(sl4jLogger);
            ALL.add(appender);
            return appender;
        }

        public static void cleanUp() {
            for (LogbackCapturingAppender appender : ALL) {
                appender.cleanUp();
            }
        }
    }

    private final Logger logger;
    private ILoggingEvent captured;

    public LogbackCapturingAppender(org.slf4j.Logger sl4jLogger) {
        this.logger = (Logger) sl4jLogger;
        connect(logger);
        detachDefaultConsoleAppender();
    }

    private void detachDefaultConsoleAppender() {
        Logger rootLogger = getRootLogger();
        Appender<ILoggingEvent> consoleAppender = rootLogger.getAppender("console");
        rootLogger.detachAppender(consoleAppender);
    }

    private Logger getRootLogger() {
        return logger.getLoggerContext().getLogger("ROOT");
    }

    private void connect(Logger logger) {
        logger.setLevel(Level.ALL);
        logger.addAppender(this);
        this.start();
    }

    public String getCapturedLogMessage() {
        return captured.getMessage();
    }

    @Override
    protected void append(ILoggingEvent iLoggingEvent) {
        captured = iLoggingEvent;
    }

    private void cleanUp() {
        logger.detachAppender(this);
    }
}

I thought it’d be useful to share this and I’ve created a github project to host the code.

General testing approach for logging in Java

I’ve seen a number of different patterns for testing logging and we just ran into this again at work, so I thought it’d be worth writing up a couple of them.

Simple logging
Many of the Java-based logging frameworks separate what you log from what is done with it by letting you attach appenders to loggers. The common pattern is to declare a static log instance inside each class, typically named with the fully qualified class name. Most logging frameworks then treat these as a hierarchy of loggers, letting you configure different types of logging at different levels.
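
The familiar declaration looks like this with SLF4J (the class name is just an example):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // Named after the fully qualified class name, so it slots into the
    // logger hierarchy for per-package configuration
    private static final Logger LOG = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder() {
        LOG.info("Order placed");
    }
}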

I find the best strategy for testing this style of logging is to add an in-memory appender that captures the output sent to the logging framework. I will post an example with a specific framework separately, but here are a few different concerns you need to think about:

  • Getting a reference to the log – Most loggers are made private static. You can choose to break encapsulation slightly by weakening the access to package-local just so that a test can get access to the log. I find that injecting it as a constructor dependency appears too complicated, and I dislike setter-based dependency injection in an attempt to keep fields more immutable.
  • Creating an appender to capture the output – This is where you’ll have to go to whichever logging framework you use and find out how appenders work. Most have a console appender, or something similar that you can simply extend to capture a logging event.
  • Make the appender capture the output – This is pretty easy. You must choose whether you want only your appender to capture the log events, or whether you want them to go to other appenders as well.
  • Clean up – An appender holding state is not a problem in a single test, but when wired up across a long test suite you can increase the amount of memory consumed to the point where you get out-of-memory errors. It’s important to make sure your appenders remove themselves from the logger at the end of the test, both to avoid side effects and to make sure they can be garbage collected by the JVM. In a following post, you’ll see the pattern I tend to use.

More complex logging services
If your application requires more than just file-based logging, and other actions need to be performed, you may consider injecting a logger of some type into the class that uses it. At this point, testing becomes like any other interaction-based test, and you can use mocking or stubbing to check the correct things are passed to it.
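
A minimal sketch of what that looks like with Mockito; the AuditLogger service and PaymentService here are invented for illustration:

import org.junit.Test;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class PaymentServiceTest {

    interface AuditLogger {
        void record(String message);
    }

    static class PaymentService {
        private final AuditLogger audit;

        PaymentService(AuditLogger audit) {
            this.audit = audit;
        }

        void pay(String account) {
            audit.record("paid " + account);
        }
    }

    @Test
    public void shouldRecordPaymentInAuditLog() {
        AuditLogger audit = mock(AuditLogger.class);

        new PaymentService(audit).pay("12345");

        // Interaction-based test: verify the right message reached the logger
        verify(audit).record("paid 12345");
    }
}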

By default, I wouldn’t really go through the motions of setting up a logging service unless there was some real need for it. The standard style of loggers and configuration of behaviour gives you quite a lot already.

String XML interpolation in Scala

In Java, if you’re formatting a small XML document, it might be tempting to simply use String.format to substitute values directly into a string. Transitioning to Scala, this is then easy to convert into one of its XML representations with XML.loadString. You might very well end up with code that looks like this:

val myValue = "will be substituted"
val xml = XML.loadString(String.format("<node>%s</node>", myValue))

You can actually just do this inline in Scala directly, and end up with this instead:

val myValue = "will be substituted"
val xml = <node>{myValue}</node>

Neat-o!

RESTEasy could not find writer for content-type application/xml type…

I had been spiking with RESTEasy in order to use some of the JAX-RS stuff for dealing with remote endpoints. We had a client library that already wrapped some of these endpoints, and some DTOs that mapped to the XML the remote endpoint wanted. We wanted to use the JAXB annotations to deal with the automatic conversions, but kept getting the dreaded…

java.lang.RuntimeException: could not find writer for content-type application/xml type: [.... Dto]

The secret to getting this to work was to ensure that you have the correct runtime libraries on the classpath – more specifically, the following jar:

GroupID: org.jboss.resteasy
ArtifactID: resteasy-jaxb-provider
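
In Maven terms, that means a dependency along these lines (the version shown is illustrative – match it to your RESTEasy version):

<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-jaxb-provider</artifactId>
    <version>2.3.5.Final</version>
</dependency>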

Thanks to this thread for the help on that.

Beware the Spring Globaltons In Integration Tests

As I mentioned in a previous post, we’ve been trying to fix a whole slew of tests that flicker, and we’ve spent some time fixing a number of integration tests that leave side effects.

This is the first time I’ve encountered the testing pattern (not recommended!) where SpringJUnit4ClassRunner loads up a Spring context, a bean is pulled out, and mocks are then used to stub out services. It’s a really evil pattern.

For one thing, mocks are about interaction-based testing, not really about stubbing. I typically use them to drive out class-based roles. In this case, however, they were used to cut out a portion of the application.

Using the above JUnit runner means there is one Spring context per test run – effectively a global pool of objects. Of course, when you pull one out and start modifying the objects, you get plenty of side effects across other tests. Unfortunately they don’t always manifest themselves in obvious ways.

Our solution was to use reflection to look at all the beans in the Spring context and fail the test if any of them were mocked instances. I’d still recommend you avoid this testing pattern altogether, but if you are forced down this route, you now have a way to detect side effects across tests.
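
Here’s a sketch of that check. It’s not our exact code – this version leans on Mockito’s mockingDetails (available from Mockito 1.9.5) to do the mock detection, and checks whether the beans themselves are mocks:

import org.junit.After;
import org.mockito.Mockito;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;

public abstract class DetectsLeakedMocksTestBase {

    @Autowired
    private ApplicationContext context;

    @After
    public void failIfAnyBeanIsAMock() {
        // Walk every bean in the shared context and fail loudly if another
        // test has swapped one out for a mock and not restored it
        for (String name : context.getBeanDefinitionNames()) {
            Object bean = context.getBean(name);
            if (Mockito.mockingDetails(bean).isMock()) {
                throw new AssertionError("Bean '" + name + "' is a mock leaked from another test");
            }
        }
    }
}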
