The intersection of technology and leadership

Category: Java (Page 4 of 5)

Coding styles lead to (or prevent) certain classes of bugs

I fixed a bug yesterday caused by a loop that terminated too early. It took me a while to work out that it was terminating too early, and I realised that I’ve never written this type of bug because of the way I tend to write code. Basically, the method looked like this:

public void someMethod() {
    for (String key : keys) {
        if (someCondition()) {
            return; // breaks out of the entire method, not just this iteration
        }
        doSomeProcessingForThe(key);
    }
}

As you can see, the innocent-looking return statement breaks out of the entire method, when what the author really wanted was a continue statement. Note that TDD had been applied to this codebase; it’s just another reminder that driving your code with tests isn’t enough (you also need to write comprehensive tests, particularly around loops).

I’m writing this post because I thought it was interesting to realise that a certain coding style removes (and possibly adds) a whole class of bugs. Had I been writing the code to start with, I probably would have avoided the return (and/or continue) statement altogether, resulting in the following code:

public void someMethod() {
    for (String key : keys) {
        if (!someCondition()) {
            doSomeProcessingForThe(key);
        }
    }
}
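
A test that exercises the loop beyond the first matching key would have caught the original early return. Here’s a hedged, self-contained sketch — the KeyProcessor class, its skip condition, and all names here are hypothetical stand-ins for the real code, not taken from it:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the real class under test.
class KeyProcessor {
    private final List<String> keys;
    private final List<String> processed = new ArrayList<String>();

    KeyProcessor(List<String> keys) {
        this.keys = keys;
    }

    // Example condition: skip keys with a particular prefix.
    boolean someCondition(String key) {
        return key.startsWith("skip");
    }

    void doSomeProcessingForThe(String key) {
        processed.add(key);
    }

    public void someMethod() {
        for (String key : keys) {
            if (!someCondition(key)) {
                doSomeProcessingForThe(key);
            }
        }
    }

    List<String> processed() {
        return processed;
    }
}

public class KeyProcessorTest {
    public static void main(String[] args) {
        // The crucial case: a matching key in the MIDDLE of the list.
        // With the buggy 'return', "c" would never have been processed.
        KeyProcessor processor = new KeyProcessor(Arrays.asList("a", "skip-b", "c"));
        processor.someMethod();
        if (!processor.processed().equals(Arrays.asList("a", "c"))) {
            throw new AssertionError("keys after the skipped one were not processed");
        }
    }
}
```

The point is that a test with only one key, or with the matching key last, passes against both the buggy and the fixed version.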

Prevention is easy with tools like Checkstyle, PMD or FindBugs (and maybe some of them already check for this). If not, I think it’d be an easy check to add.

Validating JVM Arguments in Code

I tried looking around on the Internet for a way, from code, to get the arguments passed to the JVM itself, rather than environment variables, system properties, or program arguments. As of Java 5, this apparently got a whole lot easier with java.lang.management.ManagementFactory.

Testing whether the JVM was started with a particular argument is easy with a method like:

import java.lang.management.ManagementFactory;
import java.util.List;

public boolean hasJvmArgument(String argument) {
    List<String> arguments = ManagementFactory.getRuntimeMXBean().getInputArguments();
    return arguments.contains(argument);
}

Why would you want to do that?
For some systems, it’s critical that they operate in a specially configured environment. In our current system, for example, we have a health check that runs a series of tests to validate that those assumptions still hold. To achieve our performance targets, it’s integral that our application starts with the correct JVM settings, and even with automation in place mistakes can slip through. Fortunately, with the code above we can simply add a JvmHasCorrectSettingsCheck to guarantee the application never gets deployed under the wrong circumstances.
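
A sketch of what such a check might look like, building on the method above — the constructor-injected list of required flags is my assumption, not the shape of our real check:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class JvmHasCorrectSettingsCheck {
    private final List<String> requiredArguments;

    public JvmHasCorrectSettingsCheck(List<String> requiredArguments) {
        this.requiredArguments = requiredArguments;
    }

    // Passes only if every required flag was actually given to this JVM.
    public boolean passes() {
        List<String> actual =
                ManagementFactory.getRuntimeMXBean().getInputArguments();
        return actual.containsAll(requiredArguments);
    }
}
```

The health check would construct it with whatever flags matter (say, Arrays.asList("-Xmx2g")) and fail loudly when passes() returns false.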

Thanks to Saager Mhatre for pointing me in the right direction!

Sprouting inner classes

Since returning to a more hands-on technical role, I’ve noticed a few habits that I didn’t realise I had, or have more recently acquired. One of these habits (credit to Michael Feathers for the pattern name) is my tendency to sprout tiny classes.

Perhaps it’s my aversion to writing too much procedural code in an object-oriented language, or perhaps it’s my preference for small, well-encapsulated objects. So far, my general approach seems to be:

1. I notice a particular class’ responsibility has grown too much. An easy temptation is to move the behaviour into a function, perhaps even a static one if it needs sharing. Instead, I sprout a static inner class that now owns that responsibility.
2. I move any state needed for that responsibility into the new class, keeping the tell, don’t ask principle intact as much as possible.
3. As I interact with the object more, I refine the class name, seeking to understand its responsibilities in different contexts. I might push more responsibilities into it, or move some responsibilities away from it.

I like to keep the class private until I’m happy that it makes sense and all its responsibilities relate to each other in some logical manner. When I’m confident the class is mature enough, I elevate it to a top-level class.
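
To make the habit concrete, here’s a small hypothetical example — the Order/DiscountPolicy names and the discount rule are invented for illustration: a discount calculation sprouted out of a growing class into a private static inner class that owns both the logic and the state it needs.

```java
public class Order {
    private final double total;
    private final DiscountPolicy discountPolicy;

    public Order(double total, int loyaltyYears) {
        this.total = total;
        this.discountPolicy = new DiscountPolicy(loyaltyYears);
    }

    public double amountPayable() {
        // Tell, don't ask: the sprouted class applies the discount itself.
        return discountPolicy.apply(total);
    }

    // Sprouted static inner class; stays private until its
    // responsibilities stabilise, then gets promoted to top level.
    private static class DiscountPolicy {
        private final int loyaltyYears;

        DiscountPolicy(int loyaltyYears) {
            this.loyaltyYears = loyaltyYears;
        }

        double apply(double amount) {
            return loyaltyYears >= 5 ? amount * 0.9 : amount;
        }
    }
}
```

Because the inner class is private, its name and interface can churn freely while it matures.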

What approaches do other people tend to favour? What can be improved? What doesn’t make sense?

Appreciating language features

Developing systems requires a very different outlook from developing libraries, and different again from designing languages. Even on projects where we have specific coding standards, there’s often a benefit to the language supporting much more than we use. I appreciate that language designers need to think about a much broader realm of possibilities than I normally do for the systems I develop.

My example of this appreciation came when I had to debug some Java code via a remote terminal on the box where the unwanted behaviour emerged. I was genuinely thankful for being able to write import java.util.* rather than having to specify every single class I wanted to use.

Generating a single fat jar artifact from maven

When using the jar-with-dependencies descriptorRef with the maven-assembly-plugin, it creates two files by default: the normal jar with library dependencies excluded, and a second jar, suffixed with “jar-with-dependencies”, with all the libraries included. Since I find the Maven help guides unintuitive, it took a while before we found the appendAssemblyId option to turn this off. Here’s the snippet of the pom.xml to create just a single fat jar.

<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <appendAssemblyId>false</appendAssemblyId>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>assembly</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Maven-assembly-plugin ignoring manifestEntries?

We’re using this Maven plugin to generate a fat jar for a utility, effectively un-jarring all library dependencies and re-jarring them into a single distribution. The first part was easy: hooking the assembly goal of the maven-assembly-plugin onto the package phase of the Maven build lifecycle. Our pom.xml had this entry in it:

<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.thekua.maven.ExampleProgram</mainClass>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>assembly</goal></goals>
    </execution>
  </executions>
</plugin>

We tried adding our own entry, CruisePipelineLabel, into the manifest, with a value that should be set by Cruise. We added the new section so our pom.xml now looked like this:

<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.thekua.maven.ExampleProgram</mainClass>
      </manifest>
      <manifestEntries>
        <CruisePipelineLabel>${env.CRUISE_PIPELINE_LABEL}</CruisePipelineLabel>
      </manifestEntries>
    </archive>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>assembly</goal></goals>
    </execution>
  </executions>
</plugin>

After running the build and inspecting the MANIFEST.MF, I couldn’t see the additional property. I did some searching and found a bug apparently fixed in version 2.2-beta-2. After some debugging, I found that the plugin simply does not include these additional entries if the value is not set. I tested this by changing the entry to:

<manifestEntries>
  <CruisePipelineLabel>aTestValue</CruisePipelineLabel>
</manifestEntries>

So the answer to whether maven-assembly-plugin ignores an element in manifestEntries: ensure the value is actually set before testing it. It looks like a null value is interpreted as “don’t include”.
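
Once the entry does make it into the manifest, it can be verified from inside the running jar. A hedged sketch using the standard java.util.jar.Manifest API — the class name and the null fallback are mine; “CruisePipelineLabel” matches the attribute configured above:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.Manifest;

public class PipelineLabel {

    // Parses the label out of a manifest stream.
    static String readFrom(InputStream manifestStream) throws IOException {
        Manifest manifest = new Manifest(manifestStream);
        return manifest.getMainAttributes().getValue("CruisePipelineLabel");
    }

    // Reads the manifest from the classpath; returns null when not
    // running from a jar. Caveat: with several jars on the classpath,
    // this lookup may find a different jar's manifest first.
    public static String read() throws IOException {
        InputStream in = PipelineLabel.class
                .getResourceAsStream("/META-INF/MANIFEST.MF");
        if (in == null) {
            return null;
        }
        try {
            return readFrom(in);
        } finally {
            in.close();
        }
    }
}
```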

Running tests on a specific OS under JUnit4

Our current development team is split, some working on Windows machines, others on Macs, with continuous integration tests running against both Windows and Linux environments. We’ve needed to run operating-system-specific JUnit4 tests, and I got a little tired of putting guard clauses into different @Before, @After and @Test methods to prevent particular blocks of code from running.

Using a custom org.junit.runner.Runner, wired in with the @RunWith annotation, and SystemUtils from commons-lang, here is the result:

package com.thekua.java.junit4.examples;

import org.apache.commons.lang.SystemUtils;
import org.junit.internal.runners.InitializationError;
import org.junit.internal.runners.JUnit4ClassRunner;
import org.junit.runner.notification.RunNotifier;

public class RunOnlyOnWindows extends JUnit4ClassRunner {

    public RunOnlyOnWindows(Class klass) throws InitializationError {
        super(klass);
    }

    @Override
    public void run(RunNotifier notifier) {
        if (SystemUtils.IS_OS_WINDOWS) {
            super.run(notifier);            
        }
    }
}

Our test classes then look like this:

...
@RunWith(value=RunOnlyOnWindows.class)
public class WindowsOnlySpecificFooBarUnitTest {
    ...
}

Of course, you can use any of the other variations like SystemUtils.IS_OS_UNIX or SystemUtils.IS_OS_MAC, though we haven’t needed to yet. This is also easily generalised into some sort of conditional-runner abstract class, but at least you get the basic idea.

java.io.File setReadOnly and canWrite broken on Windows

File.setReadOnly() doesn’t actually work on Windows (at least on the JDKs we’re working with). According to the bug report filed with Sun, it only sets the DOS flag that prevents the file from being deleted, not the one that prevents it from being written to.

Meanwhile, assuming you have mounted a read-only disk partition in Windows at, say, X:, new File("X:/").canWrite() returns true when it really should return false. I’m still trying to find the bug reported to Sun for this, and will update this entry to include it later. The behaviour persists when running against both jdk1.5.0_15 and jdk1.6.0_07.
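
Given that canWrite() can’t be trusted here, one workaround is to stop asking and start trying: attempt to actually create a file and see what happens. A blunt but honest sketch — my workaround, with an invented class name, not something from the bug report:

```java
import java.io.File;
import java.io.IOException;

public class WritableCheck {

    // Probes writability by actually creating (and removing) a temp file,
    // side-stepping the unreliable canWrite() answer.
    public static boolean isActuallyWritable(File directory) {
        try {
            File probe = File.createTempFile("probe", ".tmp", directory);
            probe.delete();
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```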

Executing native processes in Java on Windows

This is something that Java old timers will probably know, so I’m posting it more for my own reference. Trying to execute:

Runtime.getRuntime().exec("dir");

results in the following stack trace.

java.io.IOException: CreateProcess: dir error=2
  at java.lang.ProcessImpl.create(Native Method)
  at java.lang.ProcessImpl.<init>(ProcessImpl.java:81)
  at java.lang.ProcessImpl.start(ProcessImpl.java:30)
  at java.lang.ProcessBuilder.start(ProcessBuilder.java:451)
  at java.lang.Runtime.exec(Runtime.java:591)
  at java.lang.Runtime.exec(Runtime.java:429)
  at java.lang.Runtime.exec(Runtime.java:326)

After doing some reading, it turns out that error=2 apparently means “file not found” — dir is a cmd built-in rather than a standalone executable, so there is no dir binary to launch. The fix is to first pass everything to the Windows command-line shell (cmd.exe on Windows XP). This does the job better:

Runtime.getRuntime().exec("cmd /c dir");

The /c switch means “carries out the command specified by string and then terminates”.
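
The same idea, sketched with ProcessBuilder, which also makes capturing the command’s output straightforward; the OS check keeps the example runnable off Windows too, and the helper class name is mine:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ShellCommand {

    // Runs a command via the platform shell and returns its output.
    public static String run(String command)
            throws IOException, InterruptedException {
        boolean windows = System.getProperty("os.name")
                .toLowerCase().contains("windows");
        ProcessBuilder builder = windows
                ? new ProcessBuilder("cmd", "/c", command)
                : new ProcessBuilder("sh", "-c", command);
        builder.redirectErrorStream(true); // merge stderr into stdout

        Process process = builder.start();
        StringBuilder output = new StringBuilder();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            output.append(line).append("\n");
        }
        process.waitFor();
        return output.toString();
    }
}
```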

Maven FAQ

Maven only has 20 FAQs on its FAQ page. I’ve been working with it on a project recently, and frankly I had plenty more than 20 to ask. Here’s a list of a few things that I hope will help people searching for answers not described well in the Maven documentation.

How do I get a plain file (*.txt) included into my deployment file (i.e. jar, war, ear)?
Maven’s default file structure looks like ${projectBase}/src/main. The compile phase looks for production code under ${projectBase}/src/main/java, and for other files to include with your production deployment under ${projectBase}/src/main/resources. Adding your file to that directory will automatically include it in your deployment file.
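
As a quick illustration, once such a file is packaged it can be read back off the classpath at runtime — a minimal sketch, assuming a hypothetical helper like this (the class name is mine):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ResourceReader {

    // Reads a text file that src/main/resources placed at the classpath root.
    public static String readResource(String name) throws IOException {
        InputStream in = ResourceReader.class.getResourceAsStream("/" + name);
        if (in == null) {
            throw new IOException("Resource not found on classpath: " + name);
        }
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        try {
            StringBuilder contents = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                contents.append(line).append("\n");
            }
            return contents.toString();
        } finally {
            reader.close();
        }
    }
}
```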

When Maven downloads its dependencies, where does it store those files on a windows machine?
Look for files under C:\Documents and Settings\${userName}\.m2\repository.

Why doesn’t a feature from JDK5 or 6 work?
By default, Maven compiles against JDK 1.3. Set the source and target JDK in the maven-compiler-plugin configuration:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <source>1.5</source>
        <target>1.5</target>
    </configuration>
</plugin>

How can I create a single deployment file with all dependent libraries merged?
Instead of using the maven-jar-plugin, use the maven-assembly-plugin. Here’s an example:

<plugin>
     <artifactId>maven-assembly-plugin</artifactId>
     <configuration>
          <descriptorRefs>
               <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
          <archive>
               <manifest>
                    <mainClass>com.thekua.spikes.App</mainClass>
               </manifest>
          </archive>
     </configuration>
     <executions>
          <execution>
               <id>make-assembly</id>
               <phase>package</phase>
               <goals>
                    <goal>attached</goal>
               </goals>
          </execution>
     </executions>
</plugin>
