Filed under Java.

I’ve been familiarising myself with the new Java 8 language features. It’s great how much easier it is to work around language limitations now that we have lambdas.

One annoyance with Java is that try blocks cannot be used as expressions.

With “if” conditionals we can use the ternary operator, which is an expression.

    String result = condition ? "this" : "that";

But you cannot do the equivalent with a try block.
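
The usual workaround is a mutable local variable, or extracting a helper method:

    String result;
    try {
        result = "try";
    } catch (NullPointerException e) {
        result = "catch";
    }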

However, it’s fairly easy now that we have lambdas.

    @Test public void should_return_try_value() {
        String result = Try(() -> {
            return "try";
        }).Catch(NullPointerException.class, e -> {
            return "catch";
        }).apply();
 
        assertEquals("try", result);
    }
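
This works by wrapping the try lambda in a small builder. Here’s a minimal sketch of one way it could be implemented (TryBuilder and Expression are names invented for this sketch, with Try statically imported; the actual implementation is the one on GitHub):

    import java.util.function.Function;
    import java.util.function.Supplier;
 
    public class TryBuilder<T> {
        public interface Expression<T> { T apply(); }
 
        public static <T> TryBuilder<T> Try(Supplier<T> block) {
            return new TryBuilder<>(block);
        }
 
        private final Supplier<T> block;
        private TryBuilder(Supplier<T> block) { this.block = block; }
 
        // Pairs the try block with a handler for one exception type.
        // Nothing runs until apply() is called on the resulting expression.
        public <E extends RuntimeException> Expression<T> Catch(Class<E> type, Function<E, T> handler) {
            return () -> {
                try {
                    return block.get();
                } catch (RuntimeException e) {
                    if (type.isInstance(e)) return handler.apply(type.cast(e));
                    throw e; // not a type we handle
                }
            };
        }
    }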

Code and more tests on GitHub.

Filed under XP.

There was a thread about pair programming on the London Java Community mailing list last week. I tried to contribute to the discussion there, but the mailing list doesn’t receive my replies. So I’ll post my thoughts here instead.

I have been pair programming most days in an XP team for the past 3 years. These are some of my experiences/thoughts. Obviously I enjoy pairing or I wouldn’t still be doing it, so I am biased. This is all entirely anecdotal (based on my experience).

Why work in pairs?

Code reviews are terrible

A common “alternative” to pairing is code reviews. One of the reasons I like pairing is that it fixes the problems I have experienced with code reviews.

Code reviews are commonly implemented with flaws, including:

  1. The feedback cycle is long

    A developer may work on a change for a day or two before submitting code for a review.

    This means they have invested a considerable amount of time and effort into a particular approach, which makes it hard not to take criticism personally.

    It can also waste a lot of time re-doing changes, or worse, lead to the temptation to let design issues slip through the code review because going back and re-doing the work would be too costly.

  2. Focus on micro code quality

    Sometimes only a diff is reviewed. This seems to promote nitpicking about formatting, naming conventions, guard clauses vs indentation and so on.

    These issues may be important, but I feel they’re less so than the effect of the changes on the design of the system as a whole. Does the changeset introduce more complexity into the model, or simplify it? Does it provide more insight into the model that suggests any significant refactorings?

I find that pairing, when implemented well, can avoid these problems. The feedback loop couldn’t be any tighter, as your pair is right there with you as you write the code. The navigator is free to consider the bigger-picture effect of the changes.

It’s social

Programming can be isolating. You sit in a room all day staring at a computer screen. With pair programming you get to write code all day and still talk to people at the same time.

It is faster

It can often seem slower than working alone, but in my experience of timing tasks, pairing actually ends up taking less time. It’s easy to underestimate how long you can be blocked on small problems when working alone; this happens less often when pairing. Pairing also keeps you focused and stops you getting distracted by IRC or news sites.

It produces higher quality code

A lot of defects get noticed by the navigator that would have slipped through to a later stage. The temptation to take shortcuts or not bother with a refactoring is reduced because you’re immediately accountable to your pair. The caveat to this is that conceptual purity can be reduced by rotation.

How to pair well

Share the roles, swap regularly and be flexible

No-one likes to sit and watch while someone else types for hours on end. If you hog the keyboard, then your pair may lose concentration or not follow what is going on. Swapping roles helps both of you remain focused, and provides a change of pace. It’s sometimes easier to swap roles when the navigator has an idea they want to explore, or a suggestion that’s quicker to communicate through code than verbally.

Use with TDD

TDD provides a natural rhythm that helps pairing. If you find that one person is driving too much, one way of restoring the rhythm is for one half of the pair to write a test and the other to implement, swapping again for each refactoring stage. It’s best to be flexible rather than sticking to this rigidly. Sometimes it makes sense to write two or three test cases at once rather than limiting yourself to one, while you’re discussing possible states/inputs.

TDD also helps to do just enough generalisation. When working in a pair it can be easy to talk yourself into doing unnecessary and unhelpful work to make an implementation ever more general and abstract. I find that often, when someone suggests a more general interface to enable potential future re-use, it turns out to never be used again. Abstraction for the sake of abstraction can also make the intent of the code less clear.

TDD’s Red/Green/Refactor stages help to refactor for immediately valuable re-use within the existing codebase, but the act of writing tests for features you don’t actually need helps you to consider carefully whether it really is worthwhile.

Communicate constantly

You need to be constantly talking about what you’re doing. Questioning and validating your approach, considering corner cases etc.

Rotate regularly

Swapping pairing partners regularly helps to spread knowledge of how things are implemented around the team. Rotation also often provides further incentive to improve the implementation, as a fresh set of eyes will see issues that the previous pair had been blind to. It also means you’re less likely to get fed up with working so closely with the same person for an extended period of time.

On more complex and/or longer tasks, it can be useful for one person to remain working on the same task for two or three days, to ensure there is some continuity despite the rotation.

Use with shared ownership

Pairing enforces shared ownership as there’s never just one person who has worked on a particular feature. In order to rotate pairing partners, everyone needs to be free to work on any part of the codebase, regardless of who it was written by.

When not to pair

When spiking a solution

Pairing works well to ensure that a feature is implemented to a high standard, when both people have a reasonable idea how to go about implementing it. It is not good for exploring ways to implement something that is unfamiliar. It’s easier to find a solution to an unknown problem when working alone, where you can concentrate intensely and have uninterrupted thought. This does mean you need to break down tasks into an initial spike step, and a second implementation step.

When trying out new things

It’s important to have time when not pairing to allow for innovation and to explore unconstrained ideas. Otherwise you can end up with groupthink, constantly playing it safe and using techniques that everyone has used before and knows work.

On trivial changes

Having two people work on making copy changes is probably a waste of resources. Deciding where to draw the line is tricky. I think if the change needs more than a couple of tests it is a good idea to pair on it.

If it’s unworkable for your team

There are lots of contexts in which pairing is simply not possible: e.g. distributed teams in different timezones, or open source projects with sporadic contributions from a large number of people.

If you don’t enjoy it

Pairing is hard work; it’s certainly not for everyone.

It’s tiring

You have to be alert for long periods of time. You can’t drift off or distract yourself in the middle of a pairing session in the same way that you would when working alone.

It requires patience

It can often feel like progress is slower than it would be when working alone. If you’re not driving, it can be frustrating to watch someone fail to use keyboard shortcuts, or type slowly. If you are driving then you have to slow yourself down to constantly discuss what you’re doing and why.

It can reduce conceptual purity

This is more down to rotation than pairing itself. When one person has implemented a feature, you can often see a single vision for that feature when reading the code. Some of this seems to be lost with regular rotation, just as a novel would read slightly oddly if each chapter were written by a different author.

It can stop you doing things you want to do

It can be enjoyable to have the freedom to divert and work on things you feel are important but aren’t really relevant to the task at hand. This tends to happen less when pairing, because you’d both have to see the diversion as important.

There can be personality clashes

Is pairing worthwhile?

This comes up in any discussion on pairing. Having two developers working on a problem doubles the cost. Wouldn’t they have to work at double the speed in order for pairing to make sense?

Well, no.

Most of the cost of a feature occurs after it has been developed: in support, maintenance, resistance to future change, and so on. Any improvements you can make at the point of development to reduce defects and make the software easier to maintain should yield big benefits later.

Then there are other benefits to the team.

  • It takes less time for new developers to come up to speed with the codebase, and technologies in use, when they are pairing with developers who already know what they are doing.
  • Implementation details of any part of the system are known by at least 2 people, even if they have failed to communicate them to the rest of the team. This reduces the team’s bus factor, and makes it less painful when a team member decides to move on.
  • Shared ownership is unavoidable. There’s no single person to blame for any problem. Failures are team failures and fixing things is everyone’s responsibility. This means the team gets to focus on how to stop things going wrong in the future.

Summary

  • I enjoy pairing because it gives the tightest feedback loop, and it’s social.
  • Pairing is good for teams.
  • Not all tasks are suitable for pairing.
  • Pairing well is hard.
  • Pairing is not for everyone.

Filed under Java.

One of the nice features of Nashorn is that you can write shell scripts in JavaScript.

It supports #! lines, # comments, reading arguments, and everything there’s a Java library for (including executing external processes, obviously).

Here’s an example:

#!/home/benji/nashorn7/bin/nashorn
#this is a comment
 
print('I am running from ' + __FILE__);
 
var name = arguments.join(' ');
print('Hello ' + name);
var runtime = Packages.java.lang.Runtime.getRuntime();
runtime.exec('xmessage hello ' + name);

Nashorn also comes with an interactive shell “jjs”, which is great for trying things out quickly.

If you want to run the scripts with Java 7 instead of Java 8 you’ll need to add Nashorn to the Java boot classpath. Simply modify the “bin/nashorn” script and append:

-J-Xbootclasspath/a:$NASHORN_HOME/dist/nashorn.jar

Filed under Java.

This weekend I have been playing with Nashorn, the new JavaScript engine coming in Java 8.

As an exercise I implemented a JUnit runner for JavaScript unit tests using Nashorn. Others have implemented similar wrappers; we even have one at work. None of the ones I have found do everything I want, and it was a fun project.

Because the JavaScript tests are JUnit tests they “just work” with existing JUnit tools like Eclipse, and as part of your build with ant/maven. The Eclipse UI shows every test function and a useful error trace (line numbers only work with Nashorn).

There are also lots of reasons you wouldn’t want to do this: your tests have to work in a very Java-y way, and you miss out on great features of JavaScript testing tools. There’s also no DOM, so you may end up having to stub a lot if you are testing code that interacts with the DOM. This can be a good thing, though, encouraging you not to couple code to the DOM.

Here’s what a test file looks like.

tests({
	thisTestShouldPass : function() {
		console.log("One == One");
		assert.assertEquals("One","One");
	},
 
	thisTestShouldFail : function() {
		console.log("Running a failing test");
		assert.fail();
	},
 
	testAnEqualityFail : function() {
		console.log("Running an equality fail test");
		assert.assertEquals("One", "Two");
	},
 
	objectEquality : function() {
		var a = { foo: 'bar', bar: 'baz' };
		var b = a;
		assert.assertEquals(a, b);
	},
 
	integerComparison : function() {
		jsAssert.assertIntegerEquals(4, 4);
	},
 
	failingIntegerComparison : function() {
		jsAssert.assertIntegerEquals(4, 5);
	}
});

You can easily extend the available test tools using either JavaScript or Java. In order to show the failure reason in JUnit tools you just need to ensure you throw a Java AssertionError at some point.
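
For instance, here’s a hypothetical sketch of the kind of Java helper that could sit behind the jsAssert.assertIntegerEquals calls in the example above:

public class JsAssert {
	public static void assertIntegerEquals(int expected, int actual) {
		if (expected != actual) {
			// AssertionError is what JUnit tools report as a test failure
			throw new AssertionError("Expected <" + expected + "> but was <" + actual + ">");
		}
	}
}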

The tests themselves are executed from Java by returning a list of Runnables from JavaScript.

var tests = function(testObject) {
	var testCases = new java.util.ArrayList();
	for (var name in testObject) {
		if (testObject.hasOwnProperty(name)) {
			testCases.add(new TestCase(name,testObject[name]));
		}
	}
	return testCases;
};

Where TestCase is a Java class with a constructor like:

  public TestCase(String name, Runnable testCase) {

Both Nashorn and Rhino will convert a JavaScript function to a Runnable automatically 🙂
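
For reference, a minimal version of that class might look like this (the public fields are an assumption of this sketch; the real class is in the repo):

public class TestCase {
	public final String name;
	public final Runnable testCase;
 
	public TestCase(String name, Runnable testCase) {
		this.name = name;
		this.testCase = testCase;
	}
}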

On the Java side we just create a Test Suite that lists the JavaScript files containing our tests, and tell JUnit we want to run it with a custom Runner.

@Tests({
	"ExampleTestOne.js", 
	"ExampleTestTwo.js",
	"TestFileUnderTest.js"
})
@RunWith(JSRunner.class)
public class ExampleTestSuite {
 
}

Our Runner has to create a hierarchy of JUnit Descriptions: Suite -> JS Test File -> JS Test Function.
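
Building that tree with the JUnit 4 Description API might look something like this sketch (jsFiles and testsIn() are stand-ins for however the runner discovers tests, not the actual implementation):

Description suite = Description.createSuiteDescription(suiteClass.getName());
for (String jsFile : jsFiles) {
	Description file = Description.createSuiteDescription(jsFile);
	suite.addChild(file);
	for (TestCase test : testsIn(jsFile)) {
		file.addChild(Description.createTestDescription(jsFile, test.name));
	}
}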

The Runner starts up a Nashorn or Rhino script engine, evaluates the JavaScript files to get a set of TestCases to run, and then executes them.

ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine nashorn = factory.getEngineByName("nashorn");
 
if (nashorn != null) return nashorn;
// Fall back to Rhino (registered as "js") when Nashorn is not available.
return factory.getEngineByName("js");
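
Evaluating a test file then yields the TestCases, which can be run and reported through JUnit’s RunNotifier. Roughly, as a simplified sketch (not the actual runner code):

List<TestCase> cases = (List<TestCase>) engine.eval(new FileReader(jsFile));
for (TestCase test : cases) {
	Description desc = Description.createTestDescription(jsFile, test.name);
	notifier.fireTestStarted(desc);
	try {
		test.testCase.run();
	} catch (Throwable t) {
		notifier.fireTestFailure(new Failure(desc, t));
	} finally {
		notifier.fireTestFinished(desc);
	}
}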

You can quickly implement stubbing that also integrates with your Java JUnit tools.

Here’s what a test that uses a stub looks like.

 
load("src/main/java/uk/co/benjiweber/junitjs/examples/FileUnderTest.js");
 
var stub = newStub();
underTest.collaborator = stub;
 
tests({	
	doesSomethingImportant_ThisTestShouldFail: function() {
		underTest.doesSomethingImportant();
 
		stub.assertCalled({
			name: 'importantFunction',
			args: ['wrong', 'args']
		});
	},
	doesSomethingImportant_ShouldDoSomethingImportant: function() {
		underTest.doesSomethingImportant();
 
		stub.assertCalled({
			name: 'importantFunction',
			args: ['hello', 'world']
		});
	}
});

To implement the stub you can use __noSuchMethod__ to capture interactions and store them for later assertions.

var newStub = function() {
	return 	{
		called: [],
		__noSuchMethod__:  function(name, arg0, arg1, arg2, arg3, arg4, arg5) {
			var desc = {
				name: name,
				args: []
			};
			// Rhino passes the arguments as an array in arg0;
			// Nashorn passes each argument individually.
			var rhino = arg0.length && typeof arg1 == "undefined";
 
			var args = rhino ? arg0 : arguments;
			for (var i = rhino ? 0 : 1; i < args.length; i++){
				if (typeof args[i] == "undefined") continue;
				desc.args.push(args[i]);
			}
			this.called.push(desc);
		},
 
		assertCalled: function(description) {
 
			var fnDescToString = function(desc) {
				return desc.name + "("+ desc.args.join(",") +")";
			};
 
			if (this.called.length < 1) assert.fail('No functions called, expected: ' + fnDescToString(description));
 
			for (var i = 0; i < this.called.length; i++) {
				var fn = this.called[i];
				if (fn.name == description.name) {
					if (description.args.length != fn.args.length) continue;
 
					// Only match if every argument is equal.
					var allMatch = true;
					for (var j = 0; j < description.args.length; j++) {
						if (fn.args[j] != description.args[j]) allMatch = false;
					}
					if (allMatch) return;
				}
			}
 
			assert.fail('No matching functions called. expected: ' + 
					'<' + fnDescToString(description) + ")>" +
					' but had ' +
					'<' + this.called.map(fnDescToString).join("|") + '>'
			);
		}
	};
};

The code is on GitHub.

It is backwards compatible with Rhino (the JavaScript scripting engine in current and older versions of Java). Most things seem just as possible in Rhino, but it’s easier to work with Nashorn due to its meaningful error messages.

You can also run Nashorn on Java 7 using a backport, adding nashorn.jar to the boot classpath with -Xbootclasspath/a:$NASHORN_HOME/dist/nashorn.jar

Filed under openSUSE, Uncategorized.

The Steam Linux beta is now open to everyone. I just installed it on my openSUSE PC. Here’s how.

Update: Andrew Wafaa pointed out that there’s an RPM package providing a much easier installation option, which I could have found myself ¬_¬

It wasn’t entirely straightforward, as there is only an Ubuntu package. These steps are unlikely to work on all setups, but they may help someone. I’m using 64-bit openSUSE 12.2.

Steam on Linux

1. Add tools repository

$ zypper ar http://download.opensuse.org/repositories/utilities/openSUSE_12.2/ alien

2. Install alien and Steam’s dependencies

Alien is a tool that can convert debian packages to RPMs.

$ zypper in alien libpango-1_0-0-32bit libgtk-2_0-0-32bit  mozilla-nss-32bit  libgcrypt11-32bit  libopenal1-soft-32bit libpulse0-32bit libpng12-0-32bit

3. Download the steam deb package

$ wget http://media.steampowered.com/client/installer/steam.deb

4. Convert steam deb to an rpm

$ alien --to-rpm ./steam.deb

5. Install rpm

$ rpm -Uvh ./steam*.rpm

6. Run steam

$ SDL_AUDIODRIVER=alsa steam

The SDL_AUDIODRIVER=alsa was needed for me because I have uninstalled pulseaudio (I like being able to play multiple audio streams at the same time).

Filed under openSUSE, webpin.

I have added support for searching by package names only. This was one of the most requested features.

I would like to make the normal search “just work” as much as possible and rank relevant search results highly. However, there do seem to be some good use cases for only searching package names.

You can do so by prefixing a search term with either name: to restrict your search to package names or exact: to find only packages that exactly match the specified name.

Compare name:amarok vs exact:amarok vs amarok

I’ve added a tips page to document these hidden features.

This week I’ve also:

  • Added the Google Chrome repository for openSUSE 12.2
  • Indexed more OBS repositories

Please do keep your suggestions and bug reports coming via email.

Filed under c#, Java.

Someone on IRC was asking whether it is possible to catch multiple types of exceptions at the same time in C#.

In Java 7 there’s a feature from Project Coin called multi-catch that enables the following syntax:

public class MultiCatch {
	public static void main(String... args) {
		try {
			throw new ExceptionA();	
		} catch (final ExceptionA | ExceptionB ex) {
			System.out.println("Got " + ex.getClass().getSimpleName());
		}
	}
}
 
class ExceptionA extends RuntimeException {}
class ExceptionB extends RuntimeException {}

Here the catch block catches either an ExceptionA or an ExceptionB. This can be useful if you want to handle several exceptions in the same way.
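
Before Java 7 the same handling logic had to be duplicated across catch blocks (or extracted into a method), like so:

try {
	throw new ExceptionA();
} catch (ExceptionA ex) {
	System.out.println("Got " + ex.getClass().getSimpleName());
} catch (ExceptionB ex) {
	System.out.println("Got " + ex.getClass().getSimpleName());
}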

I don’t believe C# has a similar language feature; however, since it has lambdas you can replicate something similar yourself like so:

using System;
 
public class MultiCatch {
  public static void Main() {
    Trier.Try(() => {
      throw new ExceptionA(" Hello A");
    }).Catch<ExceptionA, ExceptionB>(ex => {
      Console.WriteLine(ex.GetType() + ex.Message);
    });
 
    Trier.Try(() => {
      throw new ExceptionC(" Hello C");
    }).Catch<ExceptionA, ExceptionB, ExceptionC>(ex => {
      Console.WriteLine(ex.GetType() + ex.Message);
    });
  }
}

We create a method called Try and pass it an Action. We then chain a call to a Catch method, passing the exceptions we want to catch as type arguments. As you can see, we can vary the number of type arguments, which is something you can’t do in Java, partly due to type erasure.

The Try method simply passes the action through to a Catcher

  public static Catcher Try(Action action) {
    return new Catcher(action);
  }

and the Catch method has overloads for any number of exceptions you want to support. We can restrict the type arguments to only Exception types by using the where clause.

  public void Catch<T,U>(Action<Exception> catchAction) where T : Exception where U : Exception {
    try {
      action();
    } catch (T t) {
      catchAction(t);
    } catch (U u) {
      catchAction(u);
    }
  }

Here’s the full code listing:

using System;
 
public class MultiCatch {
  public static void Main() {
    Trier.Try(() => {
      throw new ExceptionA(" Hello A");
    }).Catch<ExceptionA, ExceptionB>(ex => {
      Console.WriteLine(ex.GetType() + ex.Message);
    });
 
    Trier.Try(() => {
      throw new ExceptionC(" Hello C");
    }).Catch<ExceptionA, ExceptionB, ExceptionC>(ex => {
      Console.WriteLine(ex.GetType() + ex.Message);
    });
  }
}
 
class Trier {
  public static Catcher Try(Action action) {
    return new Catcher(action);
  }
}
 
class Catcher {
  private Action action;
  public Catcher(Action action) {
    this.action = action;
  }
 
  public void Catch<T,U>(Action<Exception> catchAction) where T : Exception where U : Exception {
    try {
      action();
    } catch (T t) {
      catchAction(t);
    } catch (U u) {
      catchAction(u);
    }
  }
 
   public void Catch<T,U,V>(Action<Exception> catchAction) where T : Exception where U : Exception where V : Exception {
    try {
      action();
    } catch (T t) {
      catchAction(t);
    } catch (U u) {
      catchAction(u);
    } catch (V v) {
      catchAction(v);
    }
  }
}
 
class ExceptionA : Exception {
  public ExceptionA(string message) : base(message) {}
}
 
class ExceptionB : Exception {
  public ExceptionB(string message) : base(message) {}
}
 
class ExceptionC : Exception {
  public ExceptionC(string message) : base(message) {}
}

This is why Java needs lambdas. Because they fix everything ¬_¬

For reference, the closest I could do in Java prior to 7 was something like this:

public class MultiCatchWithoutCoin {
	public static void main(String... args) {
		new TrierII<ExceptionA, ExceptionB>() {
			public void Try() {
				throw new ExceptionA();
			}
			public void Catch(Exception ex) {
				System.out.println("Got " + ex.getClass().getSimpleName());
			}
		};
	}
}

I had to use an abstract class as Java has no lambdas. I also had to create a new class for each number of type arguments I wanted, because you can’t overload on the number of type arguments.

I also had to use Gafter’s gadget to access the types of the Exceptions to catch.

Here’s the full code listing.

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
 
public class MultiCatchWithoutCoin {
	public static void main(String... args) {
		new TrierII<ExceptionA, ExceptionB>() {
			public void Try() {
				throw new ExceptionA();
			}
			public void Catch(Exception ex) {
				System.out.println("Got " + ex.getClass().getSimpleName());
			}
		};
	}
}
 
abstract class TrierII<T extends Exception, U extends Exception> {
	public abstract void Try();
	public abstract void Catch(Exception ex);
	public TrierII() throws T, U {
		try {
			Try();	
		} catch (RuntimeException e) {
			if (getTypeArgument(1).isAssignableFrom(e.getClass()) || getTypeArgument(2).isAssignableFrom(e.getClass())) {
				Catch(e);
			} else {
				// Only rethrow exceptions we don't handle.
				throw e;
			}
		}
	}
 
	Class<? extends Exception> getTypeArgument(int num) {
		Type superclass = getClass().getGenericSuperclass();
		if (superclass instanceof Class) {
			throw new RuntimeException("Missing type parameter.");
		}
		Type type = ((ParameterizedType) superclass).getActualTypeArguments()[num - 1];
		if (type instanceof Class<?>) {
			return (Class<? extends Exception>) type;
		} else {
			return (Class<? extends Exception>) ((ParameterizedType) type).getRawType();
		}
	}
 
}
 
class ExceptionA extends RuntimeException {}
class ExceptionB extends RuntimeException {}

Filed under openSUSE, webpin.

Thanks to everyone who has been sending me bug reports and suggestions for webpinstant

This evening I fixed a number of indexing bugs.

Now that it has had some testing I’ll be making sure that all the remaining OBS repositories are indexed. I’ll also perform a complete re-index of all repositories to benefit from the indexing bugfixes.

Filed under openSUSE, webpin.

A few years ago I wrote a tool called webpin that allowed people to search for openSUSE packages across all openSUSE repositories, both by package names and their contents.

This was useful for finding which repository contained the package providing a particular file. openSUSE has an awful lot of separate package repositories (there are now over 3,000 for 12.2).

 

It’s now back, and better, at http://webpinstant.com/

 


I have now re-written this tool. It’s now much faster at both searching and indexing, meaning it can index a lot more and hopefully provide better results.

Sample Searches:

kde irc client

(brings back konversation and quassel)

bzcat

(brings back bzip2 on Debian)

Multiple Distributions

I have also added support for other distributions including Fedora, Debian, and Ubuntu. In fact any distribution that uses repo-md or Debian-style repositories can be easily added to the index.


So far it has an index of repositories for:

  • openSUSE 12.1
  • openSUSE 12.2
  • Fedora 17
  • Ubuntu Quantal
  • Debian Squeeze

Let me know which other distributions you’d like to see indexed. RHEL, CentOS, and SLES seem like obvious candidates.

Ubuntu Limitations

openSUSE build service repositories are detected automatically. I’d like to do the same for Ubuntu PPAs but haven’t yet worked out how to obtain a list of PPAs programmatically. If you have a suggestion please let me know. In the meantime please do suggest any important PPAs that it would be useful to index.

I also can’t find the Contents.gz files in Ubuntu PPAs that are available in other Debian repositories. These provide a lot more metadata for searching. Am I just failing to look hard enough, or are they missing from PPAs?

Search/Ranking

Some of the things indexed include:

  • Package Names
  • Summaries
  • Descriptions
  • RPM provides
  • Files within packages

Basically anything found in Primary.xml/FileLists.xml in repo-md repositories and anything found in the Packages and Contents lists in Debian repositories.

I have also put a lot of work into improving the search ranking based on cases that the original webpin performed poorly on. I suspect there will still be lots of things that are not found or not ranked highly. If you can’t find what you’re looking for with webpin then please let me know what your search terms were and if possible what you expected to appear as a result.

API and Command line app

I have added support for the old API used by Pascal Bleser’s command line webpin app.

You can try it out if you like. To update the tool to use the new webpin on openSUSE 12.2 you need to edit a couple of lines. First add the following to distVersionMap in /usr/lib/python2.7/site-packages/webpin/const.py:

'12.2': 'openSUSE_122',

Then change the server line in the same file as follows:

server = 'webpinstant.com'

Future Features

Things I’d like to add other than more distros / repositories include:

  • One Click Install support: adding install links to the search results. This is something the old webpin had.
  • A better API for desktop/command line clients.

Feedback

Please do let me know bug reports / feature requests. In particular I’m interested in

  • What distributions should I index?
  • What repositories should I index?
  • What searches result in badly ranked results?
  • What future features should I prioritise?

Please let me know by email (webpin at benjiweber.co.uk), find me on freenode (benjiman), or @benjiweber on twitter.

Filed under Uncategorized.

I thought I’d start posting some of my notes on tips for testing, starting with some tips and tricks for Mockito.

Mocking/Stubbing frameworks like Mockito help to test units of code in isolation with minimal boilerplate.

A couple of guidelines I like to aim to follow when writing tests are:

  • Each test should assert/verify just one thing (or as few things as possible)
  • Minimise stubbing noise per test

Sometimes it can be hard to write concise tests for concise, readable code. It’s often tempting to compromise the simplicity of the code under test in order to make the tests easier. However, Mockito is flexible enough that this can usually be avoided.

Obviously we have to be careful. Often (perhaps even usually) code being hard to test is a smell, and it’s better to re-think how the code is written to make it more naturally testable.

Here are three things that can make tests more difficult:

  • Use of the Builder Pattern
  • Use of Real Objects (not everything is stubbed)
  • Methods that return arguments (e.g. put on caches)

Use of the Builder Pattern

Here’s the first example. We’d like to write some code like the following.

public class Example1 {
	FoxBuilder foxBuilder;
	Dog dog;
 
	public void someMethod() {
		Fox fox = foxBuilder
			.speed(QUICK)
			.colour(BROWN)
			.legs(4)
			.longTail()
			.gender(MALE)
			.build();
 
		fox.jumpOver(dog);
	}
}

If you have a basic familiarity with Mockito you might be tempted to write a test like the following. Unfortunately, here:

  • The stubbing is very verbose (and could be worse in a less trivial example)
  • We are testing several things in a single test, so lots of different things could break it

@RunWith(MockitoJUnitRunner.class)
public class Example1NaiveTest {
	@Mock FoxBuilder foxBuilder;
	@Mock Fox fox;
	@Mock Dog dog;
 
	@InjectMocks
	Example1 example = new Example1();
 
	@Test public void naiveTest() {
		when(foxBuilder.speed(QUICK)).thenReturn(foxBuilder);	
		when(foxBuilder.colour(BROWN)).thenReturn(foxBuilder);
		when(foxBuilder.legs(4)).thenReturn(foxBuilder);
		when(foxBuilder.longTail()).thenReturn(foxBuilder);
		when(foxBuilder.gender(MALE)).thenReturn(foxBuilder);
		when(foxBuilder.build()).thenReturn(fox);
 
		example.someMethod();
 
		verify(fox).jumpOver(dog);
	}
}

Omitting one of the when() stubbings from the above test will result in a NullPointerException.

Fortunately Mockito has a solution. When a method invocation on a mock has not been stubbed in the test, Mockito will fall back to the “default answer”. We can also specify what the default answer will be. So let’s create a default answer suitable for builders.

Here we create an Answer to make Mocks return themselves from any method invocation on them that has a compatible return type.

public class Return {
	public static Answer<?> Self = new Answer<Object>() {
		public Object answer(InvocationOnMock invocation) throws Throwable {
			if (invocation.getMethod().getReturnType().isAssignableFrom(invocation.getMock().getClass())) {
				return invocation.getMock();
			}
 
			return null;
		}
	};
}

Now our test can look like this. We only have to stub out the build invocation. Notice the instantiations of the Mocks now tell Mockito to use our new default Answer.

@RunWith(MockitoJUnitRunner.class)
public class Example1Test {
	FoxBuilder foxBuilder = mock(FoxBuilder.class, Return.Self);
	FoxBuilder quickFoxBuilder = mock(FoxBuilder.class, Return.Self);
 
	@Mock Fox fox;
	@Mock Dog dog;
 
	@InjectMocks
	Example1 example = new Example1();
 
	@Test public void whenSomeMethodCalled_aFox_shouldJumpOverTheLazyDog() {
		when(foxBuilder.build()).thenReturn(fox);
		example.someMethod();
		verify(fox).jumpOver(dog);
	}
}

“Ah”, you might say. “Now we’re no longer checking we build a fox of the right type”. Well, if that’s important to us we can put it in another test. That way we stick to one test per item of behaviour we want to assert.

We can assert that the speed method on the builder is called

@Test public void whenSomeMethodCalled_shouldCreateQuickFox() {
	when(foxBuilder.build()).thenReturn(fox);
	example.someMethod();
	verify(foxBuilder).speed(QUICK);
}

Or, to more properly check that the dog is jumped over by a fox-that-is-quick we could utilise two builders to represent a state transition:

@Test public void whenSomeMethodCalled_shouldJumpOverAFoxThatIsQuick() {
	when(foxBuilder.speed(QUICK)).thenReturn(quickFoxBuilder);
	when(quickFoxBuilder.build()).thenReturn(fox);
	example.someMethod();
	verify(fox).jumpOver(dog);
}

 

Real Objects

Now, suppose we decided that the dog should be instantiated within the method instead of being a field on Example, making it harder to test.

Here is the code we want to write

public class Example2 {
	FoxBuilder foxBuilder;
 
	public void someMethod() {
		Fox fox = foxBuilder
			.speed(QUICK)
			.colour(BROWN)
			.legs(4)
			.longTail()
			.gender(MALE)
			.build();
 
		Dog dog = new Dog();
 
		dog.setLazy();
 
		fox.jumpOver(dog);
	}
}

We could create a dog factory and stub out the creation of the dog; however, this adds complexity and changes the implementation just for the test. We could use PowerMock to mock the Dog’s constructor; however, there can be valid reasons not to mock objects like this. For example, it’s good to avoid mocking value objects.

So, how can we test it with a real object? Use a Mockito utility called ArgumentCaptor.

Here we capture the real Dog object passed to the Fox mock, and can perform assertions on it afterwards.

@Test public void whenSomeMethodCalled_aRealFox_shouldJumpOverTheLazyDog() {
	when(foxBuilder.build()).thenReturn(fox);
	example.someMethod();
 
	ArgumentCaptor<Dog> dogCaptor = ArgumentCaptor.forClass(Dog.class);
	verify(fox).jumpOver(dogCaptor.capture());
	assertTrue(dogCaptor.getValue().isLazy());		
}

Methods that Return Arguments

 

Now let’s make it harder again. Suppose the real dog passes through another object such as a cache that we’d like to stub.

public class Example3 {
	FoxBuilder foxBuilder;
	Cache cache;
 
	public void someMethod() {
		Fox fox = foxBuilder
			.speed(QUICK)
			.colour(BROWN)
			.legs(4)
			.longTail()
			.gender(MALE)
			.build();
 
		Dog dog = new Dog();
		dog.setLazy();
 
		dog = cache.put(dog);
 
		fox.jumpOver(dog);
	}
}

This presents some challenges to test if we have stubbed the cache. One approach would be to test both the dog being lazy and the cache addition in the same test, re-using the ArgumentCaptor approach used above.

This is undesirable because there are two things being asserted in a single test.

@RunWith(MockitoJUnitRunner.class)
public class Example3Test {
	FoxBuilder foxBuilder = mock(FoxBuilder.class, Return.Self);
	FoxBuilder quickFoxBuilder = mock(FoxBuilder.class, Return.Self);
 
	@Mock
	Cache mockCache;
	@Mock
	Fox fox;
	@Mock 
	Dog dog;
 
	@InjectMocks
	Example3 example = new Example3();
 
	@Test public void bad_whenSomeMethodCalled_aRealFox_shouldJumpOverTheLazyDog() {
		when(foxBuilder.build()).thenReturn(fox);
		when(mockCache.put(any(Dog.class))).thenReturn(dog);
 
		example.someMethod();
 
		ArgumentCaptor<Dog> dogCaptor = ArgumentCaptor.forClass(Dog.class);
		verify(mockCache).put(dogCaptor.capture());
		assertTrue(dogCaptor.getValue().isLazy());
 
		verify(fox).jumpOver(dog);
	}
}

The trick is to use a Mockito Answer again to create a custom stubbing rule. Here we define an Answer that will return an argument passed to the mock method invocation.

public static <T> Answer<T> argument(final int num) {
	return new Answer<T>() {
		@SuppressWarnings("unchecked")
		public T answer(InvocationOnMock invocation) throws Throwable {
			return (T) invocation.getArguments()[num - 1];
		}
	};
}

Using this, the test is only one line longer than it was before adding the cache. We could even move the cache stubbing (the second line of the test) to an @Before method to further declutter the test, as it’s generic cache behaviour; see the sketch after the test below.

@Test public void whenSomeMethodCalled_aRealFox_shouldJumpOverTheLazyDog() {
	when(foxBuilder.build()).thenReturn(fox);
	when(mockCache.put(any())).thenAnswer(argument(1));
 
	example.someMethod();
	ArgumentCaptor<Dog> dogCaptor = ArgumentCaptor.forClass(Dog.class);
 
	verify(fox).jumpOver(dogCaptor.capture());
	assertTrue(dogCaptor.getValue().isLazy());
}
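
As a sketch, that generic stubbing moved to an @Before method might look like:

@Before public void cachePassesDogsThrough() {
	// The cache simply returns whatever dog is put into it.
	when(mockCache.put(any())).thenAnswer(argument(1));
}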

If we actually want to assert that the caching happens we can write another test. This test is concise, and if it fails we will know why.

If we cared about the specific properties of the dog being cached we could add the ArgumentCaptor back in.

@Test public void whenSomeMethodCalled_aDogShouldBeCached() {
	when(foxBuilder.build()).thenReturn(fox);
	example.someMethod();
	verify(mockCache).put(any(Dog.class));
}

The examples used in this post are on GitHub.

There are lots of other things you can use Mockito Answers for. Take a look at the Answers enum for some of the default answers provided. RETURNS_DEEP_STUBS can be useful, particularly for testing legacy code.
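
For example, RETURNS_DEEP_STUBS lets you stub a whole chain of calls in one line. A quick sketch (the Car and Engine types here are made up purely for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.RETURNS_DEEP_STUBS;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class DeepStubsExampleTest {
	interface Engine { String fuelType(); }
	interface Car { Engine engine(); }

	@org.junit.Test
	public void canStubACallChainInOneLine() {
		// Each intermediate call returns a deep stub automatically,
		// so the whole chain can be stubbed at once.
		Car car = mock(Car.class, RETURNS_DEEP_STUBS);
		when(car.engine().fuelType()).thenReturn("diesel");

		assertEquals("diesel", car.engine().fuelType());
	}
}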