Posted by & filed under XP.

There are some tools that are considered almost a universal good amongst developers. Revision control systems tend to fall into this category. Everyone ought to be using one.

Continuous integration servers are also often bundled into this category. It’s the kind of tool interviewers ask whether you use, and judge you accordingly.

However, with continuous integration build servers, as with any tool, it is worth considering the effects they have on your team. What are the trade-offs we are making for the benefits they provide?

Why use a CI build server?

There are many reasons that a CI build server might help a team. It forces you to have an automated build since an unattended system will be running it.

It also provides a way to ensure the tests are run regularly for each project.

It makes sure that every code push gets built and ensures people are integrating “atomic” changesets. In short it helps keep your code in an always-deployable state.

Mixed blessings

There are other benefits of build servers that have more complex effects on the team.

Your build becomes asynchronous, which means developers are not blocked while waiting for the build and the full test suite, including any integration and performance tests, to run.

Only the build server(s) need to have a fast, deterministic, working full build process. This saves you from maintaining it across potentially diverse development environments, tracking down elusive environment-specific non-determinism in tests, and maintaining all the dependencies for integration tests on every development machine.

There is also often tool support for things like build dependencies – what order projects have to be built in.

The curse of Asynchronicity

Using a CI server to make a build asynchronous is often a response to frustration at slow build times.

By making your build asynchronous you remove much of the pain from the build being slow, and even non-deterministic.

It can be very tempting for the team to ignore a steadily increasing build time because there are other things that are causing them more pain. What does it matter if it now takes 15 minutes instead of 2 to go from a commit to a deployed system? You can work on other things while it is building.

However, our goal is not to minimise development time, it is to maximise the value we are delivering to an organisation, and minimise the time to deliver increments of that value to get rapid feedback.

We don’t just want fast build times to save development time. It is also necessary to enable us to keep our feedback loop from customers and end users short.

It’s not sufficient to push several times a day and know that it will end up in a production or production-like environment eventually and be happy that you have integrated your changes.

We ought to care about what we are deploying. If we push a new user-visible change we can then go and get some immediate feedback on it from our customer. If it’s not a user-visible change we should still care about its effect on things like the performance of the environment it is deployed to. We want feedback from our releases – otherwise what’s the point of releasing regularly?

We want to be able to change things in response to this feedback. That means that we actually do still care about the time it takes to build and deploy our code. If our build is asynchronous it’s tempting to start something else while it is happening and forget about getting that feedback, or have to context-switch to act on feedback once the build is done.

If we had a synchronous build we would have constant incentive to keep that build fast as we would feel the pain as it grows in length (particularly when it exceeds the duration of a coffee break). It is also difficult to ignore non-determinism in the build because it increases the build time.

Synchronicity forces us to keep the feedback loop for changes short. If we choose to make the build asynchronous we need to build in another mechanism to make slow and failing builds painful to the team.

Maintaining the build

Another somewhat subtle effect of build servers is that you have fewer environments from which you need the full test suite to run. For integration tests this means fewer dependencies to manage and environment configuration to maintain.

It is easy for the CI server to become the only place that all the tests can be run from and the only machine capable of building a deployable artifact.

This means the CI server is now critical infrastructure, as you can’t work without it. This can happen because you lose some of the pressure to automate the configuration of the build environment when you only have to do it in a single place. It’s also easy to manually embed configuration needed for a build in non-standard places via the CI server’s web interface.

One way to ensure this doesn’t happen might be to throw away your CI server after every build and spin up a new one. That would ensure it’s very painful for you if you haven’t fully automated the configuration of your build system.

Consider the trade-offs

You can get the benefits of a CI server through a team being disciplined at building and deploying every change. Similarly, you can also avoid the potential associated traps if you build in some other feedback mechanism to encourage you to keep builds fast. There are ways of working effectively with or without a CI server.

My point is that it is always worth thinking through the effect introducing new tooling will have on your team. Even something that seems to be all beneficial may have subtle effects that are not immediately obvious. Think about what your goals are, and whether the tools help achieve those goals or make your life easier at the cost of your goals.

Posted by & filed under XP.

This week I have been reading Simon Brown’s leanpub book “Software Architecture for Developers”. It is an interesting read and has lots of useful advice for creating a shared technical vision and communicating technical concepts effectively via sketches.

One thing I was struck by while reading it is just how many of the responsibilities of the “Software Architect” as a person’s role are unnecessary in a well functioning cross-functional team. A lot of the book is dedicated to explaining why software architects are needed to work around what seem to be organisational antipatterns.

Avoiding integration problems at delivery time

Development teams need to converse with technical stakeholders regularly just as with customers. Otherwise the project might be delayed towards the end of the delivery cycle due to not having the appropriate production environments set up, missing licences, or a separate operations team being unwilling to support it.

One way to avoid this is to regularly discuss your progress with technical stakeholders in the same way you would with your customers.

If the team is practising continuous delivery and deploying every feature they build to a production environment, they will not run into this problem. The first thing the team will do is create a project that can be automatically deployed to production, provision the production environments, and perform a deployment. They will then deliver working increments of the software into production after every feature they build.

In this scenario “late in the delivery cycle” might be towards the end of the first day or maybe the second.

Many problems can be mitigated by applying the principle of minimising feedback loops. The more frequently we get feedback the less painful, and the more useful it is. It will be immediately relevant and we won’t have wasted time on a poorly chosen approach. This is one of the reasons pair programming is so useful.

Continuous delivery forces us to get feedback from all relevant stakeholders as frequently as possible. It won’t just be our customers looking at features, we can’t put off getting feedback from stakeholders who have input on operational setup, security, budgets for licences/servers and so on.

Where possible, I think it is best to bundle these specialities into the development team itself. Give the team all the skills needed to deliver the product effectively. Whenever the team has to hand over to other teams there will be communication issues, delays, and other sources of friction.

Owning the big picture

The book makes the point that “Somebody needs to own the big picture” and steer the direction of a software project. The problem with anything important that has “somebody” owning it is that it is an immediate bus-factor for your project. It’s best if more than one person is ensuring there is a shared big picture.

Not only does this allow for people leaving the team, it also means there is more chance of picking up on important issues.

Simon actually makes this point himself: “This problem goes away with truly self-organising teams”.

Architecture is a responsibility of the team

It is important that thought is given to the structure of a system as a whole: how it interacts with other systems, key tradeoffs, meeting non-functional requirements, keeping the system malleable and avoiding it becoming resistant to change, and ensuring a shared technical vision. These are some of the things that might be part of an architect’s role.

However, I don’t think this means you need a person whose role is the Architect on a team, any more than any other role. It is a responsibility of the team. In an ideal team, each member would be thinking about overlapping and complementary issues. If the team is lacking skills/experience in these areas and is dropping these responsibilities it may be useful to have a specialist. Just the same as with QA, Ops, UX, product managers or any other area where you might require specialists.

Decisions are better made by those affected

Decisions that affect the complexity and reliability of a production system are better made by those affected – i.e. people who will be responsible for keeping it running.

Similarly, developers are in the best position to make decisions about tradeoffs that increase the complexity of implementation or maintenance – because they will be affected.

In an ideal team everyone will work on some aspects of implementation, maintenance, operations, and planning for the future. Consequently they will be in a good position to make decisions that trade-off cost in one area for benefit in another.

If technical decisions are made by someone with no skin in the game then they might not be the best decisions, and those affected may not buy in to them.

Fix the organisation first

To quote the book again

“In an ideal world, these cross-discipline team members would work together to run and deliver a software project, undertaking everything from requirements capture and architecture through to coding and deployment. Although many software teams strive to be self-organising, in the real world they tend to be larger, more chaotic and staffed only with specialists.”

Why not work on the organisational problems that mean we’re not in this “ideal world” and consequently reduce the need for specialist software architecture roles?

Posted by & filed under Java.

Method references in Java 8 will allow us to build much nicer APIs for interacting with databases.

For example, when you combine method references with features we already had in Java, it’s possible to create clean, typesafe queries without needing code generation.

Full examples and implementation available on github.

Here’s an example of what’s possible


    Optional<Person> person = from(Person.class)
            .where(Person::getFirstName)
            .like("%ji")
            .and(Person::getLastName)
            .equalTo("weber")
            .select(personMapper, connectionFactory::openConnection);

This queries the database for a person with last name ‘weber’ and first name ending in ‘ji’, and returns a Person if found.

It generates the following SQL, fills in the parameters, and deserialises the result into a Person object for us.

    SELECT * FROM person WHERE first_name LIKE ? AND last_name = ?

We are even able to make it typecheck the comparisons. The following gives a compile-time error as “hello” is not an Integer.

    Optional<Person> result = from(Person.class)
            .where(Person::getFavouriteNumber)
            .equalTo(5) // This is fine.
            .and(Person::getFavouriteNumber)
            .equalTo("hello") // This line fails to compile as "hello" is not an Integer
            .select(personMapper, connectionFactory::openConnection);

Here’s how it works.

Firstly, Optional<Person> uses Java 8’s Optional type to indicate that the query may not have found any matching people.

Person::getFirstName is a method reference to an instance method on a Person. What this gives us is a function that takes an instance of Person and returns the result of calling getFirstName on that instance – in this case, a Function<Person, String>.
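To make the types concrete, here is a minimal, self-contained illustration (this `Person` is a stand-in for the one used in the post):

```java
import java.util.function.Function;

public class MethodReferenceDemo {
    // Stand-in for the Person class used in the post
    static class Person {
        private final String firstName;
        Person(String firstName) { this.firstName = firstName; }
        String getFirstName() { return firstName; }
    }

    public static void main(String[] args) {
        // An unbound instance-method reference: it takes the receiver as its argument
        Function<Person, String> getFirstName = Person::getFirstName;
        System.out.println(getFirstName.apply(new Person("benji"))); // prints "benji"
    }
}
```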

Passing this to our where() method gives us back a SelectComparison<Person, String> instance. SelectComparison is an interface which has comparison methods such as equalTo or notEqualTo that only accept values matching the second type parameter – in our case String (because our method reference has a String as its return type).

    public <U> SelectComparison<T,U> where(Function<T,U> getter)

This lets us build up valid queries. The next trick is for the query builder to work out what the method name “getFirstName” actually is. We have just passed in a method reference, so it doesn’t know what the method is called.

To work around this we can use a Java dynamic proxy. When we call from(Person.class) we create a dynamic proxy that impersonates the Person type and simply records the names of any methods invoked on it. This is a similar approach to that used by mocking frameworks for tests.
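For interfaces, the recording idea can be sketched with a plain JDK dynamic proxy; the interface and field names below are illustrative, not the post’s actual code (which uses cglib so it also works for concrete classes):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RecordingProxyDemo {
    // Illustrative interface standing in for the getters on Person
    public interface PersonGetters {
        String getFirstName();
    }

    public static class Recorder implements InvocationHandler {
        public String lastMethodName;

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) {
            lastMethodName = method.getName(); // record the name; don't compute anything
            return null;                       // return value is never used
        }
    }

    public static void main(String[] args) {
        Recorder recorder = new Recorder();
        PersonGetters dummy = (PersonGetters) Proxy.newProxyInstance(
                PersonGetters.class.getClassLoader(),
                new Class<?>[]{PersonGetters.class},
                recorder);
        dummy.getFirstName();                        // invoked only to record the name
        System.out.println(recorder.lastMethodName); // prints "getFirstName"
    }
}
```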

The cglib library makes this really easy – and it now even works with Java 8.

    public class RecordingObject implements MethodInterceptor {
        private String currentPropertyName = "";

        public Object intercept(Object o, Method method, Object[] os, MethodProxy mp) throws Throwable {
            currentPropertyName = Conventions.toDbName(method.getName());
            return null; // the return value is never used; we only record the name
        }

        public String getCurrentPropertyName() {
            return currentPropertyName;
        }
    }
With this in place when we call where(Person::getFirstName) the implementation invokes the passed Function<Person, String> against our dummy proxy-object and asks the proxy object for the name of the invoked method and keeps a note of it for query generation. We can convert the names to an alternative format for the database using a naming convention. My preference is to convert namesLikeThis to names_like_this.
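The conversion itself is easy to sketch; this is a guess at what a `Conventions.toDbName`-style helper might do, not the post’s actual implementation:

```java
public class NamingConventions {
    // Convert a getter/setter name like "getFirstName" to a column name like "first_name".
    public static String toDbName(String methodName) {
        // Strip a leading get/set prefix if present
        String property = (methodName.startsWith("get") || methodName.startsWith("set"))
                ? methodName.substring(3)
                : methodName;
        // Insert '_' at lower-to-upper case boundaries, then lowercase everything
        return property.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toDbName("getFirstName"));  // prints "first_name"
        System.out.println(toDbName("namesLikeThis")); // prints "names_like_this"
    }
}
```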

    public <U> SelectComparison<T,U> where(Function<T,U> getter) {
        // Invoke the getter against the dummy recording object...
        getter.apply(recorder.getObject());
        // ...then ask the dummy object what the invoked method was called.
        String fieldName = recorder.getCurrentPropertyName();
        return new SelectComparison<T, U>() {
            public Select<T> equalTo(U value) {
                // Record the field name and value for query generation/execution
                whereFieldNames.add(new FieldNameValue<>(fieldName, value, "="));
                return Select.this;
            }
            // other comparison methods (notEqualTo, like, ...) elided
        };
    }


Now that we can generate these queries, we can use a similar approach to convert the ResultSet that we get from Jdbc back into a Person.

    Mapper<Person> personMapper = mapper(Person::new)
            .set(Person::setFirstName)
            .set(Person::setLastName)
            .set(Person::setFavouriteNumber);

Here we are constructing a Mapper that is able to take a row from a ResultSet and convert it into a Person type. We construct it with a builder that first takes in a Supplier<T> – a factory method that can give us back an instance of the type we are creating, ready to populate. In this instance we are using Person::new which is a method reference to the constructor of Person.

Next we pass in references to the setters that we want to call to populate the Person with values from the database.

We are able to use a similar trick to building the queries above. Here, our method reference Person::setFirstName gives us a function that takes a Person and also another value that the setter itself accepts. Our set() method accepts a BiConsumer<T,U> in this case a BiConsumer<Person,String> for the first two setters and a BiConsumer<Person,Integer> for the last one.
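A minimal illustration of the BiConsumer view of a setter reference (again with a stand-in `Person`):

```java
import java.util.function.BiConsumer;

public class SetterReferenceDemo {
    static class Person {
        private String firstName;
        void setFirstName(String firstName) { this.firstName = firstName; }
        String getFirstName() { return firstName; }
    }

    public static void main(String[] args) {
        // Person::setFirstName viewed as (receiver, value) -> receiver.setFirstName(value)
        BiConsumer<Person, String> setter = Person::setFirstName;
        Person person = new Person();
        setter.accept(person, "benji");
        System.out.println(person.getFirstName()); // prints "benji"
    }
}
```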

Using the above dynamic proxy trick we are able to again record the names of the setters for later use in querying the resultset. We also store the setter functions themselves to invoke when populating the object.

The map(ResultSet) function then just involves

  1. Construct a new instance using the factory method Person::new
  2. For each method reference passed in
    a) Query the resultset for its name
    b) Invoke the method reference, passing in the instance from 1. and the value from the ResultSet.
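Those steps can be sketched as follows; a plain `Map` stands in for a `ResultSet` row so the example stays self-contained, and all the names are illustrative rather than the library’s actual API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.Supplier;

public class MapperSketch {
    static class Person {
        String firstName;
        void setFirstName(String v) { firstName = v; }
    }

    // A recorded setter: the column name plus the setter function itself
    static class ColumnSetter<T> {
        final String columnName;
        final BiConsumer<T, Object> setter;
        ColumnSetter(String columnName, BiConsumer<T, Object> setter) {
            this.columnName = columnName;
            this.setter = setter;
        }
    }

    static <T> T mapRow(Supplier<T> factory, List<ColumnSetter<T>> setters, Map<String, Object> row) {
        T instance = factory.get();                 // 1. construct via e.g. Person::new
        for (ColumnSetter<T> s : setters) {         // 2. for each recorded setter:
            Object value = row.get(s.columnName);   //    a) look up its column in the row
            s.setter.accept(instance, value);       //    b) invoke the setter with that value
        }
        return instance;
    }

    public static void main(String[] args) {
        List<ColumnSetter<Person>> setters = new ArrayList<>();
        setters.add(new ColumnSetter<>("first_name", (p, v) -> p.setFirstName((String) v)));
        Map<String, Object> row = new HashMap<>();
        row.put("first_name", "benji");
        System.out.println(mapRow(Person::new, setters, row).firstName); // prints "benji"
    }
}
```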

We can extend this to more than just queries. We can create tables



Which generates

    CREATE TABLE IF NOT EXISTS person ( first_name text, last_name text, favourite_number INTEGER )

Inserting Values

For inserts we pass in an instance of Person rather than a Class<Person>. This means that when executing the insert statement we can invoke the passed method references against our Person instance to obtain the values for use in the insert statement.

    Person benji = new Person("benji","weber");

Which generates the following, populating it with values from the “benji” object.

    INSERT INTO person (first_name, last_name, favourite_number) VALUES ( ?, ?, ? )

We invoke the getter function twice. Once against our proxy to get the name, and once against our instance to get the value.

    public <U extends Serializable> Upsert<T> value(Function<T,U> getter) {
        // Invoke once against the recording proxy, purely to capture the field name
        U result = getter.apply(recorder.getObject());
        String fieldName = recorder.getCurrentPropertyName();
        // ...and once against the real instance, to capture the value
        setFieldNames.add(new FieldNameValue(fieldName, getter.apply(value)));
        return this;
    }


We can of course also do updates. We just combine the approaches used for queries and inserts.


Which generates the following, as before – populating it with values from the “benji” object.

    UPDATE person SET first_name = ? WHERE last_name = ?

There’s lots to look forward to with Java 8. It will be interesting to see what framework developers start doing with these features. Now we just have to wait for Java 8 to actually be released.

In case you missed the links above, read the full code examples and implementation on github.


Posted by & filed under Java.

SQL gives us a “coalesce” function, which returns the first non-null argument.

This seems to be a common operation in Java too, since Java unfortunately burdens us with the concept of nulls. We have been able to do something similar with Java for some time using a varargs method like this:

    public static <T> T coalesce(T... ts) {
        for (T t : ts)
            if (t != null)
                return t;
        return null;
    }

    String result = coalesce(somethingPossiblyNull(), somethingElse(), "defaultValue");

(There is a similar method on Guava’s Objects class.)

It looks nice and avoids the need for ugly ifs in some places. However, unfortunately it means that somethingElse() would have to be evaluated even if somethingPossiblyNull() returned a non-null value. This is not what we want if these are expensive, so we had to fall back to something less clean.
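A quick demonstration of that eager evaluation, using a counter to show the fallback is computed even when it is not needed (the helper names are made up for the example):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class EagerCoalesceDemo {
    static final AtomicInteger expensiveCalls = new AtomicInteger();

    @SafeVarargs
    static <T> T coalesce(T... ts) {
        for (T t : ts)
            if (t != null)
                return t;
        return null;
    }

    static String expensiveDefault() {
        expensiveCalls.incrementAndGet(); // pretend this is a costly computation
        return "default";
    }

    public static void main(String[] args) {
        String result = coalesce("already-present", expensiveDefault());
        System.out.println(result);               // prints "already-present"
        System.out.println(expensiveCalls.get()); // prints 1: the argument was evaluated anyway
    }
}
```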

In Java 8 thanks to lambdas & method references we can do this lazily.

    public static <T> T coalesce(Supplier<T>... ts) {
        return asList(ts)
            .stream()
            .map(t -> t.get())
            .filter(t -> t != null)
            .findFirst()
            .orElse(null);
    }

    @Test
    public void should_return_first_non_null_value() {
        Person nullName = new Person(null);
        Person bob = new Person("bob");
        Person barbara = new Person("barbara");
        assertEquals("bob", coalesce(nullName::name, bob::name, barbara::name));
    }

Here we pass in suppliers for the values rather than the values themselves. First we invoke the supplier to get the value, then filter it out if it is null (as we are looking for the first non-null value), and then return the first matching – meaning that we do not look farther through the list of passed values than we need to.

We can demonstrate that we do not invoke unnecessary methods

    @Test
    public void should_be_lazy() {
        Person bob = new Person("bob");
        Person angryPerson = new Person("angry") {
            @Override public String name() {
                fail("Should not have asked for the angry person's name");
                return "angry";
            }
        };
        assertEquals("bob", coalesce(bob::name, angryPerson::name));
    }

Here we never invoke the name method on angryPerson because bob had a non-null name.

If we want to do something more complicated than calling a Supplier-like method we can always use lambdas

    @Test
    public void should_be_able_to_use_lambdas() {
        assertEquals("bob", coalesce(() -> new Person("bob").name(), () -> new Person("barbara").name()));
    }

Or, if people ever stop returning nulls all over the place then you can of course do the same with Optionals

The AnotherSupplier interface is just to work around Type Erasure terribleness.

    @Test
    public void should_be_able_to_use_optionals() {
        assertEquals("bob", coalesce(
                () -> Optional.<String>empty(),
                () -> Optional.of(new Person("bob").name()),
                () -> Optional.of(new Person("barbara").name())
        ).get());
    }

    interface AnotherSupplier<T> extends Supplier<T> {}

    public static <T> Optional<T> coalesce(AnotherSupplier<Optional<T>>... ts) {
        return asList(ts)
                .stream()
                .map(t -> t.get())
                .filter(t -> t.isPresent())
                .findFirst()
                .orElse(Optional.empty());
    }

By the way – C# provides an operator for doing this: the null-coalescing operator ??.

Code on github

Posted by & filed under Java.

Today someone asked how to verify that only your stubbed interactions occur, and no others (when using Mockito).

I have heard this asked quite often, especially by people used to JMock, where mocks are strict by default.

I’d be interested in knowing if there’s an out of the box way of doing this.
The normal way to do this is to verify all the calls you expect to happen and then verify no more interactions occur.

This often seems like unnecessary duplication with the “when” stubbing. You end up with something like

    @Test
    public void exampleOfRedundantVerify() throws Exception {
        Duck duck = mock(Duck.class);
        when(duck.quack()).thenReturn("quack");

        assertEquals("quack", duck.quack());

        //Why do we need to do this?
        verify(duck).quack();
        verifyNoMoreInteractions(duck);
    }
If you omit the two verify lines then the test will fail, despite us stubbing quack with “when”. It would be nice to remove this duplication.

Now, at this point someone will probably point out that often verifyNoMoreInteractions, and even multiple verifications in a test can be a sign of an inflexible test that is easy to break with minor implementation changes. However, sometimes you really do want to assert that a collaborator is only used in a specific way. You might also temporarily want the mocks to be more vocal about how they are used, in order to help diagnose why a test is failing.

So how can we make this better with Mockito? Mockito provides an Answer.
To implement Answer you simply have to implement

    public T answer(InvocationOnMock invocation) throws Throwable

This is called when you invoke a method on a mock.

Answer is an interface that Mockito provides to allow you to specify the response to a method invocation on a Mock. It gives a bit more power than simply returning a value. For instance you can use it to capture method parameters passed to stubbed method calls for later assertions.

Answer is useful for solving this problem because we can specify a default Answer that a mock will use whenever a method invocation has not been stubbed. In our case we want the test to fail, so we can simply throw an Exception with some information about the unexpected invocation.

Here’s what we can enable

    @Test
    public void exampleStubbedAll() throws Exception {
        Duck duck = strictMock(Duck.class);
        when(duck.quack()).thenReturn("quack");
        verifyNoUnstubbedInteractions(duck);

        duck.quack(); // only stubbed methods are called, so this passes
    }

    @Test(expected = NotStubbedException.class)
    public void exampleNotStubbedAll() throws Exception {
        Duck duck = strictMock(Duck.class);
        when(duck.quack()).thenReturn("quack");
        verifyNoUnstubbedInteractions(duck);

        duck.waddle(); // an unstubbed method, so NotStubbedException is thrown
    }

Here the first test only performs stubbed operations on the mock, so passes. The second test calls an unstubbed method and an Exception is thrown.

How is it implemented? First we create a static method to create ourselves a mock. The second argument here is the default answer, Mockito will invoke the “answer” method on this handler for every unstubbed invocation.

    public static <T> T strictMock(Class<T> cls) {
        return Mockito.mock(cls, new StrictMockHandler());
    }

The answer implementation simply throws an Exception, if it is in Strict mode. We need to provide a toggle for “strictness” so that our when(mock.quack()) invocation during stubbing does not cause an Exception to be thrown. I believe this is necessary without horrible hacks like looking back up the stack trace to see where it is called (Or does Mockito maintain some global state about the stubbing context?)

    public static class StrictMockHandler implements Answer {
        public boolean strict = false;

        public Object answer(InvocationOnMock invocation) throws Throwable {
            if (strict) throw new NotStubbedException(invocation.getMethod().getName());
            return null;
        }
    }

Finally, we provide a static method for toggling the strictness after our stubbings, this pulls out the default Answer from the mock using a utility that Mockito provides.

    public static void verifyNoUnstubbedInteractions(Object mock) {
        StrictMockHandler handler = ((StrictMockHandler) new MockUtil().getMockHandler(mock).getMockSettings().getDefaultAnswer());
        handler.strict = true;
    }

I don’t believe this can easily work with @Mock annotation based mock creation. You can pass in a different answer by doing @Mock(answer=Answers.RETURNS_SMART_NULLS), but this is limited to the values on the Answers enum.

To do this with annotations I think you’d have to create a JUnit @Rule or Runner that handles a new annotation, maybe @StrictMock instead of @Mock.

See the implementation and sample tests on github.

Posted by & filed under Java.

As with try-as-expression, there are many other language features we can simulate with Lambdas.

Another example is removing the cast commonly needed with instanceof. Kotlin has a nice feature called smart casts, which allows you to do

    if (x instanceof Duck) {
        x.quack(); // x can be used as a Duck here, without a cast
    }

Where x is “casted” to Duck within the block.

I previously blogged how to simulate this in older Java using dynamic proxies.

Now in Java 8 you can do it in a slightly less terrible way, and also make it an expression at the same time.

    @Test
    public void should_return_value_when_input_object_is_instance() {
        Object foo = "ffoo";
        String result = when(foo).instanceOf(String.class)
                .then(s -> s.substring(1))
                .otherwise("bar"); // fallback when foo is not a String
        assertEquals("foo", result);
    }

More examples and implementation on github.

Posted by & filed under Java.

I’ve been familiarising myself with the new Java 8 language features. It’s great how much easier it is to work around language limitations now that we have lambdas.

One annoyance with Java is that try blocks cannot be used as expressions.

We can do it with “if” conditionals using the ternary operator.

    String result = condition ? "this" : "that";

But you cannot do the equivalent with a try block.

However, it’s fairly easy now that we have lambdas.

    @Test public void should_return_try_value() {
        String result = Try(() -> {
            return "try";
        }).Catch(NullPointerException.class, e -> {
            return "catch";
        });
        assertEquals("try", result);
    }

Code and more tests on github

Posted by & filed under XP.

There was a thread about pair programming on the London Java Community mailing list last week. I tried to contribute to the discussion there, but the mailing list doesn’t receive my replies. So I’ll post my thoughts here instead.

I have been pair programming most days in an XP team for the past 3 years. These are some of my experiences/thoughts. Obviously I enjoy pairing or I wouldn’t still be doing it, so I am biased. This is all entirely anecdotal (based on my experience).

Why work in pairs?

Code reviews are terrible

A common “alternative” to pairing is code reviews. One of the reasons I like pairing is that it fixes the problems I have experienced with code reviews.

Code reviews are commonly implemented with flaws including

  1. The feedback cycle is long

    A developer may work on a change for a day or two before submitting code for a review.

    This means they have invested a considerable amount of time and effort into a particular approach, which makes it hard not to take criticism personally.

    It can also waste a lot of time re-doing changes, or worse – leads to the temptation to let design issues slip through the code review because going back and re-doing the work would be too costly.

  2. Focus on micro code quality

    Sometimes only a diff is reviewed. This seems to promote nitpicking about formatting, naming conventions, guard clauses vs indentation and so on.

    These issues may be important, but I feel they’re less so than the effect of the changes on the design of the system as a whole. Does the changeset introduce more complexity into the model, or simplify it? Does it provide more insight into the model that suggests any significant refactorings?

    I find that pairing, when implemented well, can avoid these problems. The feedback loop couldn’t be any tighter, as your pair is right there with you as you write the code. The navigator is free to consider the bigger-picture effect of the changes.

It’s social

Programming can be isolating. You sit in a room all day staring at a computer screen. With pair programming you get to write code all day and still talk to people at the same time.

It is faster

It can often seem slower than working alone, but in my experience of timing tasks it actually ends up taking less time when pairing. It’s easy to underestimate how long you can be blocked on small problems when working alone; this doesn’t happen as often when pairing. Pairing also keeps you focused and stops you getting distracted by IRC or news sites.

It produces higher quality code

A lot of defects get noticed by the navigator that would have slipped through to a later stage. The temptation to take shortcuts or not bother with a refactoring is reduced because you’re immediately accountable to your pair. The caveat to this is that conceptual purity can be reduced by rotation.

How to pair well

Share the roles, swap regularly and be flexible

No-one likes to sit and watch while someone else types for hours on end. If you hog the keyboard, then your pair may lose concentration or not follow what is going on. Swapping roles helps to remain focused, and provides a change. It’s sometimes easier to swap roles when the navigator has an idea they want to explore or a suggestion that’s quicker to communicate through code than verbally.

Use with TDD.

TDD provides a natural rhythm that helps pairing. If you find that one person is driving too much then one way of restoring the natural rhythm is for one half of the pair to write a test and the other to implement, swapping again for each refactoring stage. It’s best to be flexible rather than stick to this rigidly. Sometimes it makes sense to write two or three testcases at once rather than limiting yourself to one, while you’re discussing possible states/inputs.

TDD also helps to do just enough generalisation. When working in a pair it can be easy to talk yourself into doing unnecessary and unhelpful work to make an implementation ever more general and abstract. I find that often, when someone suggests a more general interface to enable potential future re-use, it turns out to never be used again. Abstraction for the sake of abstraction can also make the intent of the code less clear.

TDD’s Red/Green/Refactor stages help to refactor for immediately valuable re-use within the existing codebase, but the act of writing tests for features you don’t actually need helps you to consider carefully whether it really is worthwhile.

Communicate constantly

You need to be constantly talking about what you’re doing. Questioning and validating your approach, considering corner cases etc.

Rotate regularly

Swapping pairing partners regularly helps to spread knowledge of how things are implemented around the team. Rotation also often provides further incentive to improve the implementation, as a fresh set of eyes will spot issues that the previous pair had been blind to. It also means you’re less likely to get fed up with working so closely with the same person for an extended period of time.

On more complex or longer tasks, it can be useful for one person to remain on the same task for two or three days, to ensure there is some continuity despite the rotation.

Use with shared ownership

Pairing enforces shared ownership as there’s never just one person who has worked on a particular feature. In order to rotate pairing partners, everyone needs to be free to work on any part of the codebase, regardless of who it was written by.

When not to pair

When spiking a solution

Pairing works well to ensure that a feature is implemented to a high standard, when both people have a reasonable idea how to go about implementing it. It is not good for exploring ways to implement something that is unfamiliar. It’s easier to find a solution to an unknown problem when working alone, where you can concentrate intensely and have uninterrupted thought. This does mean you need to break down tasks into an initial spike step, and a second implementation step.

When trying out new things

It’s important to have time when not pairing to allow for innovation and exploring unconstrained ideas. Otherwise you can end up with groupthink, constantly playing it safe with techniques that everyone has used before and knows work.

On trivial changes

Having two people work on making copy changes is probably a waste of resources. Deciding where to draw the line is tricky. I think if the change needs more than a couple of tests it is a good idea to pair on it.

If it’s unworkable for your team

There are lots of contexts in which pairing is simply not possible: for example, distributed teams in different timezones, or open source projects with sporadic contributions from a large number of people.

If you don’t enjoy it

Pairing is hard work; it’s certainly not for everyone.

It’s tiring

You have to be alert for long periods of time. You can’t drift off or distract yourself in the middle of a pairing session in the same way that you would when working alone.

It requires patience

It can often feel like progress is slower than it would be when working alone. If you’re not driving, it can be frustrating to watch someone type slowly or fail to use keyboard shortcuts. If you are driving, you have to slow yourself down to constantly discuss what you’re doing and why.

It can reduce conceptual purity

This is more down to rotation than pairing itself: when one person has implemented a feature, you can often see a single vision for that feature when reading the code. Some of this seems to be lost with regular rotation, just as a novel would be slightly odd if each chapter were written by a different author.

It can stop you doing things you want to do

It can be enjoyable to have the freedom to divert and work on things you feel are important but that aren’t really relevant to the task at hand. This tends to happen less when pairing, because you’d both have to see the diversion as important.

There can be personality clashes

Is pairing worthwhile?

This comes up in any discussion on pairing. Having two developers working on a problem doubles the cost. Wouldn’t they have to work at double the speed in order for pairing to make sense?

Well, no.

Most of the cost of a feature is incurred after it has been developed: in support, maintenance, and adapting it to future change. Any improvements you can make at the point of development to reduce defects and make the software easier to maintain should therefore yield big benefits in the future.

Then there are other benefits to the team.

  • It takes less time for new developers to come up to speed with the codebase, and technologies in use, when they are pairing with developers who already know what they are doing.
  • Implementation details of any part of the system are known by at least 2 people, even if they have failed to communicate them to the rest of the team. This reduces the team’s bus factor, and makes it less painful when a team member decides to move on.
  • Shared ownership is unavoidable. There’s no single person to blame for any problem. Failures are team failures and fixing things is everyone’s responsibility. This means the team gets to focus on how to stop things going wrong in the future.


In summary:

  • I enjoy pairing because it gives the tightest feedback loop, and it’s social.
  • Pairing is good for teams
  • Not all tasks are suitable for pairing.
  • Pairing well is hard
  • Pairing is not for everyone

Posted by & filed under Java.

One of the nice features of Nashorn is that you can write shell scripts in JavaScript.

It supports #!s, # comments, reading arguments, and everything there’s a Java library for (including executing external processes, obviously).

Here’s an example:

#this is a comment
print('I am running from ' + __FILE__);
var name = arguments.join(' ');
print('Hello ' + name);
var runtime = java.lang.Runtime.getRuntime();
runtime.exec('xmessage hello ' + name);

Nashorn also comes with an interactive shell “jjs”, which is great for trying things out quickly.

If you want to run the scripts with Java 7 instead of Java 8 you’ll need to add Nashorn to the Java boot classpath, by modifying the “bin/nashorn” script to append -Xbootclasspath/a:$NASHORN_HOME/dist/nashorn.jar to the java invocation.


Posted by & filed under Java.

This weekend I have been playing with Nashorn, the new JavaScript engine coming in Java 8.

As an exercise I implemented a JUnit runner for JavaScript unit tests using Nashorn. Others have implemented similar wrappers; we even have one at work. None of the ones I have found do everything I want, though, and it was a fun project.

Because the JavaScript tests are JUnit tests, they “just work” with existing JUnit tools like Eclipse, and as part of your build with Ant or Maven. The Eclipse UI shows every test function and a useful error trace (line numbers only work with Nashorn).

There are also lots of reasons you wouldn’t want to do this – your tests have to work in a very Java-y way, and you miss out on great features of JavaScript testing tools. There’s also no DOM, so you may end up having to stub a lot if you are testing code that interacts with the DOM. This can be a good thing and encourage you not to couple code to the DOM.

Here’s what a test file looks like.

tests({
	thisTestShouldPass : function() {
		console.log("One == One");
	},
	thisTestShouldFail : function() {
		console.log("Running a failing test");;
	},
	testAnEqualityFail : function() {
		console.log("Running an equality fail test");
		assert.assertEquals("One", "Two");
	},
	objectEquality : function() {
		var a = { foo: 'bar', bar: 'baz' };
		var b = a;
		assert.assertEquals(a, b);
	},
	integerComparison : function() {
		jsAssert.assertIntegerEquals(4, 4);
	},
	failingIntegerComparison : function() {
		jsAssert.assertIntegerEquals(4, 5);
	}
});

You can easily extend the available test tools using either JavaScript or Java. To have the failure reason shown in JUnit tools, you just need to ensure a Java AssertionError is thrown at some point.
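To illustrate, here is a sketch of how a custom assertion like the jsAssert.assertIntegerEquals used in the test file above might be implemented on the Java side (the class name and message format are my assumptions, not taken from the project's source):

```java
// Hypothetical sketch of a custom assertion helper. Throwing
// java.lang.AssertionError is all that's needed for JUnit tools
// to display the failure reason.
public class JsAssert {
    public static void assertIntegerEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        assertIntegerEquals(4, 4); // passes silently
        try {
            assertIntegerEquals(4, 5);
        } catch (AssertionError e) {
            System.out.println("failure reported: " + e.getMessage());
        }
    }
}
```

Expose the helper to your scripts (for example as a global binding when the engine is set up) and any JUnit runner will report the failure message.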

The tests themselves are executed from Java by returning a list of Runnables from JavaScript.

var tests = function(testObject) {
	var testCases = new java.util.ArrayList();
	for (var name in testObject) {
		if (testObject.hasOwnProperty(name)) {
			testCases.add(new TestCase(name, testObject[name]));
		}
	}
	return testCases;
};

Where TestCase is a Java class with a constructor like:

  public TestCase(String name, Runnable testCase) {

Nashorn/Rhino will both convert a JavaScript function to a Runnable automatically :)
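Since the post only shows the constructor signature, here is a minimal sketch of what such a TestCase holder could look like (the field and accessor names are assumptions, not the project's actual source); a Java lambda stands in for the JavaScript function that the script engine would convert:

```java
// Minimal sketch of a TestCase holder class (names are illustrative).
public class TestCase {
    private final String name;
    private final Runnable testCase;

    public TestCase(String name, Runnable testCase) { = name;
        this.testCase = testCase;
    }

    public String getName() {
        return name;
    }

    public void run() {;
    }

    public static void main(String[] args) {
        // A lambda stands in here for the JavaScript function that
        // Nashorn/Rhino would convert to a Runnable automatically.
        TestCase example = new TestCase("exampleTest",
                () -> System.out.println("running exampleTest"));;
    }
}
```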

On the Java side we just create a Test Suite that lists the JavaScript files containing our tests, and tell JUnit we want to run it with a custom Runner.

public class ExampleTestSuite {

Our Runner has to create a hierarchy of JUnit Descriptions: Suite -> JS Test File -> JS Test Function

The Runner starts up a Nashorn or Rhino script engine, evaluates the JavaScript files to get a set of TestCases to run, and then executes them.

ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine nashorn = factory.getEngineByName("nashorn");
if (nashorn != null) return nashorn;
// Fall back to Rhino (registered as "JavaScript" on Java 7) if Nashorn is unavailable.
return factory.getEngineByName("JavaScript");
You can quickly implement stubbing that also integrates with your Java JUnit tools.

Here’s the test code from the above screenshot.

var stub = newStub();
underTest.collaborator = stub;
tests({
	doesSomethingImportant_ThisTestShouldFail: function() {
		underTest.doesSomethingImportant();
		stub.assertCalled({ name: 'importantFunction', args: ['wrong', 'args'] });
	},
	doesSomethingImportant_ShouldDoSomethingImportant: function() {
		underTest.doesSomethingImportant();
		stub.assertCalled({ name: 'importantFunction', args: ['hello', 'world'] });
	}
});

To implement the stub you can use __noSuchMethod__ to capture interactions and store them for later assertions.

var newStub = function() {
	return {
		called: [],
		__noSuchMethod__: function(name, arg0, arg1, arg2, arg3, arg4, arg5) {
			var desc = {
				name: name,
				args: []
			};
			// Rhino passes the invocation arguments as an array in arg0;
			// Nashorn passes them as individual arguments after the name.
			var rhino = arg0 && arg0.length && typeof arg1 == "undefined";
			var args = rhino ? arg0 : arguments;
			for (var i = rhino ? 0 : 1; i < args.length; i++) {
				if (typeof args[i] == "undefined") continue;
				desc.args.push(args[i]);
			}
			this.called.push(desc);
		},
		assertCalled: function(description) {
			var fnDescToString = function(desc) {
				return + "(" + desc.args.join(",") + ")";
			};
			if (this.called.length < 1) {'No functions called, expected: ' + fnDescToString(description));
			}
			for (var i = 0; i < this.called.length; i++) {
				var fn = this.called[i];
				if ( == {
					if (description.args.length != fn.args.length) continue;
					var matches = true;
					for (var j = 0; j < description.args.length; j++) {
						if (fn.args[j] != description.args[j]) matches = false;
					}
					if (matches) return;
				}
			}'No matching functions called. expected: ' +
					'<' + fnDescToString(description) + '>' +
					' but had ' +
					'<' +'|') + '>'
			);
		}
	};
};

The code is on GitHub.

It is backwards compatible with Rhino (the JavaScript engine shipped with current and older versions of Java). Most things seem just as possible in Rhino, but Nashorn is easier to work with due to its meaningful error messages.

You can also run Nashorn on Java 7 using a backport, adding Nashorn to the boot classpath with -Xbootclasspath/a:$NASHORN_HOME/dist/nashorn.jar