Posted by & filed under Java.

The builder pattern is often used to construct objects with many properties. It makes initialisations easier to read by naming parameters at the call site, while helping you ensure that only valid objects can be constructed.

Builder implementations tend either to rely on the constructed object being mutable, setting fields as they go, or to duplicate all the settable fields within the builder.

Since Java 8, I find myself frequently creating lightweight builders by defining an interface for each initialisation stage.

Let’s suppose we have a simple immutable Person type like

static class Person {
    public final String firstName;
    public final String lastName;
    public final Centimetres height;
 
    private Person(String firstName, String lastName, Centimetres height) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.height = height;
    }
}

I’d like to be able to construct it using a builder, so I can see at a glance which parameter is which.

Person benji = person()
    .firstName("benji")
    .lastName("weber")
    .height(centimetres(182));

All that is needed to support this is three single-method interfaces to define each stage, and a method to create the builder.

The three interfaces are as follows. Each has a single method, so is compatible with a lambda, and each method returns another single method interface. The final interface returns our completed Person type.

interface FirstNameBuilder {
    LastNameBuilder firstName(String firstName);
}
interface LastNameBuilder {
    HeightBuilder lastName(String lastName);
}
interface HeightBuilder {
    Person height(Centimetres height);
}

Now we can create a person() method which creates the builder using lambdas.

public static FirstNameBuilder person() {
    return firstName -> lastName -> height -> new Person(firstName, lastName, height);
}

While it is still quite verbose, this builder definition is barely longer than simply adding getters for each of the fields.

Suppose we wanted to be able to give people’s heights in millimetres as well as centimetres. We could simply add a default method to the HeightBuilder interface that does the conversion.

interface HeightBuilder {
    Person height(Centimetres height);
    default Person height(MilliMetres millis) {
        return height(millis.toCentimetres());
    }
}
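
With that in place callers can supply either unit. Assuming a millimetres(...) factory analogous to centimetres(...), usage might look like:

Person benji = person()
    .firstName("benji")
    .lastName("weber")
    .height(millimetres(1820)); // resolved by the default method, which converts to centimetres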

We can use the same approach to present different construction “paths” without making our interfaces incompatible with lambdas (which is necessary to keep it concise).

Let’s look at a more complex example of a “Burger” type. We wish to allow construction of burgers, but if the purchaser is a vegetarian we would like to restrict the available choices to only vegetarian options.

The simple meat-eater case looks exactly like the example above:

Burger lunch = burger()
    .with(beef())
    .and(bacon());

class Burger {
    public final Patty patty;
    public final Topping topping;
 
    private Burger(Patty patty, Topping topping) {
        this.patty = patty;
        this.topping = topping;
    }
 
    public static BurgerBuilder burger() {
        return patty -> topping -> new Burger(patty, topping);
    }
 
    interface BurgerBuilder {
        ToppingBuilder with(Patty patty);
    }
    interface ToppingBuilder {
        Burger and(Topping topping);
    }
}

Now let’s introduce a vegetarian option. It will be a compile failure to put meat into a vegetarian burger.

Burger lunch = burger()
    .vegetarian()
    .with(mushroom())
    .and(cheese());
 
Burger failure = burger()
    .vegetarian()
    .with(beef()) // fails to compile. Beef is not vegetarian.
    .and(cheese());

To support this we add a default method to our BurgerBuilder that returns a new VegetarianBuilder, which disallows meat.

interface BurgerBuilder {
    ToppingBuilder with(Patty patty);
    default VegetarianBuilder vegetarian() {
        return patty -> topping -> new Burger(patty, topping);
    }
}
interface VegetarianBuilder {
    VegetarianToppingBuilder with(VegetarianPatty main);
}
interface VegetarianToppingBuilder {
    Burger and(VegetarianTopping topping);
}

After you have expressed your vegetarian preference, the builder will no longer present you with the option of choosing meat.

Now, let’s add the concept of free toppings. After choosing the main component of the burger we can choose to restrict ourselves to free toppings. In this example Tomato is free but Cheese is not. It will be a compile failure to add cheese as a free topping. This time the divergent option is not the first in the chain.

Burger lunch = burger()
    .with(beef()).andFree().topping(tomato());
 
Burger failure = burger()
    .with(beef()).andFree().topping(cheese()); // fails to compile. Cheese is not free

We can support this by adding a new default method to our ToppingBuilder, which in turn calls the abstract method, meaning we don’t have to repeat the entire chain of lambdas required to construct the burger again.

interface ToppingBuilder {
    Burger and(Topping topping);
    default FreeToppingBuilder andFree() {
        return topping -> and(topping);
    }
}
interface FreeToppingBuilder {
    Burger topping(FreeTopping topping);
}

Here’s the full code from the burger example, with all the types involved.

class Burger {
    public final Patty patty;
    public final Topping topping;
 
    private Burger(Patty patty, Topping topping) {
        this.patty = patty;
        this.topping = topping;
    }
 
    public static BurgerBuilder burger() {
        return patty -> topping -> new Burger(patty, topping);
    }
 
    interface BurgerBuilder {
        ToppingBuilder with(Patty patty);
        default VegetarianBuilder vegetarian() {
            return patty -> topping -> new Burger(patty, topping);
        }
    }
    interface VegetarianBuilder {
        VegetarianToppingBuilder with(VegetarianPatty main);
    }
    interface VegetarianToppingBuilder {
        Burger and(VegetarianTopping topping);
    }
    interface ToppingBuilder {
        Burger and(Topping topping);
        default FreeToppingBuilder andFree() {
            return topping -> and(topping);
        }
    }
    interface FreeToppingBuilder {
        Burger topping(FreeTopping topping);
    }
 
}
 
interface Patty {}
interface BeefPatty extends Patty {
    public static BeefPatty beef() { return null;}
}
interface VegetarianPatty extends Patty, Vegetarian {}
interface Tofu extends VegetarianPatty {
    public static Tofu tofu() { return null; }
}
interface Mushroom extends VegetarianPatty {
    public static Mushroom mushroom() { return null; }
}
 
interface Topping {}
interface VegetarianTopping extends Vegetarian, Topping {}
interface FreeTopping extends Topping {}
interface Bacon extends Topping {
    public static Bacon bacon() { return null; }
}
interface Tomato extends VegetarianTopping, FreeTopping {
    public static Tomato tomato() { return null; }
}
interface Cheese extends VegetarianTopping {
    public static Cheese cheese() { return null; }
}
 
interface Omnivore extends Vegetarian {}
interface Vegetarian extends Vegan {}
interface Vegan extends DietaryChoice {}
interface DietaryChoice {}

When would(n’t) you use this?

Often a traditional builder just makes more sense.

If you want your builder to be used to supply an arbitrary number of fields in an arbitrary order then this isn’t for you.

This approach restricts field initialisation to a specific order. This can be a feature – sometimes it’s useful to ensure that some parameters are supplied first, e.g. to ensure mandatory parameters are supplied without the boilerplate of the typesafe builder pattern. It’s also easier to relax the ordering later, should you need to, than it is to impose one.

If your builder forms part of a public API then this probably isn’t for you.

Traditional builders are easier to change without breaking existing uses. This approach makes it easy to change uses with refactoring tools, provided you own all the affected code and can make changes to it. To change behaviour without breaking consumers in this approach you would have to restrict yourself to adding default methods rather than modifying existing interfaces.

On the other hand, by being restrictive in what it allows to compile, this approach helps people using your code to use it in the way you intended.

Where I do find myself using this approach is building lightweight fluent interfaces both to make the code more readable, and to help out my future self by letting the IDE autocomplete required field/code blocks. For instance, when recently implementing some automated performance tests, where we needed a warmup and a rampup period, we used one of these to prevent us from forgetting to include them.

When things are less verbose, you end up using them more often, in places you might not have bothered otherwise.

Posted by & filed under Java.

Twice recently we have had “fun” trying to get things using HK2 (Jersey) to play nicely with code built using Guice and Spring. This has renewed my appreciation for code written without DI frameworks.

The problem with (many) DI frameworks.

People like to complain about Spring. It’s an easy target, but often the argument is a lazy “I don’t like XML, it’s verbose, and not fashionable, unlike JSON, or YAML, or the flavour of the month”. This conveniently ignores that it’s possible to do entirely XML-less Spring. With JavaConfig it’s not much different to other frameworks like Guice. (Admittedly, this becomes harder if you try to use other parts of Spring like MVC or AoP)

My issue with many DI frameworks is the complexity they can introduce. It’s often not immediately obvious what instance of an interface is being used at runtime without the aid of a debugger. You need to understand a reasonable amount about how the framework you are using works, rather than just the programming language. Additionally, wiring errors are often only visible at runtime rather than compile time, which means you may not notice the errors until a few minutes after you make them.

Some frameworks also encourage you to become very dependent on them. If you use field injection to have the framework magically make dependencies available for you with reflection, then it becomes difficult to construct things without the aid of the framework – for example in tests, or if you want to stop using that framework.

Even if you use setter or constructor injection, the ease with which the framework can inject a large number of dependencies for you allows you to ignore the complexity introduced by having excessive dependencies. It’s still a pain to construct an object with 20 dependencies without the framework in a test, even with constructor or setter injection. DI frameworks can shield us from the pain that is useful feedback that the design of our code is too complex.

What do I want when doing dependency injection? I have lots of desires, but these are some of the most important to me:

  • Safety – I would like it to be a compile time error to fail to satisfy a dependency
  • Testability – I want to be able to replace dependencies with test doubles where useful for testing purposes
  • Flexibility – I would like to be able to alter the behaviour of my program by re-wiring my object graph without having to change lots of code

It’s also nice to be able to build small lightweight services without needing to add lots of third party dependencies to get anything done. If we want to avoid pulling in a framework, how else could we achieve our desires? There are a few simple techniques we can use which only require pure Java, some of which are much easier in Java 8.

I’ve tried to come up with a simple example that might exist if we were building a monitoring system like Nagios. Imagine we have a class that is responsible for notifying the on call person for your team when something goes wrong in production.

Manual Constructor Injection

class IncidentNotifier {
    final Rota rota; 
    final Pager pager;
 
    IncidentNotifier(Pager pager, Rota rota) {
        this.pager = pager;
        this.rota = rota;
    }
 
    void notifyOf(Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident + " happened");
    }
}

I would expect that the Pager and Rota will have dependencies of their own. What if we want to construct this ourselves? It’s still fairly straightforward, and not much more verbose to do explicitly in Java.

public static void main(String... args) {
    IncidentNotifier notifier = new IncidentNotifier(
        new EmailPager("smtp.example.com"),
        new ConfigFileRota(new File("/etc/my.rota"))
    );
}

The advantage of this over automatically injecting them with a framework and @Inject annotations or an XML configuration is that doing it manually like this allows the compiler to warn us about invalid configuration. To omit one of the constructor arguments is a compile failure. To pass a dependency that does not satisfy the interface is also a compile failure. We find out about the problem without having to run the application.
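
For example, given the types above, both of the following are rejected by the compiler (shown commented out, since they don’t compile):

// new IncidentNotifier(new EmailPager("smtp.example.com"));  // compile error: missing the Rota argument
// new IncidentNotifier(
//     new ConfigFileRota(new File("/etc/my.rota")),
//     new EmailPager("smtp.example.com"));                   // compile error: Rota and Pager swapped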

Let’s increase the complexity slightly. Suppose we want to re-use the ConfigFileRota instance within several object instances that require access to the Rota. We can simply extract it as a variable and refer to it as many times as we wish.

public static void main(String... args) {
    Rota rota = new ConfigFileRota(new File("/etc/my.rota"));
    Pager pager = new EmailPager("smtp.example.com");
    IncidentNotifier incidentNotifier = new IncidentNotifier(
        pager,
        rota
    );
    OnCallChangeNotifier changeNotifier = new OnCallChangeNotifier(
        pager, 
        rota
    );
}

Now this will of course get very long when we start having a significant amount of code to wire up, but I don’t see this as an argument against doing the wiring manually in code. The wiring of objects is just as much code as the implementation.

There is behaviour emergent from the way in which we wire up object graphs. Behaviour we might wish to test, and have as much as possible checked by the compiler.

Wanting this configuration to be separated from code into configuration files to make it easier to change may be a sign that you are not able to release code often enough. There is little need for configuration outside your deployable artifact if you are practising Continuous Deployment.

Remember, we have the full capabilities of the language to organise the resulting wiring code. We can create classes that wire up conceptually linked objects, or modules.
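
For instance, a minimal sketch of a hand-rolled “module” class grouping the notification wiring (the name NotificationModule is illustrative, not from the original code):

class NotificationModule {
    // Shared instances, wired once and reused by both notifiers.
    private final Rota rota = new ConfigFileRota(new File("/etc/my.rota"));
    private final Pager pager = new EmailPager("smtp.example.com");

    IncidentNotifier incidentNotifier() {
        return new IncidentNotifier(pager, rota);
    }

    OnCallChangeNotifier changeNotifier() {
        return new OnCallChangeNotifier(pager, rota);
    }
}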

Method-Reference Providers

Now suppose we want to send the notification time in the page. We might do something like

pager.page(onCall, "Oh noes, " + incident + " happened at " + new DateTime());

Only now it is hard to test, because the time in the message will change each time the tests run. Previously we might have approached this problem by creating a Factory type for time, maybe a Clock type. However, in Java 8 we can just use constructor method references, which significantly reduces the boilerplate.

IncidentNotifier notifier = new IncidentNotifier(
    new EmailPager("smtp.example.com"),
    new ConfigFileRota(new File("/etc/my.rota")),
    DateTime::new
);
 
class IncidentNotifier {
    final Rota rota; 
    final Pager pager;
    final Supplier<DateTime> clock;
 
    IncidentNotifier(Pager pager, Rota rota, Supplier<DateTime> clock) {
        this.pager = pager;
        this.rota = rota;
        this.clock = clock;
    }
 
    void notifyOf(Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident + " happened at " + clock.get());
    }
}

Test Doubles

It’s worth pointing out at this point how easy Java 8 makes it to replace this kind of dependency with a test double. If your collaborators are single-method interfaces then we can cleanly stub out their behaviour in tests without using a mocking framework.

Here’s a test for the above code that asserts that it invokes the page method and also checks the argument – both in the same test to simplify the example. The only magic is the two-line static method on the exception. I have used no dependencies other than JUnit.

@Test(expected=ExpectedInvocation.class)
public void should_notify_me_when_I_am_on_call() {
    DateTime now = new DateTime();
    Person benji = person("benji");
    Rota rota = regardlessOfTeamItIs -> benji;
    Incident incident = incident(team("a team"), "some incident");
 
    Pager pager = (person, message) -> ExpectedInvocation.with(() ->
        assertEquals("Oh noes, some incident happened at " + now, message)
    );
 
    new IncidentNotifier(pager, rota, () -> now).notifyOf(incident);
}
static class ExpectedInvocation extends RuntimeException{
    static void with(Runnable action) {
        action.run();
        throw new ExpectedInvocation();
    }
}

As you can see the stubbings are quite concise thanks to most collaborators being single-method interfaces. This probably isn’t going to remove your need to use Mockito or JMock, but stubbing with lambdas is handy where it works.

Partial Application

You might have noticed that the dependencies we inject in the constructor could equally be passed to our notify method; we could pass them around like this. It can even be a static method in this case.

Incident incident = incident(team("team name"), "incident name");
FunctionalIncidentNotifier.notifyOf(
    new ConfigFileRota(new File("/etc/my.rota")),
    new EmailPager("smtp.example.com"),
    DateTime::new,
    incident
);
 
class FunctionalIncidentNotifier {
    public static void notifyOf(Rota rota, Pager pager, Supplier<DateTime> clock, Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident  + " happened at " + clock.get());
    }
}

Passing all the dependencies to every method call like this would make our code difficult to follow, but if we structure the code this way we can partially apply the function to get one with all its dependencies satisfied.

Incident incident = incident(team("team name"), "incident name");
 
Notifier notifier = notifier(Partially.apply(
    FunctionalIncidentNotifier::notifyOf,
    new ConfigFileRota(new File("/etc/my.rota")),
    new EmailPager("smtp.example.com"),
    DateTime::new,
    _
));
 
notifier.notifyOf(incident);

There are just a couple of helpers needed to make this work. First, a Notifier interface that can be created from a generic Consumer<T>; static methods on interfaces come to the rescue here.

interface Notifier {
    void notifyOf(Incident incident);
    static Notifier notifier(Consumer<Incident> notifier) {
        return incident -> notifier.accept(incident);
    }
}

Then we need a way of doing the partial application. There’s no support for this built into Java as far as I am aware, but it’s trivial to implement. We just declare a method that accepts a reference to a consumer with n arguments, and also takes the arguments you wish to apply. I am using an underscore to represent missing values that are still unknown. We could add overloads to allow other parameters to be unknown.

class Partially {
    static <T,U,V,W> Consumer<W> apply(
        QuadConsumer<T,U,V,W> f, 
        T t, 
        U u, 
        V v, 
        MatchesAny _) {
            return w -> f.apply(t,u,v,w);
    }
}
interface QuadConsumer<T,U,V,W> {
    void apply(T t, U u, V v, W w);
}
class MatchesAny {
    public static MatchesAny _;
}

Mixins

Going back to the object-oriented approach, suppose we want our Pager to send emails in production but just print messages to the console when running on our workstation. We can create two implementations of Pager, an EmailPager and a ConsolePager, but we want to choose an implementation based on what environment we are running in. We can do this by creating an EnvironmentAwarePager which decides which implementation to use at runtime.

class EnvironmentAwarePager implements Pager {
    final Pager prodPager;
    final Pager devPager;
 
    EnvironmentAwarePager(Pager prodPager, Pager devPager) {
        this.prodPager = prodPager;
        this.devPager = devPager;
    }
 
    public void page(Person onCall, String message) {
        if (isProduction()) prodPager.page(onCall, message);
        else devPager.page(onCall, message);
    }
 
    boolean isProduction() { ... }
}

But what if we want to test the behaviour of this environment-aware pager, to be sure that it calls the production pager when in production? To do this we need to extract the responsibility of checking whether we are running in the production environment. We could make it a collaborator, but there is another option (which I’ll use for want of a better example) – we can mix in functionality using an interface.

interface EnvironmentAware {
    default boolean isProduction() {
        // We could check for a machine manifest here
        return false;
    }
}

Now our Pager becomes

class EnvironmentAwarePager implements Pager, EnvironmentAware {
    final Pager prodPager;
    final Pager devPager;
 
    EnvironmentAwarePager(Pager prodPager, Pager devPager) {
        this.prodPager = prodPager;
        this.devPager = devPager;
    }
 
    public void page(Person onCall, String message) {
        if (isProduction()) prodPager.page(onCall, message);
        else devPager.page(onCall, message);
    }
}

We can use isProduction without implementing it.

Now let’s write a test that checks that the production pager is called in the production environment. Here we extend EnvironmentAwarePager to override its production-awareness by mixing in the AlwaysInProduction interface. We stub the dev pager to fail the test because we don’t want that to be called, and stub the prod pager to fail the test if not invoked.

@Test(expected = ExpectedInvocation.class)
public void should_use_production_pager_when_in_production() {
    class AlwaysOnProductionPager 
        extends EnvironmentAwarePager 
        implements AlwaysInProduction {
        AlwaysOnProductionPager(Pager prodPager, Pager devPager) {
            super(prodPager, devPager);
        }
    }
 
    Person benji = person("benji");
    Pager prod = (person, message) -> ExpectedInvocation.with(() -> {
        assertEquals(benji, person);
        assertEquals("hello", message);
    });
    Pager dev = (person, message) -> fail("Should have used the prod pager");
 
    new AlwaysOnProductionPager(prod, dev).page(benji , "hello");
 
}
 
interface AlwaysInProduction extends EnvironmentAware {
    default boolean isProduction() { return true; }
}

Using mixins here is a bit contrived, but I struggled to come up with an example that was both not contrived and sufficiently brief to illustrate the point.

Cake Pattern

I mention mixins partly because they lead on to the Cake Pattern.

The cake pattern can make it a bit easier to wire up more complex graphs, when compared to manual constructor injection. While it does also add quite a lot of complexity in itself, we do at least retain a lot of compile-time checking. Failing to satisfy a dependency will be a compilation failure.

Here’s what an example application using cake might look like. It sends a page about an incident specified with command-line arguments.

public class Example {
    public static void main(String... args) {
        ProductionApp app = () -> asList(args);
        app.main();
    }
}
 
interface ProductionApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        EmailPagerProvider,
        ConfigFileRotaProvider,
        DateTimeProvider {}
 
interface MonitoringApp extends
        Application,
        IncidentNotifierProvider {
    default void main() {
        String teamName = args().get(0);
        String incidentName = args().get(1);
        notifier().notifyOf(incident(team(teamName), incidentName));
    }
}
 
interface Application {
    List<String> args();
}

Here we’re using interfaces to specify the components we wish to use in our application. We have a MonitoringApp interface that specifies the entry point behaviour. It sends a notification using the command line arguments. We also have a ProductionApp interface that specifies which components we want to use in this application.

If we want to replace a component – for example, to print messages to the console instead of sending an email when running on our workstation – it’s just a matter of swapping in a different provider:

interface WorkstationApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        ConsolePagerProvider, // This component is different
        ConfigFileRotaProvider,
        DateTimeProvider  {}

This is checked at compile time. If we were not to specify a PagerProvider at all we’d get a compile failure in our main method when we try to instantiate the WorkstationApp. Admittedly, it’s not a very informative message if you don’t know what’s going on (WorkstationApp is not a functional interface, multiple non-overriding abstract methods found in com.benjiweber.WorkstationApp).

For each thing that we want to inject, we declare a provider interface, which can itself rely on other providers, like this:

interface DefaultIncidentNotifierProvider extends 
        IncidentNotifierProvider, 
        PagerProvider, 
        RotaProvider, 
        ClockProvider {
    default IncidentNotifier notifier() { 
        return new IncidentNotifier(pager(), rota(), clock()); 
    }
}
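
The base provider interfaces aren’t shown in the post; a minimal sketch inferred from the methods used above (each simply declares the component it provides, and DateTimeProvider presumably supplies the clock):

interface IncidentNotifierProvider { IncidentNotifier notifier(); }
interface PagerProvider { Pager pager(); }
interface RotaProvider { Rota rota(); }
interface ClockProvider { Supplier<DateTime> clock(); }
interface DateTimeProvider extends ClockProvider {
    default Supplier<DateTime> clock() { return DateTime::new; } // inferred implementation
}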

PagerProvider has multiple mixin-able implementations:

interface EmailPagerProvider extends PagerProvider {
    default Pager pager() { return new EmailPager("smtp.example.com"); }
}
interface ConsolePagerProvider extends PagerProvider {
    default Pager pager() { 
        return (Person onCall, String message) -> 
            System.out.println("Stub pager says " + onCall + " " + message); 
    }
}

As I mentioned above, this pattern starts to add too much complexity for my liking, however neat it may be. Still, it can be a useful technique to use sparingly in parts of your application where manual constructor injection is becoming tedious.

Summary

There are various techniques for doing the kinds of things that we often use DI frameworks to do, just using pure Java. It’s worth considering the hidden costs of using the framework.

The code for the examples used in this post is on Github.

Posted by & filed under Java.

This is a follow up to Pattern matching in Java, where I demonstrated pattern matching on type and structure using Java 8 features. The first thing most people asked is “does it support matching on nested structures?”

The previous approach did not, at least not without creating excessive boilerplate constructors. So here’s another approach that does.

Let’s suppose we have a nested structure representing a customer like this. Illustrated as a JavaScript object literal for clarity.

{
  firstName: "Benji",
  lastName: "Weber",
  address: {
    firstLine: {
      houseNumber: 123,
      roadName: "Some Street"
    },
    postCode: "AB123CD"
  }
}

What if we want to match customers with my name and pull out my house number, road name, and post code? With pattern matching it becomes straightforward.

First we’ll create Java types to represent it such that we can create the above representation like:

Customer customer = customer(
    "Benji", 
    "Weber", 
    address(
        firstLine(123,"Some Street"), 
        "AB123CD"
    )
);

I’ll use the value object pattern I described previously to create these.

Now we just need a way to build up a structure to match against, which retains the properties we want to extract for the pattern matching we previously implemented.

Here’s what we can do. We use underscores to indicate properties we wish to extract rather than match. All other properties are matched.

// Using the customer instance from above
String address = customer.match()
    .when(a(Customer::customer).matching(
        "Benji",
        "Weber",
        an(Address::address).matching(
            a(FirstLine::firstLine).matching(_,_),
            _
        )
    )).then((houseNo, road, postCode) -> houseNo + " " + road + " " + postCode)
    .otherwise("unknown");
 
assertEquals("123 Some Street AB123CD", address);

So how does it work? We get the .match() method by implementing the Case interface on our value types. This interface has a default match() method that returns a match builder, which we can use to specify our cases.

Last time we implemented overloads to the when(..) method such that we could match on types or instances. Now we can re-use that work and add overloads that take a Match reference. e.g.

public <A,B> BiMatchConstructorBuilder<T, A, B> when(BiMatch<T, A, B> matchRef) {
// Here we can know we are matching for missing properties of types A and B
// So we can expect a function to consume these properties that accepts an A and B
    return new BiMatchConstructorBuilder<T, A, B>() {
        public <R> MatchBuilderR<T, R> then(BiFunction<A, B, R> f) {
            // ...
        }
    };
}

The matchRef can capture method references to the properties we want to extract, and then we can apply these method references to the object we are matching against to check for a match.
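
The BiMatch type itself isn’t shown here; a hypothetical sketch of its shape, assuming it simply bundles the partially-specified structure with the two captured property references:

interface BiMatch<T, A, B> {
    T prototype();                   // the partially-specified structure to match against
    Function<T, A> firstProperty();  // extractor for the first wildcard value
    Function<T, B> secondProperty(); // extractor for the second wildcard value
}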

Lastly we simply add a couple of static methods: a(constructor) and an(constructor) for building up our matches, which return a builder that accepts either the constructor arguments, or a wildcard underscore to indicate we want to match and extract that value.

Here are some more examples to help illustrate the idea.

Posted by & filed under Java.

The absence of tuples in Java is often bemoaned. Tuples are fixed-size collections of values of potentially different types. They might look like (“benji”,8,”weber”) – a 3-tuple of a string, a number, and another string. People often wish to use tuples to return more than one value from a method.

Often this is a smell that we need to create a domain type that is more meaningful than a tuple, but in some cases tuples are actually the best tool for the job.

We can pretty easily create our own tuple types, and now that we have Lambdas we can consume them pretty easily as well. There is a little unnecessary verbosity left over from type signatures, but we can avoid it in most cases.

If we want to return a tuple from a method, then by statically importing a “tuple” method that is overloaded for each tuple arity we can do

static TriTuple<String, Integer, String> me() {
    return tuple("benji",9001,"weber");
}

Other than the return type of the method it is fairly concise.

On the consuming side it would be nice to be able to use helpful names for each field in the tuple, rather than the ._1() or .one() accessors that are used in many tuple implementations.

This is easily realised thanks to lambdas. We can simply define a map method on the tuple that accepts a function with the same arity as the tuple, to transform the tuple into something else.

Using the above method now becomes

String name = me()
    .map((firstname, favouriteNo, surname) -> firstname + " " + surname);
 
assertEquals("benji weber", name);

We can even throw checked exceptions on the consuming side if we allow our map method to accept functions that throw exceptions.

String name = me()
    .map((firstname, favouriteNo, surname) -> {
        if (favouriteNo > 9000) throw new NumberTooBigException();
        return firstname;
    });
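
The throwing function type isn’t shown until the full listing below; based on its use there, ExceptionalTriFunction is just a three-argument function whose apply method may throw a checked exception:

interface ExceptionalTriFunction<A, B, C, R, E extends Exception> {
    R apply(A a, B b, C c) throws E; // E propagates to callers of map()
}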

Of course we’d also like identical tuples to be equal to each other so that this is true.

assertEquals(tuple("hello","world"), tuple("hello", "world"));

The implementation looks like this, re-using the value-object pattern I described previously

public interface Tuple {
    //...
    static <A,B,C> TriTuple<A,B,C> tuple(A a, B b, C c) {
        return TriTuple.of(a, b, c);
    }
    //...
}
 
public interface TriTuple<A,B,C> {
    A one();
    B two();
    C three();
    static <A,B,C> TriTuple<A,B,C> of(A a, B b, C c) {
        abstract class TriTupleValue extends Value<TriTuple<A,B,C>> implements TriTuple<A,B,C> {}
        return new TriTupleValue() {
            public A one() { return a; }
            public B two() { return b; }
            public C three() { return c; }
        }.using(TriTuple::one, TriTuple::two, TriTuple::three);
    }
 
    default <R,E extends Exception> R map(ExceptionalTriFunction<A, B, C, R, E> f) throws E {
        return f.apply(one(),two(),three());
    }
 
    default <E extends Exception> void consume(ExceptionalTriConsumer<A,B,C,E> consumer) throws E {
        consumer.accept(one(), two(), three());
    }
}

Browse the full code on Github

Posted by & filed under Uncategorized, XP.

Can you afford not to do continuous deployment?

Continuous deployment is the practice of regularly (more than daily) deploying updated software to production.

Arguments in favour of continuous deployment often focus on how it enables us to continually, regularly, and rapidly deliver value to the business, allowing us to move fast. It’s also often discussed how it reduces release risk by making deployments an everyday event – with smaller, less risky changes, which are fully automated.

I want to consider another reason.

Unforeseen Production Issues

It can be tempting to reduce the frequency of deployments in response to risk. If a deployment with a bug can result in losing a significant amount of money or catastrophic reputation damage, it’s tempting to shy away from the risk and do it less often. Why not plan to release once a month instead of every day? Then we’re only taking the risk monthly.

Let’s leave aside the many reasons why doing things less often is unlikely to reduce the risk.

There will always be things that can break your system in production that are outside your control, and are unlikely to be caught by your pre-deployment testing. If you hit one of these issues then you will need to perform a deployment to fix the issue.

Time-sensitive Bugs

How often do we run into bugs in software due to failing to anticipate some detail of times and dates? We hear stories of systems breaking on 29/02 nearly every leap year. Have you anticipated backwards time corrections from NTP? Are you sure you handle leap seconds? What about every framework and third party service you are using?

Yes, most of these can be tested for in advance with sufficiently rigorous testing, but we always miss test cases.

Time means that we cannot be sure that software that worked when it was deployed will continue to work.

Capacity Problems

If your production system is overloaded due to unforeseen bottlenecks you may need to fix them and re-deploy. Even if you do capacity planning you may find the software needs to scale in ways that you did not anticipate when you wrote your capacity tests.

When you find an unexpected bottleneck in a production system, you’re going to need to make a change and deploy. How quickly depends on how much early warning your monitoring software gave you.

Third Party dependencies

When a service that you depend on goes down for an extended period of time, you suddenly need to change your system not to rely on it.

Infrastructure

Do you handle all the infrastructure related issues that can go wrong? What happens if DNS is intermittently returning incorrect results? What if there’s a routing problem between two services in your system?

Third Party Influence

“Noisy Neighbours” in virtualised or shared hosting. Third party JavaScript in your page overwriting your variables or event listeners. In most environments, you can be affected by others over whom you have no control.

Response

Monitor Monitor Monitor

We cannot foresee all production issues and guard against them in our deployment pipeline. Even for those we could, in many cases the cost would not be warranted by the benefit.

Any of the above factors (and more) can cause your system to stop meeting its acceptance criteria at some point after it has been deployed.

The first response to this fact is that we need to monitor everything important in production. This could mean running your automated acceptance tests against your production environment. You can do this both for “feature” and “non-functional” requirement acceptance tests such as capacity.

If you have end-to-end feature tests for your system, why not run them against your production app? If you’re worried about cluttering production with fake test data you can build in a cleanup mechanism, or a means of excluding your test data from affecting real users.

If you have capacity tests for your system, why not run them against your production app? If you’re worried about the risk that it won’t cope, at least you can control when the capacity tests run. Wouldn’t you rather your system failed during working hours due to a capacity test which you could just turn off, rather than failing in the middle of the night due to a real spike in traffic?

Be ready to deploy at any point

Tying this back to continuous delivery, I think our second response should be to remain ready to deploy at any point.

Some of these potential production issues you can prevent with sufficient testing, but will you have thought of every possible case? Some of these issues can be mitigated in production by modifying infrastructure and/or application configuration – but:

  1. Should you be modifying those independently of the application? They’re just as likely to break your production system as code changes.
  2. Not everything will (or should) be configurable.

When you find yourself needing to make an emergency bug fix in production, and you have not been practising continuous delivery, you have two scenarios:

  1. You pull up the branch/tag of code last deployed to production a few weeks ago, make your changes, and deploy.
  2. You risk deploying what you have now: weeks of changes which have not been deployed, any of which may introduce new, subtle issues.

You may not even currently be in an integrable state. If you were not expecting to deploy, why would you be? If you’re not, then you need to:

  1. Identify quickly what code is actually in production.
  2. Context-switch back to the code-base as it was then. Your mental model of the code will be as it is now, not as it was a few weeks ago.
  3. Create yourself re-integration pain when you have to merge your bugfix with the (potentially many) changes that have been made since. This may be fun if you have performed significant refactorings since.
  4. Perform a risky release. You haven’t deployed for a few weeks. Lots could have broken since then. Are your acceptance tests still working? Or do they have some of the same problems as your production environment? If you have fixed a time-sensitive bug in your tests since your last deploy you will now have to back-port that fix to the branch you are now making your changes on.

Summary

We’ll never be able to foresee everything that will happen in production. It’s even more important to be able to notice and react quickly to production problems than to stop broken software getting into production.

Posted by & filed under Java, JavaScript.

Here’s a neat trick to transform JSON into Java objects that implement an interface with the same structure.

Java 8 comes with Nashorn – a JavaScript runtime that has a number of extensions.

One of these extensions allows you to pass a JavaScript object to a constructor of an interface to anonymously implement that interface. So for example we can do

jjs> var r = new java.lang.Runnable({ run: function() print('hello') });
jjs> r.run();
hello

n.b. the above uses a JavaScript 1.8 lambda expression, which is supported by Nashorn and allows us to omit the braces and “return” keyword in the function definition. We could have also omitted the parentheses in the Runnable constructor call in this case.

Now let’s suppose we have a JSON file as follows

{
  "firstname":"Some",
  "lastname":"One",
  "petNames":["Fluffy","Pickle"],
  "favouriteNumber":5
}

and we want to treat it as a Java interface as follows.

public interface Person {
  String firstname();
  String lastname();
  List<String> petNames();
  int favouriteNumber();
}

It’s trivial to convert the JSON to a JavaScript object with JSON.parse.

The only conceptual difference between the JavaScript object and the Java interface is that in the interface the properties such as firstname and lastname are methods, not literal values. It’s easy to convert a JavaScript object such that each value is wrapped in a function definition.

We just need to define a function that iterates through each value on our JavaScript object and wraps each in a function.

// A function that takes a value and returns a function 
// which when invoked returns the original value
function createFunc(value) function() value 
// Wrap each property in a function
function iface(map) {
  var ifaceImpl = {}
  for (key in map) ifaceImpl[key] = createFunc(map[key]);
  return ifaceImpl;
}

Applying it to our JS object gives us the following

{
  "firstname":function() "Some",
  "lastname":function() "One",
  "petNames":function() ["Fluffy","Pickle"],
  "favouriteNumber":function() 5
}

This now satisfies our Java interface, so we can just pass it to the constructor. Putting it all together:

var Person = Packages.Person; // Import Java Person Interface
var someoneElse = new Person(iface({ 
  "firstname":"Some",
  "lastname":"One",
  "petNames":["Fluffy","Pickle"],
  "favouriteNumber":5
}));
 
Person.print(someoneElse); // Pass to a Java method that accepts a Person instance.

If our JSON were originally in a text file we could use Nashorn’s scripting extensions to read the text file and convert it to an interface in the same way. This can be useful for bootstrapping a Java app without a main method – you can read a JSON config file, convert it to a typed Java interface, and start the app. This frees the Java app from dealing with JSON or argument parsing.

#!/usr/bin/jjs
function createFunc(value) function() value
function iface(map) {
  var ifaceImpl = {}
  for (key in map) ifaceImpl[key] = createFunc(map[key]);
  return ifaceImpl;
}
var Settings = Packages.Settings;
var MyApplication = Packages.MyApplication;
 
// Backticks in Nashorn scripting mode works like in Bash
var settings = new Settings(iface(JSON.parse(`cat app_config.json`))); 
MyApplication.start(settings);

public interface Settings {
  String hostname();
  int maxThreads();
}
public class MyApplication {
  public static void start(Settings settings) {
    //
  }
}

One small annoyance with this is that jjs (the Nashorn executable) does not seem able to accept a classpath parameter when used with a shebang #!/usr/bin/jjs, so for now you have to execute the JavaScript with:

$ jjs -scripting -cp . ./example.js

There’s a complete example here

Posted by & filed under Java.

One of the language features many people miss in Java is pattern matching, and/or an equivalent of Scala case classes.

In Scala we can match on types and structure. We have “switch” in Java, but it’s much less powerful and it can’t even be used as an expression. It’s possible to simulate matching in Java with some degree of success.

Matching on Type

Here’s an example matching on type. Our description method accepts a shape which can be one of three types – Rectangle, Circle or Cube. It is a compile failure to not specify a handler for each of Rectangle/Circle/Cube.

@Test
public void case_example() {
    Shape cube = Cube.create(4f);
    Shape circle = Circle.create(6f);
    Shape rectangle = Rectangle.create(1f, 2f);
 
    assertEquals("Cube with size 4.0", description(cube));
    assertEquals("Circle with radius 6.0", description(circle));
    assertEquals("Rectangle 1.0x2.0", description(rectangle));
}
 
public String description(Shape shape) {
    return shape.match()
        .when(Rectangle.class, rect -> "Rectangle " + rect.width() + "x" + rect.length())
        .when(Circle.class, circle -> "Circle with radius " + circle.radius())
        .when(Cube.class, cube -> "Cube with size " + cube.size());
}
interface Shape extends Case3<Rectangle, Circle, Cube> { }
interface Cube extends Shape {
    float size();
    static Cube create(float size) {
        return () -> size;
    }
}
//...

If Java didn’t already have an Optional type we could use this to implement our own, although type-erasure causes us to need a hack to match on the raw (unparameterised) type – to convert from a Class<Some<String>> to a Class<Some> for matching.

@Test
public void some_none_match_example() {
    Option<String> exists = some("hello");
    Option<String> missing = none();
 
    assertEquals("hello", describe(exists));
 
    assertEquals("missing", describe(missing));
}
 
private String describe(Option<String> option) {
    return option.match()
        .when(erasesTo(Some.class), some -> some.value())
        .when(erasesTo(None.class), none -> "missing");
}
interface Option<T> extends Case2<Some<T>, None<T>>{ }
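
The erasesTo helper isn’t shown in the post; a minimal sketch of the erasure hack it implies – an unchecked cast from the raw class token to the parameterised token the match builder expects:

@SuppressWarnings("unchecked")
static <T> Class<T> erasesTo(Class<?> rawClass) {
    return (Class<T>) rawClass; // e.g. a Class<Some> treated as a Class<Some<String>>
}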

Matching on Structure

Another thing we might want to do is match on particular values. This means we’re unable to ensure that all values are handled, so we need to provide a default case.

Here’s an example. The underscore character is used to denote any value, like in Scala.

@Test
public void constructor_matching_any() {
    Person so = person("Some", "One");
    Person an = person("Ann", "Other");
 
    String another = an.match()
        .when(person("Some", _), p -> "someone")
        .when(person(_, "Other"), p -> "another")
        ._("Unknown Person");
 
    assertEquals("another", another);
}

A more practical example might be handling command line arguments

@Test
public void parse_arguments_example() {
    applyArgument(arg("--help", "foo"));
    assertEquals("foo", this.helpRequested);
 
    applyArgument(arg("--lang", "English"));
    assertEquals("English", this.language);
 
    applyArgument(arg("--nonsense","this does not exist"));
    assertTrue(badArg);
}
 
private void applyArgument(Argument input) {
    input.match()
        .when(arg("--help", _), arg -> printHelp(arg.value()))
        .when(arg("--lang", _), arg -> setLanguage(arg.value()))
        ._(arg -> printUsageAndExit());
}

Decomposition

With a bit more effort we can even match on the structure of the type and pull out the bits we are interested in. Here’s an example. We match on certain attributes of Person and then consume the other attributes in the following function to build the result.

@Test
public void decomposition_variable_items_example() {
    Person a = person("Bob", "Smith", 18);
    Person b = person("Bill", "Smith", 28);
    Person c = person("Old", "Person", 90);
 
    assertEquals("first_Smith_18", matchExample(a));
    assertEquals("second_28", matchExample(b));
    assertEquals("unknown", matchExample(c));
}
 
String matchExample(Person person) {
    return person.match()
        .when(person("Bob", _, _), (surname, age) -> "first_" + surname + "_" + age)
        .when(person("Bill", "Smith", _), age -> "second_" + age)
        ._("unknown");
}

Making it Work

To implement this we create an interface for each number of cases we want to be able to match on, parameterised by the possible types it can take. This effectively gives us the “Type A OR Type B” restriction. The interface provides a default match() method which returns a builder that lets you specify a handler for each and every case.

When evaluated, match() compares its own type to the passed in Class<?>.

public interface Case3<T,U,V> {
    default MatchBuilderNone<T,U,V> match() {
        return new MatchBuilderNone<T, U, V>() {
            public <R> MatchBuilderOne<T, U, V, R> when(Class<T> clsT, Function<T, R> fT) {
                return (clsU, fU) -> (clsV, fV) -> {
                    if (clsT.isAssignableFrom(Case3.this.getClass())) return fT.apply((T)Case3.this);
                    if (clsU.isAssignableFrom(Case3.this.getClass())) return fU.apply((U)Case3.this);
                    if (clsV.isAssignableFrom(Case3.this.getClass())) return fV.apply((V)Case3.this);
 
                    throw new IllegalStateException("Match failed");
                };
            }
        };
    }
}

For the matching on parts of a value to work (as in the argument parsing example) we need equals/hashCode to work. This would be a lot easier with actual value type support.

We can extend the base Case interface to allow matching on value as well as class and then, building on the autoEquals/hashCode approach I explained previously, add additional constructor functions that create value objects which ignore specific fields for equality checking.

This is still far more verbose than it should have to be, and it requires adding suitable constructors to each value type you want to use match() with, but I think it’s quite neat.

Here’s what a Person type that’s matchable on either firstname or lastname or both looks like.

interface Person extends Case<Person> {
    String firstname();
    String lastname();
 
    static Person person(String firstname, String lastname) {
        return person(firstname, lastname, Person::firstname, Person::lastname);
    }
    static Person person(String firstname, MatchesAny lastname) {
        return person(firstname, null, Person::firstname);
    }
    static Person person(MatchesAny firstname, String lastname) {
        return person(null, lastname, Person::lastname);
    }
    static Person person(String firstname, String lastname, Function<Person, ?>... props) {
        abstract class PersonValue extends Value<Person> implements Person {}
        return new Person() {
            public String firstname() { return firstname; }
            public String lastname() { return lastname; }
        }.using(props);
    }
}

Decomposition uses pretty much the same approach, but now the factory method has to record the properties that we want to capture as follows. .missing() is overloaded with different numbers of fields to return different types. This allows our .when() match method to also be overloaded based on how many fields we are extracting and therefore accept a lambda with the correct number of parameters. i.e. when(OneMissing, Function), when(TwoMissing, BiFunction)

static OneMissing<Person, String> person(String firstname, MatchesAny lastname, Integer age) {
    return person(firstname, null, age, Person::firstname, Person::age)
        .missing(Person::lastname);
}

Further reading

More examples and implementation on github

Posted by & filed under XP.

Tests, like any code, should be deleted when their cost exceeds their value.

We are often unduly reticent to delete test code. It’s easy to ignore the cost of tests. We can end up paying a lot for tests, long after they are written.

When embarking on a major re-factoring or new feature it can be liberating to delete all the tests associated with the old implementation and test-drive the new one. Especially if you have higher level tests that test important behaviour you want to retain.

This may seem obvious, but in practice it’s often hard to spot where tests should be deleted. Here’s some attributes to think about when evaluating the cost-effectiveness of tests.

Value

Let’s start with value. Good tests provide lots of value. They provide protection against regressions introduced as the software is refactored or altered. They give rapid feedback on unanticipated side-effects. They also provide executable documentation about the intent of code, why it exists, and which use cases were foreseen.

Redundancy

Unneeded Tests

If there are multiple tests covering the same behaviours, perhaps you don’t need all of them. If there’s code that’s only referenced from tests, then you can probably delete that code along with its tests. Even if static analysis indicates code is reachable – is it actually being used in production? Or is it a dead or seldom-used feature that can be pruned from your system?

Documentation and Duplicating Implementation

Some tests simply duplicate their implementation. This can happen both with very declarative code which has little behaviour to test, and with excessively mocked-out code where the tests become a specification of the implementation. Neither of these provide a lot of value as they tend to break during refactoring of the implementation as well as when the behaviour changes. Nor do they provide any significant documentation value if they are simply repeating the implementation.

Type System

Tests can also be made redundant by the type system. Simple tests on preconditions or valid inputs to methods can often be replaced by preconditions the type system can enforce.

e.g. if you have three methods that all expect an argument that is an integer between 1 and 11, why not make them accept an object of a type that can only hold values between 1 and 11? While you’re at it you can give it a more meaningful name than integer: onChange(VolumeLevel input) is more meaningful than onChange(int volumeLevel), and removes the need for those tests at the same time.
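
As a minimal sketch of the idea (the VolumeLevel name comes from the example above):

public final class VolumeLevel {
    private final int level;

    private VolumeLevel(int level) {
        // The range check lives in one place, instead of in tests for every method.
        if (level < 1 || level > 11) throw new IllegalArgumentException("Volume must be between 1 and 11");
        this.level = level;
    }

    public static VolumeLevel volumeLevel(int level) { return new VolumeLevel(level); }

    public int asInt() { return level; }
}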

Risk

When evaluating the regression protection value that we get from a test we need to think about the risk of the behaviour under test being broken in production.

A system that processes money and could lose thousands of dollars a minute with a small tweak means a big risk of expensive regressions, even if they’re caught quickly in production.

Therefore tests on the behaviour of that system are going to have higher value than tests on a weekly batch log-analysis job that can just be re-run if it fails or produces the wrong results.

Outlived Usefulness

Is the test testing desired behaviour, or behaviour that’s really just a side effect of the chosen implementation approach? Some tests are really useful for “driving” a particular implementation design, but don’t provide much regression test value.

Does the tested behaviour still match what customers and or users want now? Usage patterns change over time as users change, their knowledge of the software changes, and new features are added and removed.

The desired behaviour at the time the test was written might no longer match the desired behaviour now.

Speed

Speed is one of the most important attributes of tests to consider when evaluating their value. Fast tests are valuable because they give us rapid feedback on unanticipated side effects of changes we make. Slow tests are less valuable because it takes longer for them to give you feedback.

This brings us nicely to…

Cost

Speed

Worse than being less valuable in and of themselves, slow tests in your test suite delay you from getting feedback from your other tests while you wait for them (unless you parallelise every single test). This is a significant cost to having them in your test suite at all.

Slow tests can discourage you from running your suite of tests as regularly as you would otherwise, which can lead to you wasting time on changes that you later realise will break important functionality.

Test suites that take a long time also increase the time it takes to deploy changes to production. This reduces the effectiveness of another feedback loop – getting input from users & customers about your released changes. As test suites get slower it is inevitable that you will also release changes less frequently. Releases become bigger and scarier events that are more likely to go wrong.

In short, slow tests threaten continuous integration and continuous delivery.

There are ways of combating slow tests while keeping them. You can profile and optimise them – just like any other code. You can decompose your architecture into smaller, decoupled services so that you don’t have to run so many tests at a time. In some scenarios it’s appropriate to migrate the tests to be monitoring on your production environment instead of tests in the traditional sense.

However, if the cost imposed by slow tests is not outweighed by their value then don’t hesitate to remove them.

Brittleness

Have you ever worked on a codebase where seemingly any change you made broke hundreds of tests, regardless of whether it changed any behaviour? These brittle tests impose a significant cost on the development of any new features, performance improvements, or refactoring work.

There can be several causes of this. Poorly factored tests with lots of duplication between tests tend to be brittle – especially when assertions (or verifications of mock behaviour) are duplicated between tests. Excessive use of strict mocks (that fail on any unexpected interactions) can encourage this. Another common cause is tests that are coupled to their implementation, such as user interface tests with hard-coded CSS selectors, copy, and x/y coordinates.

You can refactor tests to make them less brittle. You can remove duplication, split tests up so that each asserts only one piece of behaviour, and decouple tests from the implementation using a domain-specific DSL or the page object pattern, as sketched below.
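
With the page object pattern the test speaks in domain terms, and only the page object knows about the selectors. Here’s a minimal sketch assuming Selenium’s WebDriver API – the page structure and element names are made up for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Tests call loginAs(...) rather than hard coding selectors,
    // so only this class needs updating when the markup changes.
    LoginPage loginAs(String username, String password) {
        driver.findElement(By.name("username")).sendKeys(username);
        driver.findElement(By.name("password")).sendKeys(password);
        driver.findElement(By.cssSelector("button[type='submit']")).click();
        return this;
    }
}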

Or, if the cost of continually fixing and/or refactoring these tests is not outweighed by the value they’re providing, you could just delete them.

Determinism

Non-deterministic tests have a particularly high cost. They cause us to waste time re-trying them when they fail. They reduce our faith in the test suite to protect us against regressions. They also tend to take a particularly long time to diagnose and fix.

Common causes include sleep() in the tests rather than waiting for and responding to an event. Any test that relies on code or systems that you do not control can be prone to non-determinism. How do your tests cope if your DNS server is slow or returns incorrect results? Do they even work without an internet connection?
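
For example, rather than sleeping for a fixed time and hoping an asynchronous result is ready, we can wait for the condition we actually care about. Here waitUntil is a hypothetical helper (and messages a made-up collection), not part of JUnit:

// Fragile: guesses how long the asynchronous work will take.
Thread.sleep(5000);
assertTrue(messages.isEmpty());

// More robust: waits for the condition itself, up to a deadline.
waitUntil(Duration.ofSeconds(5), () -> messages.isEmpty());

static void waitUntil(Duration timeout, BooleanSupplier condition) throws InterruptedException {
    Instant deadline = Instant.now().plus(timeout);
    while (!condition.getAsBoolean()) {
        if (Instant.now().isAfter(deadline)) throw new AssertionError("timed out waiting for condition");
        Thread.sleep(10); // short poll, rather than one long guess
    }
}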

Due to the high cost they impose, and the often high cost of fixing them, non-deterministic tests are often ideal candidates for deletion.

Measurement

These cost/value attributes of tests are fairly subjective. Different people will judge costs and risks differently. Most of the time this is fine, and it’s not worth imposing overhead to make more data-driven decisions.

Monitor Production

Code coverage, logging, or analytics from your production system can help you determine which features are used and which can be removed, along with their tests. As can feedback from your users.

Build Server

Some people will collect data from their build server test runs to record test determinism, test suite time and similar. This can be useful, but it ignores all the times the tests are run by developers on their workstations.

I favour a lightweight approach: only measure the things that are actually useful to you.

JUnit Rules

A method for getting feedback on tests that I have found useful is to use the test framework features themselves. JUnit provides a way of hooking in before and after test execution using Rules.

A simple Rule implementation can record the time a test takes to execute. It can record whether a test is non-deterministic by re-trying it on failure and logging if it passes the second time. It can record how often a test is run, how often it fails, and information about why tests fail.
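
A minimal sketch of such a rule might look like the following – the retry-once heuristic and logging to stdout are assumptions for illustration, not a complete implementation:

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class TestMetrics implements TestRule {
    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                long start = System.nanoTime();
                try {
                    base.evaluate();
                    log(description, "passed", start);
                } catch (Throwable firstFailure) {
                    try {
                        base.evaluate(); // retry once to detect non-determinism
                        log(description, "non-deterministic", start);
                    } catch (Throwable secondFailure) {
                        log(description, "failed: " + firstFailure, start);
                        throw firstFailure;
                    }
                }
            }
        };
    }

    static void log(Description test, String outcome, long startNanos) {
        long millis = (System.nanoTime() - startNanos) / 1_000_000;
        // Replace with your log management tool of choice.
        System.out.println(test.getDisplayName() + " " + outcome + " in " + millis + "ms");
    }
}

Each test class then just declares @Rule public TestRule metrics = new TestMetrics(); and every run gets recorded.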

This data can then be logged and collected centrally for analysis with a log management tool, or published via email, dashboard or similar.

This approach means you can get visibility on how your tests are really being used on workstations, rather than how they behave when run in your controlled build server environment.

Final thoughts

Having so many tests that their speed and reliability become a big problem is a nice #firstworldproblem to have. You can also bear these cost/value attributes in mind when writing tests, to help yourself write better tests.

Don’t forget, you can always get tests back from revision control if you need them again. There’s no reason to @Ignore tests.

So don’t be afraid to delete tests if their cost exceeds their value.

Posted by & filed under Java.

Java 8 gives us both default and static methods on interfaces. One of the consequences of this is that you can create simple value objects using interfaces alone, without the need to define a class.

Here’s an example. We define a Paint type which is composed of an amount of red/green/blue paint. We can add operations that make use of these, like mix, which produces the result of mixing two paints together. We can also create a static factory method in lieu of a constructor to create us an instance of a paint.

interface Paint {
    int red();
    int green();
    int blue();
    default Paint mix(Paint other) {
        return create(red() + other.red(), green() + other.green(), blue() + other.blue());
    }
 
    static Paint create(int red, int green, int blue) {
        return new Paint() {
            public int red() { return red; }
            public int green() { return green; }
            public int blue() { return blue; }
        };
    }
}

This is what using it looks like

@Test
public void mixingPaint() {
    Paint red = Paint.create(100,0,0);
    Paint green = Paint.create(0,100,0);
 
    Paint mixed = red.mix(green);
 
    assertEquals(100, mixed.red());
    assertEquals(100, mixed.green());
}

While it may seem odd, there are advantages to doing this sort of thing. There’s a slight reduction in boilerplate due to not having to deal with fields. It’s a way of ensuring your value type is immutable, because you can’t so easily introduce state. It also allows you to make use of multiple inheritance, because you can inherit from multiple interfaces.

There are also obvious disadvantages – we can only have public attributes and methods.

Adding equals/hashcode/toString is a bit harder because in an interface we cannot override methods defined on a class.
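
The seemingly obvious approach fails to compile, because a default method is never allowed to override a method from java.lang.Object:

interface Broken {
    // compile error: a default method cannot override equals from java.lang.Object
    default boolean equals(Object o) {
        return false;
    }
}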

I’d like to be able to do the following, where equivalent paints are equal.

@Test
public void paintEquals() {
    Paint red = Paint.create(100,0,0);
    Paint green = Paint.create(0,100,0);
 
    Paint mixed1 = red.mix(green);
    Paint mixed2 = green.mix(red);
 
    assertEquals(mixed1, mixed2);
    assertNotEquals(red, green);
    assertNotEquals(red, mixed1);
}

The least-verbose approach I’ve managed so far (without resorting to reflection) requires us to override the equals/hashCode/toString methods in our anonymous inner class.

We can, however, avoid having to implement them there and move the implementation to some helper interfaces.

The only additional boilerplate required is implementing a props() method that returns the properties we want to include in our equals/hashcode/toString.

interface Paint extends EqualsHashcode<Paint>, ToString<Paint> {
    int red();
    int green();
    int blue();
    default Paint mix(Paint other) {
        return create(red() + other.red(), green() + other.green(), blue() + other.blue());
    }
 
    static Paint create(int red, int green, int blue) {
        return new Paint() {
            public int red() { return red; }
            public int green() { return green; }
            public int blue() { return blue; }
            @Override public boolean equals(Object o) { return autoEquals(o); }
            @Override public int hashCode() { return autoHashCode(); }
            @Override public String toString() { return autoToString(); }
        };
    }
 
    default List<Function<Paint,?>> props() {
        return asList(Paint::red, Paint::green, Paint::blue);
    }
 
}

This is still overly verbose. We can reduce it by moving the equals/hashCode/toString overrides to a Value<T> abstract base class that we hide from the callers of our create() method. Value<T> provides a setter for the properties to use in equals/hashCode.

This leaves us with the relatively concise

interface Paint {
    int red();
    int green();
    int blue();
    default Paint mix(Paint other) {
        return create(red() + other.red(), green() + other.green(), blue() + other.blue());
    }
 
    static Paint create(int red, int green, int blue) {
        abstract class PaintValue extends Value<Paint> implements Paint {}
        return new PaintValue() {
            public int red() { return red; }
            public int green() { return green; }
            public int blue() { return blue; }
        }.using(Paint::red, Paint::green, Paint::blue);
    }
}
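
The Value<T> base class isn’t spelled out here; a minimal sketch of what it might look like – assuming it reuses the EqualsHashcode and ToString interfaces described below – is:

public abstract class Value<T> implements EqualsHashcode<T>, ToString<T> {
    private List<Function<T, ?>> props = Collections.emptyList();

    // The "setter" for the properties that equals/hashCode/toString use.
    // The unchecked cast is safe so long as concrete subclasses implement T.
    @SafeVarargs
    public final T using(Function<T, ?>... props) {
        this.props = Arrays.asList(props);
        return (T) this;
    }

    @Override public List<Function<T, ?>> props() { return props; }

    @Override public boolean equals(Object o) { return autoEquals(o); }
    @Override public int hashCode() { return autoHashCode(); }
    @Override public String toString() { return autoToString(); }
}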

You will notice that in the first, more verbose version Paint extends both EqualsHashcode and ToString, which is where we place the implementation of auto(Equals|HashCode|ToString).

Let’s look at toString first as it’s simpler. We define a default method that takes the values of the properties returned by our props() method above and concatenates them together.

interface ToString<T> {
    default String autoToString() {
        return "{" +
        props().stream()
            .map(prop -> Objects.toString(prop.apply((T) this))) // Objects.toString copes with null property values
            .collect(Collectors.joining(", ")) +
        "}";
    }
 
    List<Function<T,?>> props();
}

EqualsHashcode is similar. For equals we can apply the property functions to “this” and also the supplied object for comparison. We require all properties to match on both objects for equality. In the same way we can calculate a hashcode based on the supplied properties.

public interface EqualsHashcode<T> {
    default boolean autoEquals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        final T value = (T)o;
        return props().stream()
            .allMatch(prop -> Objects.equals(prop.apply((T) this), prop.apply(value)));
    }
 
    default int autoHashCode() {
        return props().stream()
            .map(prop -> (Object)prop.apply((T)this))
            .collect(ResultCalculator::new, ResultCalculator::accept, ResultCalculator::combine)
            .result;
    }
 
 
    static class ResultCalculator implements Consumer<Object> {
        private int result = 0;
        public void accept(Object value) {
            result = 31 * result + (value != null ? value.hashCode() : 0);
        }
        public void combine(ResultCalculator other) {
            result += other.result; // only needed if the stream is parallel
        }
    }
 
    List<Function<T,?>> props();
}

Are there other reasons why this is a crazy idea? Is there a better way of implementing equals/hashCode?

Posted by & filed under Java.

Java 8’s default methods on interfaces mean we can implement the decorator pattern much less verbosely.

The decorator pattern allows us to add behaviour to an object without using inheritance. I often find myself using it to “extend” third party interfaces with useful additional behaviour.

Let’s say we wanted to add a map method to List that allows us to convert from a list of one type to a list of another. There is already such a method on the Stream interface, but it serves as an example.

We used to have to either

a) Subclass a concrete List implementation and add our method (which makes re-use hard), or
b) Re-implement the rather large List interface, delegating to a wrapped List.

You can ask your IDE to generate these delegate methods for you, but with a large interface like List the boilerplate tends to obscure the added behaviour.

class MappingList<T> implements List<T> {
    private List<T> impl;
 
    public int size() {
        return impl.size();
    }
 
    public boolean isEmpty() {
        return impl.isEmpty();
    }
 
    // Many more boilerplate methods omitted for brevity
 
    // The method we actually wanted to add.
    public <R> List<R> map(Function<T,R> f) {
        return impl.stream().map(f).collect(Collectors.toList());
    }
 
}

Guava gave us a third option

c) Extend the Guava ForwardingList class. Unfortunately that meant you couldn’t extend any other class.

Java 8 gives us a fourth option

d) We can implement the forwarding behaviour in an interface, and then add our behaviour on top.

The disadvantage is you need a public method which exposes the underlying implementation. The advantages are you can keep the added behaviour separate, and it’s easier to compose them.

Our decorator can now be really short – something like

class MappableList<T> implements List<T>, ForwardingList<T>, Mappable<T> {
    private List<T> impl;
 
    public MappableList(List<T> impl) {
        this.impl = impl;
    }
 
    @Override
    public List<T> impl() {
        return impl;
    }
}

We can use it like this

// prints 3, twice.
new MappableList<String>(asList("foo", "bar"))
    .map(s -> s.length())
    .forEach(System.out::println);

The new method we added is declared in its own Mappable<T> interface which is uncluttered.

interface Mappable<T> extends ForwardingList<T> {
	default <R> List<R> map(Function<T,R> f) {
		return impl().stream().map(f).collect(Collectors.toList());
	}
}

The delegation boilerplate we can keep in its own interface, out of the way. Since it’s an interface, we are free to extend other classes/interfaces in our decorator.

interface ForwardingList<T> extends List<T> {
    List<T> impl();
 
    default int size() {
        return impl().size();
    }
 
    default boolean isEmpty() {
        return impl().isEmpty();
    }	
 
    // Other methods omitted for brevity
 
}

If we wanted to mix in some more functionality to our MappableList decorator class we could just implement another interface. In the above example we added a new method, so this time let’s modify one of the existing methods on List. Let’s make a List that always thinks it’s empty.

interface AlwaysEmpty<T> extends ForwardingList<T> {
    default boolean isEmpty() {
        return true;
    }
}
class MappableList<T> implements 
    List<T>, 
    ForwardingList<T>, 
    Mappable<T>, 
    AlwaysEmpty<T> { // Mix in the new interface
    // ...
}

Now our list always claims it’s empty. There’s no conflict between the two competing isEmpty() defaults: AlwaysEmpty extends ForwardingList, so its more specific default method wins.

// prints true
System.out.println(new MappableList<String>(asList("foo", "bar")).isEmpty());