
A common frustration with Java is the inability to overload methods when the method signatures differ only by type parameters.

Here’s an example: we’d like to overload a method to take either a List of Strings or a List of Integers. This will not compile, because both methods have the same erasure.

import java.util.List;

class ErasureExample {

    public void doSomething(List<String> strings) {
        System.out.println("Doing something with a List of Strings");
    }

    public void doSomething(List<Integer> ints) {
        System.out.println("Doing something with a List of Integers");
    }

}

If you delete everything in the angle brackets (which is what erasure does), the two method signatures are identical, which is prohibited by the spec:

public void doSomething(List strings)
public void doSomething(List ints)

As with most Java things – if it’s not working, you’re probably not using enough lambdas. We can make it work with just one extra line of code per method.

import java.util.List;
import java.util.function.Supplier;

class ErasureExample {

    public interface ListStringRef extends Supplier<List<String>> {}
    public void doSomething(ListStringRef strings) {
        System.out.println("Doing something with a List of Strings");
    }

    public interface ListIntegerRef extends Supplier<List<Integer>> {}
    public void doSomething(ListIntegerRef ints) {
        System.out.println("Doing something with a List of Integers");
    }

}

Now we can call the above as simply as the following, which will print “Doing something with a List of Strings” followed by “Doing something with a List of Integers”:

import static java.util.Arrays.asList;

public class Example {

    public static void main(String... args) {
        ErasureExample ee = new ErasureExample();
        ee.doSomething(() -> asList("aa", "b"));
        ee.doSomething(() -> asList(1, 2));
    }
}

Using the wrapped lists inside the method is straightforward. Here we print the length of each string, and each integer doubled. This makes the main method above print “2124”.

class ErasureExample {
 
    public interface ListStringRef extends Supplier<List<String>> {}
    public void doSomething(ListStringRef strings) {
        strings.get().forEach(str -> System.out.print(str.length()));
    }
 
    public interface ListIntegerRef extends Supplier<List<Integer>> {}
    public void doSomething(ListIntegerRef ints) {
        ints.get().forEach(i -> System.out.print(i * 2));
    }
 
}

This works because the methods now have different erasure; in fact, the method signatures contain no generics at all. The only additional requirement is prefixing each argument with “() ->” at the call site, creating a lambda that is equivalent to a Supplier of whatever type your argument is.
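An existing list doesn’t need to be rebuilt at the call site either; capturing it in the lambda is enough. A minimal sketch, reusing the second ErasureExample above:

List<String> existing = asList("x", "yz");
new ErasureExample().doSomething(() -> existing); // prints "12" - the length of each string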


What use is a method that just returns its input? Surprisingly useful. One use is as a way to convert between types.

There’s a well known trick, often used to work around Java’s terrible array literals, that you may have come across. If you have a method that takes an array as an argument:

public static void foo(String[] anArray) { }

Invoking foo requires the ugly

foo(new String[]{"hello", "world"});

For some reason Java requires the redundant “new String[]” here, even though it could be trivially inferred. Fortunately we can work around this with the following method, which at first glance might seem pointless.

public static <T> T[] array(T... input) {
    return input;
}

It just returns its input. However, it accepts an array of type T in the form of varargs, and returns that array. It becomes useful because Java will now infer the types and create the array cleanly. We can now call foo like so.

foo(array("hello", "world"));

That is a neat trick, but it really becomes useful in Java 8 thanks to the structural typing of lambdas and method references, which convert implicitly to any functional interface with a matching shape. Here’s another example of a method that’s far more useful than it appears at first glance. It just accepts a function and returns the same function.

public static <T,R> Function<T,R> f(Function<T,R> f) {
    return f;
}

The reason it’s useful is that we can pass it any structurally equivalent method reference and have it converted to a java.util.function.Function, which provides some useful utility methods for function composition.
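For instance, a minimal sketch: lifting a method reference with f() immediately gives us access to andThen.

Function<String, Integer> doubledLength = f(String::length).andThen(len -> len * 2);
System.out.println(doubledLength.apply("benji")); // prints 10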

Here’s a fuller example. Let’s say we have a list of Libraries (a Collection of Collections of Books):

interface Library {
    List<Book> books();
    static Library library(Book... books) {
        return () -> asList(books);
    }
}
 
interface Book {
    String name();
    static Book book(String name) { return () -> name; }
}
 
List<Library> libraries = asList(
    library(book("The Hobbit"), book("LoTR")),
    library(book("Build Quality In"), book("Lean Enterprise"))
);

We can now print out the book titles thusly

libraries.stream()
    .flatMap(library -> library.books().stream()) // Stream of libraries to stream of books.
    .map(Book::name) // Stream of names
    .forEach(System.out::println);

But that flatMap call is upsetting; everything else is using a method reference, not a lambda expression. I’d really like to write the following, but it won’t compile, because flatMap requires a function that returns a Stream rather than a function that returns a List.

libraries.stream()
    .flatMap(Library::books) // Compile Error, wrong return type.
    .map(Book::name) 
    .forEach(System.out::println);

Here’s where our method that returns its input comes in again. This compiles fine.

libraries.stream()
    .flatMap(f(Library::books).andThen(Collection::stream)) 
    .map(Book::name) 
    .forEach(System.out::println);  
 
public static <T,R> Function<T,R> f(Function<T,R> f) {
    return f;
}

This works because Library::books is equivalent to a Function<Library, List<Book>>, so passing it to the f() method implicitly converts it to that type. java.util.function.Function provides an andThen method which returns a new function composing the two.

Now, in this trivial example it’s actually longer to write this than the equivalent lambda, but it can be useful when combining more complex functions.
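For example (a sketch reusing the Library type above), adding further steps stays flat rather than nesting lambdas:

Function<Library, Integer> bookCount = f(Library::books).andThen(List::size);
libraries.stream().map(bookCount).forEach(System.out::println); // prints 2 and 2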

We can do the same thing with other functional interfaces, for example to allow Predicate composition or negation.

Here we have a handy isChild() method implemented for us on Person, but we really want the inverse – an isAdult() check – to pass to the serveAlcohol method. This sort of thing comes up all the time.

interface Person {
    boolean isChild();
    static Person child() { return () -> true; }
    static Person adult() { return () -> false; }
}
 
public static void serveAlcohol(Person person, Predicate<Person> isAdult) {
    if (isAdult.test(person)) System.out.println("Serving alcohol");
}

If we want to reuse Person::isChild we can do the same trick. The p() method converts the method reference to a Predicate for us, and we can then easily negate it.

serveAlcohol(adult(), p(Person::isChild).negate());
 
public static <T> Predicate<T> p(Predicate<T> p) {
    return p;
}
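The same one-line helper works for any functional interface. For example, a hypothetical c() for Consumer lets us compose side effects with andThen:

public static <T> Consumer<T> c(Consumer<T> c) {
    return c;
}

Consumer<String> logBoth = c((String msg) -> System.out.println(msg))
    .andThen(msg -> System.err.println(msg));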

Have you got any other good examples?


At work, we’ve always pair-programmed all our production code, so we’re already pretty bought into it being a good idea to have multiple people working on a single problem. I previously wrote about some of the reasons for pairing.

Mob Programming

Recently, inspired by a talk by Woody Zuill, we decided to give mob programming a go, and our experiences so far have been very positive.

Mob programming is having the whole team working on the same problem, at the same time, using a single workstation.

We’re not using it all the time right now. We’ve started doing Mob-Fridays as a way of regularly working together as a group instead of pairing. We’re still pretty new to it – only having done it for a few weeks – but I thought I’d post some of my observations thus far.

Setup

Here’s our setup. We all (4-6 of us) sit round in a semicircle, as we would when having a group discussion. We have a big 124cm HD TV for everyone to see the code on, and a 76cm monitor for the person at the keyboard, positioned perpendicular to the TV. This allows the driver to see the rest of the team. We also have a large whiteboard behind the team which we can scribble design ideas on.

We have been using strict 5 minute rotations for driving. Every 5 minutes the person with the keyboard relinquishes it and another team member takes over. This gives us a rhythm for continuous deployment (we try to deploy to production every 5-10 rotations – i.e. at least once an hour). 5 minute rotations keep the pace fast and keep everyone engaged.

We’ve also tried including team members with specialities in our mobbing sessions, including having them drive. We’ve had mob sessions with our product manager and UX specialist. I think it could be interesting to include our internal team customers in the future.

Why?

Efficiency

You may be thinking that this can’t possibly be efficient. Surely 5 or 6 people working individually can get more done than working together, constrained by the speed at which one person can type and the group can communicate? I think you might well be right, but the amount of stuff a team can get done (throughput) is not necessarily what you want to optimise. Often the speed at which we can get from where we are now to achieving a business goal (latency) is more important. Anything we can do to get there faster is a good thing, even if it’s less efficient in terms of throughput.

Regardless of the efficiency of cranking out code, mobbing provides several efficiencies.

Mobbing eliminates a whole class of meetings – removing synchronisation points that slow down developers working independently. There’s no need for detailed design discussions in advance of starting on implementation, because everyone can contribute to the design while working on it. There’s also no need for traditional standup meetings to catch up on what is going on. When working together as a team everyone knows what everyone has been doing and is going to do.

There is also less time lost to interruptions. People seem more reluctant to interrupt a group session than an individual or pair – we pause to answer any questions from outside the team and have a break after each person in the team has had a driving session. It’s also less disruptive when someone’s phone rings or someone needs a toilet break: they can just nip out of the mob and let the mob continue. When pairing, work often stops when these small interruptions occur, and a lot of the context is lost.

A combination of the 5 minute cadence and having more people involved also seems to help avoid wasting time doing things that we don’t really need, which helps us move faster.

We’re also able to more rapidly adapt to what we discover in the course of implementation. Our preconceived ideas of how we might build a feature don’t always survive implementation. We often learn along the way that our original plans won’t work. When pairing we often convened team huddles to discuss these issues before continuing. When working as a mob we just press through them unfazed, without any delay waiting for input from others.

Communication

Mobbing seems great for making significant architectural changes to the system – things that you need everyone on the team to be bought into, and that ideally want as many pairs of eyes on as possible to avoid problems. For instance, we have been mobbing on a new design for a system that processes money. It’s a core technology that’s important for the whole team to understand, and since it deals with processing money, mistakes could be costly.

Mobbing also completely eliminates one of the problems I’ve observed with pair programming – that purity of design can be lost when you rotate out one of the developers from the pair and swap someone in. When mobbing everyone on the team gets to see designs through to the end.

Another reason for mobbing is that it’s great fun. Doing something together as a team makes us a better team. Mobbing is a teambuilding activity that actually achieves what we would otherwise achieve working individually.

Summary

Do try mob programming yourself. It’s great fun, it should help you become a better team, and it’s an effective way to build software.


The builder pattern is often used to construct objects with many properties. It makes initialisations easier to read by naming parameters at the call site, while helping you ensure that only valid objects can be constructed.

Builder implementations tend to either rely on the constructed object being mutable, and setting fields as you go, or on duplicating all the settable fields within the builder.

Since Java 8, I find myself frequently creating lightweight builders by defining an interface for each initialisation stage.

Let’s suppose we have a simple immutable Person type like this:

static class Person {
    public final String firstName;
    public final String lastName;
    public final Centimetres height;
 
    private Person(String firstName, String lastName, Centimetres height) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.height = height;
    }
}

I’d like to be able to construct it using a builder, so I can see at a glance which parameter is which.

Person benji = person()
    .firstName("benji")
    .lastName("weber")
    .height(centimeters(182));

All that is needed to support this is three single-method interfaces to define the stages, and a method to create the builder.

The three interfaces are as follows. Each has a single method, so is compatible with a lambda, and each method returns another single method interface. The final interface returns our completed Person type.

interface FirstNameBuilder {
    LastNameBuilder firstName(String firstName);
}
interface LastNameBuilder {
    HeightBuilder lastName(String lastName);
}
interface HeightBuilder {
    Person height(Centimetres height);
}

Now we can create a person() method which creates the builder using lambdas.

public static FirstNameBuilder person() {
    return firstName -> lastName -> height -> new Person(firstName, lastName, height);
}

While it is still quite verbose, this builder definition is barely longer than simply adding getters for each of the fields.

Suppose we wanted to be able to give people’s heights in millimetres as well as centimetres. We could simply add a default method to the HeightBuilder interface that does the conversion.

interface HeightBuilder {
    Person height(Centimetres height);
    default Person height(MilliMetres millis) {
        return height(millis.toCentimetres());
    }
}
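Callers can then supply either unit at the same position in the chain; for example, assuming a millimetres() factory method on MilliMetres:

Person benji = person()
    .firstName("benji")
    .lastName("weber")
    .height(millimetres(1820));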

We can use the same approach to present different construction “paths”, without making our interfaces incompatible with lambdas (which is necessary to keep it concise).

Let’s look at a more complex example of a “Burger” type. We wish to allow construction of burgers, but if the purchaser is a vegetarian we would like to restrict the available choices to only vegetarian options.

The simple meat-eater case looks exactly like the one above:

Burger lunch = burger()
    .with(beef())
    .and(bacon());

class Burger {
    public final Patty patty;
    public final Topping topping;
 
    private Burger(Patty patty, Topping topping) {
        this.patty = patty;
        this.topping = topping;
    }
 
    public static BurgerBuilder burger() {
        return patty -> topping -> new Burger(patty, topping);
    }
 
    interface BurgerBuilder {
        ToppingBuilder with(Patty patty);
    }
    interface ToppingBuilder {
        Burger and(Topping topping);
    }
}

Now let’s introduce a vegetarian option. It will be a compile failure to put meat into a vegetarian burger.

Burger lunch = burger()
    .vegetarian()
    .with(mushroom())
    .and(cheese());
 
Burger failure = burger()
    .vegetarian()
    .with(beef()) // fails to compile. Beef is not vegetarian.
    .and(cheese());

To support this we add a default method to our BurgerBuilder that returns a new VegetarianBuilder, which disallows meat.

interface BurgerBuilder {
    ToppingBuilder with(Patty patty);
    default VegetarianBuilder vegetarian() {
        return patty -> topping -> new Burger(patty, topping);
    }
}
interface VegetarianBuilder {
    VegetarianToppingBuilder with(VegetarianPatty main);
}
interface VegetarianToppingBuilder {
    Burger and(VegetarianTopping topping);
}

After you have expressed your vegetarian preference, the builder will no longer present you with the option of choosing meat.

Now, let’s add the concept of free toppings. After choosing the main component of the burger we can choose to restrict ourselves to free toppings. In this example Tomato is free but Cheese is not. It will be a compile failure to add cheese as a free topping. This time the divergent option is not the first in the chain.

Burger lunch = burger()
    .with(beef()).andFree().topping(tomato());
 
Burger failure = burger()
    .with(beef()).andFree().topping(cheese()); // fails to compile. Cheese is not free

We can support this by adding a new andFree() default method to our ToppingBuilder, which in turn calls the abstract method, meaning we don’t have to repeat the entire chain of lambdas required to construct the burger.

interface ToppingBuilder {
    Burger and(Topping topping);
    default FreeToppingBuilder andFree() {
        return topping -> and(topping);
    }
}
interface FreeToppingBuilder {
    Burger topping(FreeTopping topping);
}

Here’s the full code from the burger example, with all the types involved.

class Burger {
    public final Patty patty;
    public final Topping topping;
 
    private Burger(Patty patty, Topping topping) {
        this.patty = patty;
        this.topping = topping;
    }
 
    public static BurgerBuilder burger() {
        return patty -> topping -> new Burger(patty, topping);
    }
 
    interface BurgerBuilder {
        ToppingBuilder with(Patty patty);
        default VegetarianBuilder vegetarian() {
            return patty -> topping -> new Burger(patty, topping);
        }
    }
    interface VegetarianBuilder {
        VegetarianToppingBuilder with(VegetarianPatty main);
    }
    interface VegetarianToppingBuilder {
        Burger and(VegetarianTopping topping);
    }
    interface ToppingBuilder {
        Burger and(Topping topping);
        default FreeToppingBuilder andFree() {
            return topping -> and(topping);
        }
    }
    interface FreeToppingBuilder {
        Burger topping(FreeTopping topping);
    }
 
}
 
interface Patty {}
interface BeefPatty extends Patty {
    public static BeefPatty beef() { return null;}
}
interface VegetarianPatty extends Patty, Vegetarian {}
interface Tofu extends VegetarianPatty {
    public static Tofu tofu() { return null; }
}
interface Mushroom extends VegetarianPatty {
    public static Mushroom mushroom() { return null; }
}
 
interface Topping {}
interface VegetarianTopping extends Vegetarian, Topping {}
interface FreeTopping extends Topping {}
interface Bacon extends Topping {
    public static Bacon bacon() { return null; }
}
interface Tomato extends VegetarianTopping, FreeTopping {
    public static Tomato tomato() { return null; }
}
interface Cheese extends VegetarianTopping {
    public static Cheese cheese() { return null; }
}
 
interface Omnivore extends Vegetarian {}
interface Vegetarian extends Vegan {}
interface Vegan extends DietaryChoice {}
interface DietaryChoice {}

When would(n’t) you use this?

Often a traditional builder just makes more sense.

If you want your builder to be used to supply an arbitrary number of fields in an arbitrary order then this isn’t for you.

This approach restricts fields to being initialised in a specific order. This can be a feature – sometimes it’s useful to ensure that some parameters are supplied first, e.g. to ensure mandatory parameters are supplied without the boilerplate of the typesafe builder pattern. It’s also easier to make the order flexible in the future if you need to than the other way around.

If your builder forms part of a public API then this probably isn’t for you.

Traditional builders are easier to change without breaking existing uses. This approach makes it easy to change uses with refactoring tools, provided you own all the affected code and can make changes to it. To change behaviour without breaking consumers in this approach you would have to restrict yourself to adding default methods rather than modifying existing interfaces.

On the other hand, by being restrictive in what it allows to compile, this approach helps people using your code to use it in the way you intended.

Where I do find myself using this approach is in building lightweight fluent interfaces, both to make the code more readable and to help out my future self by letting the IDE autocomplete required fields/code blocks. For instance, when recently implementing some automated performance tests that needed a warmup and a rampup period, we used one of these to prevent ourselves from forgetting to include them.

When things are less verbose, you end up using them more often, in places you might not have bothered otherwise.


Twice recently we have had “fun” trying to get things using HK2 (Jersey) to play nicely with code built using Guice and Spring. This has renewed my appreciation for code written without DI frameworks.

The problem with (many) DI frameworks

People like to complain about Spring. It’s an easy target, but often the argument is a lazy “I don’t like XML, it’s verbose, and not fashionable, unlike JSON, or YAML, or the flavour of the month”. This conveniently ignores that it’s possible to do entirely XML-less Spring. With JavaConfig it’s not much different to other frameworks like Guice. (Admittedly, this becomes harder if you try to use other parts of Spring like MVC or AoP)

My issue with many DI frameworks is the complexity they can introduce. It’s often not immediately obvious what instance of an interface is being used at runtime without the aid of a debugger. You need to understand a reasonable amount about how the framework you are using works, rather than just the programming language. Additionally, wiring errors are often only visible at runtime rather than compile time, which means you may not notice the errors until a few minutes after you make them.

Some frameworks also encourage you to become very dependent on them. If you use field injection to have the framework magically make dependencies available for you with reflection, then it becomes difficult to construct things without the aid of the framework – for example in tests, or if you want to stop using that framework.

Even if you use setter or constructor injection, the ease with which the framework can inject a large number of dependencies for you allows you to ignore the complexity introduced by having excessive dependencies. It’s still a pain to construct an object with 20 dependencies without the framework in a test, even with constructor or setter injection. DI frameworks can shield us from the pain that is useful feedback that the design of our code is too complex.

What do I want when doing dependency injection? I have lots of desires, but these are some of the most important to me:

  • Safety – I would like it to be a compile time error to fail to satisfy a dependency
  • Testability – I want to be able to replace dependencies with test doubles where useful for testing purposes
  • Flexibility – I would like to be able to alter the behaviour of my program by re-wiring my object graph without having to change lots of code

It’s also nice to be able to build small lightweight services without needing to add lots of third party dependencies to get anything done. If we want to avoid pulling in a framework, how else could we achieve our desires? There are a few simple techniques we can use which only require pure Java, some of which are much easier in Java 8.

I’ve tried to come up with a simple example that might exist if we were building a monitoring system like Nagios. Imagine we have a class that is responsible for notifying the on call person for your team when something goes wrong in production.

Manual Constructor Injection

class IncidentNotifier {
    final Rota rota; 
    final Pager pager;
 
    IncidentNotifier(Pager pager, Rota rota) {
        this.pager = pager;
        this.rota = rota;
    }
 
    void notifyOf(Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident + " happened");
    }
}

I would expect that the Pager and Rota will have dependencies of their own. What if we want to construct this ourselves? It’s still fairly straightforward, and not much more verbose to do explicitly in Java.

public static void main(String... args) {
    IncidentNotifier notifier = new IncidentNotifier(
        new EmailPager("smtp.example.com"),
        new ConfigFileRota(new File("/etc/my.rota"))
    );
}

The advantage of this over automatically injecting dependencies with a framework, via @Inject annotations or an XML configuration, is that doing it manually allows the compiler to warn us about invalid configuration. Omitting one of the constructor arguments is a compile failure. Passing a dependency that does not satisfy the interface is also a compile failure. We find out about the problem without having to run the application.
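For example, forgetting a dependency now fails at compile time rather than at application startup:

// Compile failure - the Rota argument is missing
IncidentNotifier broken = new IncidentNotifier(new EmailPager("smtp.example.com"));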

Let’s increase the complexity slightly. Suppose we want to re-use the ConfigFileRota instance within several objects that require access to the Rota. We can simply extract it as a variable and refer to it as many times as we wish.

public static void main(String... args) {
    Rota rota = new ConfigFileRota(new File("/etc/my.rota"));
    Pager pager = new EmailPager("smtp.example.com");
    IncidentNotifier incidentNotifier = new IncidentNotifier(
        pager,
        rota
    );
    OnCallChangeNotifier changeNotifier = new OnCallChangeNotifier(
        pager, 
        rota
    );
}

Now this will of course get very long when we have a significant amount of code to wire up, but I don’t see this as an argument against doing the wiring manually in code. The wiring of objects is just as much code as the implementation.

There is behaviour emergent from the way in which we wire up object graphs. Behaviour we might wish to test, and have as much as possible checked by the compiler.

Wanting this configuration to be separated from code into configuration files to make it easier to change may be a sign that you are not able to release code often enough. There is little need for configuration outside your deployable artifact if you are practising Continuous Deployment.

Remember, we have the full capabilities of the language to organise the resulting wiring code. We can create classes that wire up conceptually linked objects, or modules.
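For example, a minimal sketch of such a module, grouping the notification wiring from the previous snippet:

class NotificationModule {
    private final Rota rota = new ConfigFileRota(new File("/etc/my.rota"));
    private final Pager pager = new EmailPager("smtp.example.com");

    IncidentNotifier incidentNotifier() {
        return new IncidentNotifier(pager, rota);
    }

    OnCallChangeNotifier changeNotifier() {
        return new OnCallChangeNotifier(pager, rota);
    }
}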

Method-Reference Providers

Now suppose we want to include the notification time in the page. We might do something like this:

pager.page(onCall, "Oh noes, " + incident + " happened at " + new DateTime());

Only now it is hard to test, because the time in the message will change each time the tests run. Previously we might have approached this problem by creating a factory type for time, maybe a Clock type. However, in Java 8 we can just use a constructor method reference, which significantly reduces the boilerplate.

IncidentNotifier notifier = new IncidentNotifier(
    new EmailPager("smtp.example.com"),
    new ConfigFileRota(new File("/etc/my.rota")),
    DateTime::new
);
 
class IncidentNotifier {
    final Rota rota; 
    final Pager pager;
    final Supplier<DateTime> clock;
 
    IncidentNotifier(Pager pager, Rota rota, Supplier<DateTime> clock) {
        this.pager = pager;
        this.rota = rota;
        this.clock = clock;
    }
 
    void notifyOf(Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident + " happened at " + clock.get());
    }
}

Test Doubles

It’s worth pointing out at this point how easy Java 8 makes it to replace this kind of dependency with a test double. If your collaborators are single-method interfaces then we can cleanly stub out their behaviour in tests without using a mocking framework.

Here’s a test for the above code that asserts that it invokes the page method and also checks the argument – both in the same test to simplify the example. The only magic is the two-line static method on the exception. I have used no dependencies other than JUnit.

@Test(expected=ExpectedInvocation.class)
public void should_notify_me_when_I_am_on_call() {
    DateTime now = new DateTime();
    Person benji = person("benji");
    Rota rota = regardlessOfTeamItIs -> benji;
    Incident incident = incident(team("a team"), "some incident");
 
    Pager pager = (person, message) -> ExpectedInvocation.with(() ->
        assertEquals("Oh noes, some incident happened at " + now, message)
    );
 
    new IncidentNotifier(pager, rota, () -> now).notifyOf(incident);
}
static class ExpectedInvocation extends RuntimeException{
    static void with(Runnable action) {
        action.run();
        throw new ExpectedInvocation();
    }
}

As you can see, the stubbings are quite concise thanks to most collaborators being single-method interfaces. This probably isn’t going to remove your need for Mockito or JMock, but stubbing with lambdas is handy where it works.

Partial Application

You might have noticed that the dependencies we inject in the constructor could equally be passed to our notify method, like this. It can even be a static method in this case.

Incident incident = incident(team("team name"), "incident name");
FunctionalIncidentNotifier.notifyOf(
    new ConfigFileRota(new File("/etc/my.rota")),
    new EmailPager("smtp.example.com"),
    DateTime::new,
    incident
);
 
class FunctionalIncidentNotifier {
    public static void notifyOf(Rota rota, Pager pager, Supplier<DateTime> clock, Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident  + " happened at " + clock.get());
    }
}

If we had to pass all the dependencies to every method call like this it would make our code difficult to follow, but if we structure the code this way we can partially apply the function, giving us a function with all the dependencies satisfied.

Incident incident = incident(team("team name"), "incident name");
 
Notifier notifier = notifier(Partially.apply(
    FunctionalIncidentNotifier::notifyOf,
    new ConfigFileRota(new File("/etc/my.rota")),
    new EmailPager("smtp.example.com"),
    DateTime::new,
    _
));
 
notifier.notifyOf(incident);

There’s just a couple of helpers necessary to make this work. First, a Notifier interface that can be created from a generic Consumer<T>. Static methods on interfaces come to the rescue here.

interface Notifier {
    void notifyOf(Incident incident);
    static Notifier notifier(Consumer<Incident> notifier) {
        return incident -> notifier.accept(incident);
    }
}

Then we need a way of doing the partial application. There’s no support for this built into Java as far as I am aware, but it’s trivial to implement. We just declare a method that accepts a reference to a consumer with n arguments, and also takes the arguments you wish to apply. I am using an underscore to represent missing values that are still unknown. We could add overloads to allow other parameters to be unknown, as sketched after the following code.

class Partially {
    static <T,U,V,W> Consumer<W> apply(
        QuadConsumer<T,U,V,W> f, 
        T t, 
        U u, 
        V v, 
        MatchesAny _) {
            return w -> f.apply(t,u,v,w);
    }
}
interface QuadConsumer<T,U,V,W> {
    void apply(T t, U u, V v, W w);
}
class MatchesAny {
    public static MatchesAny _;
}
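Overloads for other unknown positions follow the same shape; for example, to leave the first argument unapplied (a sketch):

static <T,U,V,W> Consumer<T> apply(
    QuadConsumer<T,U,V,W> f,
    MatchesAny _,
    U u,
    V v,
    W w) {
        return t -> f.apply(t, u, v, w);
}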

Mixins

Going back to the object-oriented approach, suppose we want our Pager to send emails in production but just print messages to the console when running on our workstation. We can create two implementations of Pager, an EmailPager and a ConsolePager, but we want to pick an implementation based on what environment we are running in. We can do this by creating an EnvironmentAwarePager which decides which implementation to use at runtime.

class EnvironmentAwarePager implements Pager {
    final Pager prodPager;
    final Pager devPager;
 
    EnvironmentAwarePager(Pager prodPager, Pager devPager) {
        this.prodPager = prodPager;
        this.devPager = devPager;
    }
 
    public void page(Person onCall, String message) {
        if (isProduction()) prodPager.page(onCall, message);
        else devPager.page(onCall, message);
    }
 
    boolean isProduction() { ... }
}

But what if we want to test the behaviour of this environment aware pager, to be sure that it calls the production pager when in production? To do this we need to extract the responsibility of checking whether we are running in the production environment. We could make it a collaborator, but there is another option (which I’ll use for want of a better example) – we can mix in functionality using an interface.

interface EnvironmentAware {
    default boolean isProduction() {
        // We could check for a machine manifest here
        return false;
    }
}

Now our Pager becomes

class EnvironmentAwarePager implements Pager, EnvironmentAware {
    final Pager prodPager;
    final Pager devPager;
 
    EnvironmentAwarePager(Pager prodPager, Pager devPager) {
        this.prodPager = prodPager;
        this.devPager = devPager;
    }
 
    public void page(Person onCall, String message) {
        if (isProduction()) prodPager.page(onCall, message);
        else devPager.page(onCall, message);
    }
}

We can use isProduction without implementing it.

Now let’s write a test that checks that the production pager is called in the production environment. Here we extend EnvironmentAwarePager, overriding its production-awareness by mixing in the AlwaysInProduction interface. We stub the dev pager to fail the test because we don’t want it to be called, and stub the prod pager to fail the test if it is not invoked.

@Test(expected = ExpectedInvocation.class)
public void should_use_production_pager_when_in_production() {
    class AlwaysOnProductionPager 
        extends EnvironmentAwarePager 
        implements AlwaysInProduction {
        AlwaysOnProductionPager(Pager prodPager, Pager devPager) {
            super(prodPager, devPager);
        }
    }
 
    Person benji = person("benji");
    Pager prod = (person, message) -> ExpectedInvocation.with(() -> {
        assertEquals(benji, person);
        assertEquals("hello", message);
    });
    Pager dev = (person, message) -> fail("Should have used the prod pager");
 
    new AlwaysOnProductionPager(prod, dev).page(benji, "hello");
 
}
 
interface AlwaysInProduction extends EnvironmentAware {
    default boolean isProduction() { return true; }
}

Using mixins here is a bit contrived, but I struggled to come up with an example that was both not contrived and sufficiently brief to illustrate the point.

Cake Pattern

I mention mixins partly because they lead on to the Cake Pattern.

The cake pattern can make it a bit easier to wire up more complex graphs, when compared to manual constructor injection. While it does add quite a lot of complexity in itself, we do at least retain a lot of compile-time checking: failing to satisfy a dependency will be a compilation failure.

Here’s what an example application using cake might look like. It sends a page about an incident specified with command line args.

public class Example {
    public static void main(String... args) {
        ProductionApp app = () -> asList(args);
        app.main();
    }
}
 
interface ProductionApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        EmailPagerProvider,
        ConfigFileRotaProvider,
        DateTimeProvider {}
 
interface MonitoringApp extends
        Application,
        IncidentNotifierProvider {
    default void main() {
        String teamName = args().get(0);
        String incidentName = args().get(1);
        notifier().notifyOf(incident(team(teamName), incidentName));
    }
}
 
interface Application {
    List<String> args();
}

Here we’re using interfaces to specify the components we wish to use in our application. We have a MonitoringApp interface that specifies the entry point behaviour. It sends a notification using the command line arguments. We also have a ProductionApp interface that specifies which components we want to use in this application.

If we want to replace a component – for example, to print messages to the console instead of sending an email when running on our workstation – it’s just a matter of swapping that component:

interface WorkstationApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        ConsolePagerProvider, // This component is different
        ConfigFileRotaProvider,
        DateTimeProvider  {}

This is checked at compile time. If we were to not specify a PagerProvider at all, we’d get a compile failure in our main method when we try to instantiate the WorkstationApp. Admittedly it’s not a very informative message if you don’t know what’s going on (WorkstationApp is not a functional interface, multiple non-overriding abstract methods found in com.benjiweber.WorkstationApp).

For each thing that we want to inject, we declare a provider interface, which can itself rely on other providers, like so:

interface DefaultIncidentNotifierProvider extends 
        IncidentNotifierProvider, 
        PagerProvider, 
        RotaProvider, 
        ClockProvider {
    default IncidentNotifier notifier() { 
        return new IncidentNotifier(pager(), rota(), clock()); 
    }
}

PagerProvider has multiple mixin-able implementations

interface EmailPagerProvider extends PagerProvider {
    default Pager pager() { return new EmailPager("smtp.example.com"); }
}
interface ConsolePagerProvider extends PagerProvider {
    default Pager pager() { 
        return (Person onCall, String message) -> 
            System.out.println("Stub pager says " + onCall + " " + message); 
    }
}

As I mentioned above, this pattern starts to add too much complexity for my liking, however neat it may be. Still, it can be a useful technique to use sparingly for parts of your application where manual constructor injection is becoming tedious.

Summary

There are various techniques for doing the kinds of things that we often use DI frameworks to do, just using pure Java. It’s worth considering the hidden costs of using the framework.

The code for the examples used in this post is on Github.


This is a follow up to Pattern matching in Java, where I demonstrated pattern matching on type and structure using Java 8 features. The first thing most people asked is “does it support matching on nested structures?”

The previous approach did not, at least not without creating excessive boilerplate constructors. So here’s another approach that does.

Let’s suppose we have a nested structure representing a customer like this. Illustrated as a JavaScript object literal for clarity.

{
  firstName: "Benji",
  lastName: "Weber",
  address: {
    firstLine: {
      houseNumber: 123,
      roadName: "Some Street"
    },
    postCode: "AB123CD"
  }
}

What if we want to match customers with my name and pull out my house number, road name, and post code? With pattern matching it becomes straightforward.

First we’ll create Java types to represent it such that we can create the above representation like:

Customer customer = customer(
    "Benji", 
    "Weber", 
    address(
        firstLine(123,"Some Street"), 
        "AB123CD"
    )
);

I’ll use the value object pattern I described previously to create these.

Now we just need a way to build up a structure to match against, which retains the properties we want to extract for the pattern matching we previously implemented.

Here’s what we can do. We use underscores to indicate properties we wish to extract rather than match. All other properties are matched.

// Using the customer instance from above
String address = customer.match()
    .when(a(Customer::customer).matching(
        "Benji",
        "Weber",
        an(Address::address).matching(
            a(FirstLine::firstLine).matching(_,_),
            _
        )
    )).then((houseNo, road, postCode) -> houseNo + " " + road + " " + postCode)
    .otherwise("unknown");
 
assertEquals("123 Some Street AB123CD", address);

So how does it work? Well, we get the .match() method by implementing the Case interface on our value types. This interface has a default method, match(), which returns a match builder we can use to specify our cases.

Last time we implemented overloads of the when(..) method such that we could match on types or instances. Now we can re-use that work and add overloads that take a Match reference, e.g.

public <A,B> BiMatchConstructorBuilder<T, A, B> when(BiMatch<T, A, B> matchRef) {
// Here we can know we are matching for missing properties of types A and B
// So we can expect a function to consume these properties that accepts an A and B
    return new BiMatchConstructorBuilder<T, A, B>() {
        public <R> MatchBuilderR<T, R> then(BiFunction<A, B, R> f) {
            // ...
        }
    };
}

The matchRef can capture method references to the properties we want to extract, and then we can apply these method references to the object we are matching against to check for a match.

Lastly, we simply add a couple of static methods, a(constructor) and an(constructor), for building up our matches. These return a builder that accepts either the constructor arguments, or a wildcard underscore to indicate we want to match and extract that value.

Here are some more examples to help illustrate the idea.


The absence of tuples in Java is often bemoaned. Tuples are fixed-size collections of values of possibly different types. They might look like (“benji”, 8, “weber”) – a 3-tuple of a string, a number, and another string. People often wish to use tuples to return more than one value from a method.

Often this is a smell that we need to create a domain type that is more meaningful than a tuple, but in some cases tuples are actually the best tool for the job.

We can pretty easily create our own tuple types, and now that we have Lambdas we can consume them pretty easily as well. There is a little unnecessary verbosity left over from type signatures, but we can avoid it in most cases.

If we want to return a tuple from a method, by statically importing a “tuple” method that is overloaded for each arity of tuple we can do:

static TriTuple<String, Integer, String> me() {
    return tuple("benji",9001,"weber");
}

Other than the return type of the method it is fairly concise.

On the consuming side it would be nice to be able to use helpful names for each field in the tuple, rather than the ._1() or .one() accessors that are used in many tuple implementations.

This is easily realised thanks to lambdas. We can simply define a map method on the tuple that accepts a function with the same arity as the tuple, to transform the tuple into something else.

Using the above method now becomes

String name = me()
    .map((firstname, favouriteNo, surname) -> firstname + " " + surname);
 
assertEquals("benji weber", name);

We can even throw checked exceptions on the consuming side if we allow our map method to accept functions that throw exceptions.

String name = me()
    .map((firstname, favouriteNo, surname) -> {
        if (favouriteNo > 9000) throw new NumberTooBigException();
        return firstname;
    });

Of course we’d also like identical tuples to be equal to each other, so that this is true:

assertEquals(tuple("hello","world"), tuple("hello", "world"));

The implementation looks like this, re-using the value-object pattern I described previously:

public interface Tuple {
    //...
    static <A,B,C> TriTuple<A,B,C> tuple(A a, B b, C c) {
        return TriTuple.of(a, b, c);
    }
    //...
}
 
public interface TriTuple<A,B,C> {
    A one();
    B two();
    C three();
    static <A,B,C> TriTuple<A,B,C> of(A a, B b, C c) {
        abstract class TriTupleValue extends Value<TriTuple<A,B,C>> implements TriTuple<A,B,C> {}
        return new TriTupleValue() {
            public A one() { return a; }
            public B two() { return b; }
            public C three() { return c; }
        }.using(TriTuple::one, TriTuple::two, TriTuple::three);
    }
 
    default <R,E extends Exception> R map(ExceptionalTriFunction<A, B, C, R, E> f) throws E {
        return f.apply(one(),two(),three());
    }
 
    default <E extends Exception> void consume(ExceptionalTriConsumer<A,B,C,E> consumer) throws E {
        consumer.accept(one(), two(), three());
    }
}
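The ExceptionalTriFunction and ExceptionalTriConsumer used by map and consume are just tri-arity functional interfaces that declare a throws clause; roughly:

public interface ExceptionalTriFunction<A,B,C,R,E extends Exception> {
    R apply(A a, B b, C c) throws E;
}

public interface ExceptionalTriConsumer<A,B,C,E extends Exception> {
    void accept(A a, B b, C c) throws E;
}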

Browse the full code on Github.


Can you afford not to do continuous deployment?

Continuous deployment is the practice of regularly (more than daily) deploying updated software to production.

Arguments in favour of continuous deployment often focus on how it enables us to continually, regularly, and rapidly deliver value to the business, allowing us to move fast. It’s also often discussed how it reduces release risk by making deployments an everyday event – with smaller, less risky changes, which are fully automated.

I want to consider another reason.

Unforeseen Production Issues

It can be tempting to reduce the frequency of deployments in response to risk. If a deployment with a bug can result in losing a significant amount of money or catastrophic reputation damage, it’s tempting to shy away from the risk and do it less often. Why not plan to release once a month instead of every day? Then we’re only taking the risk monthly.

Let’s leave aside the many reasons why doing things less often is unlikely to reduce the risk.

There will always be things that can break your system in production that are outside your control, and are unlikely to be caught by your pre-deployment testing. If you hit one of these issues then you will need to perform a deployment to fix the issue.

Time-sensitive Bugs

How often do we run into bugs in software due to failing to anticipate some detail of times and dates? We hear stories of systems breaking on 29/02 nearly every leap year. Have you anticipated backwards time corrections from NTP? Are you sure you handle leap seconds? What about every framework and third party service you are using?

Yes, most of these can be tested for in advance with sufficiently rigorous testing, but we always miss test cases.

Time means that we cannot be sure that software that worked when it was deployed will continue to work.

Capacity Problems

If your production system is overloaded due to unforeseen bottlenecks you may need to fix them and re-deploy. Even if you do capacity planning, you may find the software needs to scale in ways that you did not anticipate when you wrote your capacity tests.

When you find an unexpected bottleneck in a production system, you’re going to need to make a change and deploy. How quickly depends on how much early warning your monitoring software gave you.

Third Party dependencies

If a service that you depend on goes down for an extended period of time, suddenly you need to change your system to not rely on it.

Infrastructure

Do you handle all the infrastructure related issues that can go wrong? What happens if DNS is intermittently returning incorrect results? What if there’s a routing problem between two services in your system?

Third Party Influence

“Noisy Neighbours” in virtualised or shared hosting. Third party JavaScript in your page overwriting your variables or event listeners. In most environments, you can be affected by others over whom you have no control.

Response

Monitor Monitor Monitor

We cannot foresee all production issues and guard against them in our deployment pipeline. Even for those we could, in many cases the cost would not be warranted by the benefit.

Any of the above factors (and more) can cause your system to stop meeting its acceptance criteria at some point after it has been deployed.

The first response to this fact is that we need to monitor everything important in production. This could mean running your automated acceptance tests against your production environment. You can do this both for “feature” and “non-functional” requirement acceptance tests such as capacity.

If you have end-to-end feature tests for your system, why not run them against your production app? If you’re worried about cluttering production with fake test data, you can build in a cleanup mechanism, or a means of excluding your test data from affecting real users.

If you have capacity tests for your system, why not run them against your production app? If you’re worried about the risk that it won’t cope, at least you can control when the capacity tests run. Wouldn’t you rather your system failed during working hours due to a capacity test which you could just turn off, rather than failing in the middle of the night due to a real spike in traffic?

Be ready to deploy at any point

Tying this back to continuous delivery, I think our second response should be to be ready to deploy at any point.

Some of these potential production issues you can prevent with sufficient testing, but will you have thought of every possible case? Some of these issues can be mitigated in production by modifying infrastructure and/or application configuration – but:

  1. Should you be modifying those independently of the application? They’re just as likely to break your production system as code changes
  2. Not everything will (or should be) configurable

At the point you find yourself needing to do an emergency bug fix on production, you have two options if you have not been practising continuous delivery:

  1. You pull up the branch/tag of code last deployed to production a few weeks ago, make your changes, and deploy.
  2. You risk deploying what you have now: weeks of changes which have not been deployed, any of which may introduce new, subtle issues.

You may not even currently be in an integrable state. If you were not expecting to deploy, why should you be? If you’re not, then you need to:

  1. Identify quickly what code is actually in production.
  2. Context-switch back to the code-base as it was then. Your mental model of the code will be as it is now, not as it was a few weeks ago.
  3. Create re-integration pain for yourself when you have to merge your bugfix with the (potentially many) changes that have been made since. This may be especially fun if you have performed significant refactorings.
  4. Perform a risky release. You haven’t deployed for a few weeks. Lots could have broken since then. Are your acceptance tests still working? Or do they have some of the same problems as your production environment? If you have fixed a time-sensitive bug in your tests since your last deploy you will now have to back-port that fix to the branch you are now making your changes on.

Summary

We’ll never be able to foresee everything that will happen in production. It’s even more important to be able to notice and react quickly to production problems than to stop broken software getting into production.


Here’s a neat trick to transform JSON into Java objects that implement an interface with the same structure.

Java 8 comes with Nashorn – a JavaScript runtime that has a number of extensions.

One of these extensions allows you to pass a JavaScript object to a constructor of an interface to anonymously implement that interface. So for example we can do

jjs> var r = new java.lang.Runnable({ run: function() print('hello') });
jjs> r.run();
hello

n.b. the above uses a JavaScript 1.8 lambda expression, which is supported by Nashorn and allows us to omit the braces and “return” keyword in the function definition. We could also have omitted the parentheses in the Runnable constructor call in this case.

Now let’s suppose we have a JSON file as follows

{
  "firstname":"Some",
  "lastname":"One",
  "petNames":["Fluffy","Pickle"],
  "favouriteNumber":5
}

and we want to treat it as a Java interface as follows:

public interface Person {
  String firstname();
  String lastname();
  List<String> petNames();
  int favouriteNumber();
}

It’s trivial to convert the JSON to a JavaScript object with JSON.parse.

The only conceptual difference between the JavaScript object and the Java Interface is that in the interface the properties such as firstname and lastname are methods not literal strings. It’s easy to convert a JavaScript object such that each value is wrapped in a function definition.

We just need to define a function that iterates through each value on our JavaScript object and wraps each in a function.

// A function that takes a value and returns a function 
// which when invoked returns the original value
function createFunc(value) function() value 
// Wrap each property in a function
function iface(map) {
  var ifaceImpl = {}
  for (key in map) ifaceImpl[key] = createFunc(map[key]);
  return ifaceImpl;
}

Applying it to our JS object gives us the following

{
  "firstname":function() "Some",
  "lastname":function() "One",
  "petNames":function() ["Fluffy","Pickle"],
  "favouriteNumber":function() 5
}

This now satisfies our Java interface, so we can just pass it to the constructor. Putting it all together:

var Person = Packages.Person; // Import Java Person Interface
var someoneElse = new Person(iface({ 
  "firstname":"Some",
  "lastname":"One",
  "petNames":["Fluffy","Pickle"],
  "favouriteNumber":5
}));
 
Person.print(someoneElse); // Pass to a Java method that accepts a Person instance.
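The print method used above is just an ordinary Java method that accepts a Person; nothing special is needed on the Java side. A sketch of what it might look like as a static method on the interface:

import java.util.List;

public interface Person {
  String firstname();
  String lastname();
  List<String> petNames();
  int favouriteNumber();

  static void print(Person person) {
    System.out.println(person.firstname() + " " + person.lastname()
        + " has pets " + person.petNames()
        + " and favourite number " + person.favouriteNumber());
  }
}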

If our JSON were originally in a text file, we could use Nashorn’s scripting extensions to read the text file and convert it to an interface in the same way. This can be useful for bootstrapping a Java app without a main method – you can read a JSON config file, convert it to a typed Java interface, and start the app. This can free the Java app from dealing with JSON or argument parsing:

#!/usr/bin/jjs
function createFunc(value) function() value
function iface(map) {
  var ifaceImpl = {}
  for (key in map) ifaceImpl[key] = createFunc(map[key]);
  return ifaceImpl;
}
var Settings = Packages.Settings;
var MyApplication = Packages.MyApplication;
 
// Backticks in Nashorn scripting mode works like in Bash
var settings = new Settings(iface(JSON.parse(`cat app_config.json`))); 
MyApplication.start(settings);

public interface Settings {
  String hostname();
  int maxThreads();
}
public class MyApplication {
  public static void start(Settings settings) {
    //
  }
}

One small annoyance with this is that jjs (the Nashorn executable) does not seem able to accept a classpath parameter when used with a shebang #!/usr/bin/jjs. So for now you have to execute the JavaScript with:

$ jjs -scripting -cp . ./example.js

There’s a complete example here.


One of the language features many people miss in Java is pattern matching, and/or an equivalent of Scala case classes.

In Scala we can match on types and structure. We have “switch” in Java, but it’s much less powerful and it can’t even be used as an expression. It’s possible to simulate matching in Java with some degree of success.

Matching on Type

Here’s an example matching on type. Our description method accepts a shape which can be one of three types – Rectangle, Circle or Cube. It is a compile failure to not specify a handler for each of Rectangle/Circle/Cube.

@Test
public void case_example() {
    Shape cube = Cube.create(4f);
    Shape circle = Circle.create(6f);
    Shape rectangle = Rectangle.create(1f, 2f);
 
    assertEquals("Cube with size 4.0", description(cube));
    assertEquals("Circle with radius 6.0", description(circle));
    assertEquals("Rectangle 1.0x2.0", description(rectangle));
}
 
public String description(Shape shape) {
    return shape.match()
        .when(Rectangle.class, rect -> "Rectangle " + rect.width() + "x" + rect.length())
        .when(Circle.class, circle -> "Circle with radius " + circle.radius())
        .when(Cube.class, cube -> "Cube with size " + cube.size());
}
interface Shape extends Case3<Rectangle, Circle, Cube> { }
interface Cube extends Shape {
    float size();
    static Cube create(float size) {
        return () -> size;
    }
}
//...

If Java didn’t already have an Optional type we could use this to implement our own, although type-erasure causes us to need a hack to match on the raw (unparameterised) type – to convert from a Class<Some<String>> to a Class<Some> for matching.

@Test
public void some_none_match_example() {
    Option<String> exists = some("hello");
    Option<String> missing = none();
 
    assertEquals("hello", describe(exists));
 
    assertEquals("missing", describe(missing));
}
 
private String describe(Option<String> option) {
    return option.match()
        .when(erasesTo(Some.class), some -> some.value())
        .when(erasesTo(None.class), none -> "missing");
}
interface Option<T> extends Case2<Some<T>, None<T>>{ }

Matching on Structure

Another thing we might want to do is match on particular values. This means we’re unable to ensure that all values are handled, so we need to provide a default case.

Here’s an example. The underscore character is used to denote any value, like in Scala.

@Test
public void constructor_matching_any() {
    Person so = person("Some", "One");
    Person an = person("Ann", "Other");
 
    String another = an.match()
        .when(person("Some", _), p -> "someone")
        .when(person(_, "Other"), p -> "another")
        ._("Unknown Person");
 
    assertEquals("another", another);
}

A more practical example might be handling command line arguments

@Test
public void parse_arguments_example() {
    applyArgument(arg("--help", "foo"));
    assertEquals("foo", this.helpRequested);
 
    applyArgument(arg("--lang", "English"));
    assertEquals("English", this.language);
 
    applyArgument(arg("--nonsense","this does not exist"));
    assertTrue(badArg);
}
 
private void applyArgument(Argument input) {
    input.match()
        .when(arg("--help", _), arg -> printHelp(arg.value()))
        .when(arg("--lang", _), arg -> setLanguage(arg.value()))
        ._(arg -> printUsageAndExit());
}

Decomposition

With a bit more effort we can even match on the structure of the type and pull out the bits we are interested in. Here’s an example. We match on certain attributes of Person and then consume the other attributes in the following function to build the result.

@Test
public void decomposition_variable_items_example() {
    Person a = person("Bob", "Smith", 18);
    Person b = person("Bill", "Smith", 28);
    Person c = person("Old", "Person", 90);
 
    assertEquals("first_Smith_18", matchExample(a));
    assertEquals("second_28", matchExample(b));
    assertEquals("unknown", matchExample(c));
}
 
String matchExample(Person person) {
    return person.match()
        .when(person("Bob", _, _), (surname, age) -> "first_" + surname + "_" + age)
        .when(person("Bill", "Smith", _), age -> "second_" + age)
        ._("unknown");
}

Making it Work

To implement this we create an interface for each number of cases we want to be able to match on, parameterised by the possible types it can take. This effectively gives us the “Type A OR Type B” restriction. The interface provides a default method match() which returns a builder that lets you specify a handler for each and every case.

When evaluated, match() compares its own type to each passed-in Class<?>.

public interface Case3<T,U,V> {
    default MatchBuilderNone<T,U,V> match() {
        return new MatchBuilderNone<T, U, V>() {
            public <R> MatchBuilderOne<T, U, V, R> when(Class<T> clsT, Function<T, R> fT) {
                return (clsU, fU) -> (clsV, fV) -> {
                    if (clsT.isAssignableFrom(Case3.this.getClass())) return fT.apply((T)Case3.this);
                    if (clsU.isAssignableFrom(Case3.this.getClass())) return fU.apply((U)Case3.this);
                    if (clsV.isAssignableFrom(Case3.this.getClass())) return fV.apply((V)Case3.this);
 
                    throw new IllegalStateException("Match failed");
                };
            }
        };
    }
}

For the matching on parts of a value to work (as in the argument parsing example) we need equals/hashCode to work. This would be a lot easier with actual value type support.

We can extend the base Case interface to allow matching on values as well as classes, and then, building on the autoEquals/hashCode I explained previously, add additional constructor functions that create value objects which ignore specific fields for equality checking.

This is still far more verbose than it should have to be, and it requires adding suitable constructors to each value type you want to use match() with, but I think it’s quite neat.

Here’s what a Person type that’s matchable on either firstname or lastname or both looks like.

interface Person extends Case<Person> {
    String firstname();
    String lastname();
 
    static Person person(String firstname, String lastname) {
        return person(firstname, lastname, Person::firstname, Person::lastname);
    }
    static Person person(String firstname, MatchesAny lastname) {
        return person(firstname, null, Person::firstname);
    }
    static Person person(MatchesAny firstname, String lastname) {
        return person(null, lastname, Person::lastname);
    }
    static Person person(String firstname, String lastname, Function<Person, ?>... props) {
        abstract class PersonValue extends Value<Person> implements Person {}
        return new Person() {
            public String firstname() { return firstname; }
            public String lastname() { return lastname; }
        }.using(props);
    }
}

Decomposition uses pretty much the same approach, but now the factory method has to record the properties that we want to capture, as follows. .missing() is overloaded with different numbers of fields to return different types. This allows our .when() match method to also be overloaded based on how many fields we are extracting, and therefore to accept a lambda with the correct number of parameters, i.e. when(OneMissing, Function), when(TwoMissing, BiFunction).

static OneMissing<Person, String> person(String firstname, MatchesAny lastname, Integer age) {
    return person(firstname, null, age, Person::firstname, Person::age)
        .missing(Person::lastname);
}

Further reading

More examples and implementation on github.