Filed under C#, Java.

A feature C# developers often miss in Java is yield return.

It can be used to create Iterators/Generators easily. For example, we can print the infinite series of positive numbers like so:

public static void Main()
{
    foreach (int i in positiveIntegers()) 
    {
        Console.WriteLine(i);    
    }
}
 
public static IEnumerable<int> positiveIntegers()
{
    int i = 0;
    while (true) yield return ++i;
}

Annoyingly, I don’t think there’s a good way to implement this in Java, because it relies on compiler transformations.

If we want to use it in Java there are three main approaches I am aware of, which have various drawbacks.

The compile-time approach means your code can’t be compiled with javac alone, which is a significant disadvantage.

The bytecode transformation approach means magic going on that you can’t easily understand by following the code. I’ve been burnt by obscure problems with aspect-oriented-programming frameworks using bytecode manipulation enough times to avoid it.

The threads approach has a runtime performance cost of extra threads. We also need to dispose of the created threads or we will leak memory.

I don’t personally want the feature enough to put up with the drawbacks of any of these approaches.

That being said, if you were willing to put up with one of these approaches, could we at least make it look cleaner in our code?

I’m going to ignore the lombok/compile-time transformation approach as it allows pretty much anything.

Both the other approaches above require writing valid Java. The threads approach is particularly verbose, but there is a wrapper which simplifies it down to returning an anonymous implementation of an abstract class that provides yield / yieldBreak methods. e.g.

public Iterable<Integer> oneToFive() {
    return new Yielder<Integer>() {
        protected void yieldNextCore() {
            for (int i = 1; i < 10; i++) {
                if (i == 6) yieldBreak();
                yieldReturn(i);
            }
        }
    };
}

This is quite ugly compared to the C# equivalent. We can make it cleaner now that we have lambdas, but we can’t use the same approach as above.

I’m going to use the threading approach for this example as it’s easier to see what’s going on.

Let’s say we have an interface Foo which extends Runnable, and provides an additional default method.

interface Foo extends Runnable {
    default void bar() {}
}

If we create an instance of this as an anonymous inner class we can call the bar() method from our implementation of run();

Foo foo = new Foo() {
    public void run() {
        bar();
    }
};

However, if we create our implementation with a lambda this no longer compiles

Foo foo = () -> {
    bar(); // java: cannot find symbol. symbol: method bar()
};

This means we’ll have to take a different approach. Here’s something we can do, that is significantly cleaner thanks to lambdas.

public Yielderable<Integer> oneToFive() {
    return yield -> {
        for (int i = 1; i < 10; i++) {
            if (i == 6) yield.breaking();
            yield.returning(i);
        }
    };
}

How can this work? Note the change of the method return type from Iterable to a new interface, Yielderable. This means we can return any structurally equivalent lambda. Since we want the result to behave as an Iterable, Yielderable will need to extend Iterable.

public interface Yielderable<T> extends Iterable<T> { /* ... */ }

The Iterable interface already has an abstract method iterator(), which means we’ll need to implement that if we want to add a new method to build our yield statement, while still remaining a lambda-compatible single method interface.

We can add a default implementation of iterator() that executes a yield definition defined by a new abstract method.

public interface Yielderable<T> extends Iterable<T> {
    void execute(YieldDefinition<T> builder);
 
    default Iterator<T> iterator() {  /* ... */  }
}

We now have a single abstract method, still compatible with a lambda. It accepts one parameter – a YieldDefinition. This means we can call methods defined on YieldDefinition in our lambda implementation. The default iterator() method can create an instance of a YieldDefinition and invoke the execute method for us.

public interface Yielderable<T> extends Iterable<T> {
    void execute(YieldDefinition<T> builder);
 
    default Iterator<T> iterator() {
        YieldDefinition<T> definition = new YieldDefinition<>();
        execute(definition);
        //... more implementation needed.
    }
}

Our YieldDefinition class provides the returning(value) and breaking() methods to use in our yield definition.

class YieldDefinition<T> {
    public void returning(T value) { /* ... */ }
 
    public void breaking() { /* ... */ }
}
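
To fill in the “more implementation needed” part, here is a minimal sketch of one way the threaded approach could work: a producer thread runs the yield definition and hands each value to the consuming iterator through a SynchronousQueue, while breaking() unwinds the producer with an exception. This is my own illustration rather than the linked implementation, and all names beyond execute/returning/breaking are invented.

import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.SynchronousQueue;

public interface Yielderable<T> extends Iterable<T> {
    void execute(YieldDefinition<T> builder);

    default Iterator<T> iterator() {
        YieldDefinition<T> definition = new YieldDefinition<>();
        Thread producer = new Thread(() -> {
            try {
                execute(definition);
            } catch (YieldDefinition.BreakException expected) {
                // yield.breaking() was called - fall through and signal completion
            }
            definition.signalComplete();
        });
        producer.setDaemon(true); // an abandoned iterator shouldn't keep the JVM alive
        producer.start();
        return definition.valueIterator();
    }
}

class YieldDefinition<T> {
    private final SynchronousQueue<Message<T>> handoff = new SynchronousQueue<>();

    public void returning(T value) {
        try {
            handoff.put(new Message<>(value, false)); // blocks until the consumer takes it
        } catch (InterruptedException e) {
            throw new BreakException();
        }
    }

    public void breaking() {
        throw new BreakException(); // unwinds the producer thread's stack
    }

    void signalComplete() {
        try {
            handoff.put(new Message<>(null, true));
        } catch (InterruptedException ignored) {}
    }

    Iterator<T> valueIterator() {
        return new Iterator<T>() {
            private Message<T> next;

            public boolean hasNext() {
                if (next == null) {
                    try {
                        next = handoff.take();
                    } catch (InterruptedException e) {
                        throw new IllegalStateException(e);
                    }
                }
                return !next.done;
            }

            public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                T value = next.value;
                next = null;
                return value;
            }
        };
    }

    static class BreakException extends RuntimeException {}

    private static class Message<T> {
        final T value;
        final boolean done;
        Message(T value, boolean done) { this.value = value; this.done = done; }
    }
}

This sketch also demonstrates the drawback discussed above: if a consumer walks away from the iterator part-way through, the producer thread stays parked on the queue until the JVM exits.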

Now Java can infer the type of the lambda parameter, allowing us to call those methods in the lambda body. You should even get code completion in your IDE of choice.

I have made a full implementation of the threaded approach using the above lambda definition style, if you are interested.

There are some more examples of what is possible, and the full implementation is available for reference.

Filed under Java.

The Java Streams API is lovely, but there are a few operations that one repeats over and over again which could be easier.

One example of this is collecting to a List.

List<String> input = asList("foo", "bar");
List<String> filtered = input
  .stream()
  .filter(s -> s.startsWith("f"))
  .collect(Collectors.toList());

There are a few other annoyances too. For example, I find myself using flatMap almost exclusively with methods that return a Collection, not a Stream, which means we have to do

List<String> result = input
  .stream()
  .flatMap(s -> Example.threeOf(s).stream())
  .collect(Collectors.toList());
 
//...
 
public static List<String> threeOf(String input) {
  return asList(input, input, input);
}

It would be nice to have a convenience method to allow us to use a method reference in this very common scenario:

List<String> result = input
  .stream()
  .flatMap(Example::threeOf) // Won't compile, threeOf returns a collection, not a stream 
  .collect(Collectors.toList());

Benjamin Winterberg has written a good guide to getting IntelliJ to generate these for us. But we could extend the API ourselves.

Extending the Streams API is a little tricky as it is chainable, but it is possible. We can use the Forwarding Interface Pattern to add methods to List, but we want to add a new method to Stream.

What we can end up with is

EnhancedList<String> input = () -> asList("foo","bar");
List<String> filtered = input
  .stream()
  .filter(s -> s.startsWith("f"))
  .toList();

To do this we can first use the Forwarding Interface Pattern to create an enhanced Stream with our toList method, starting with a ForwardingStream:

interface ForwardingStream<T> extends Stream<T> {
 
  Stream<T> delegate();
 
  default Stream<T> filter(Predicate<? super T> predicate) {
    return delegate().filter(predicate);
  }
 
  // Other methods omitted for brevity
 
}
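
The omitted methods all follow the same delegating shape – for instance (my sketch, not the post’s exact code):

  // further members of ForwardingStream<T>, following the same pattern:
  default <R> Stream<R> map(Function<? super T, ? extends R> mapper) {
    return delegate().map(mapper);
  }
 
  default Stream<T> limit(long maxSize) {
    return delegate().limit(maxSize);
  }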

Now, since Stream provides a chainable API with methods that return Streams, when we implement our EnhancedStream we need to change the subset of methods that return a Stream to return our EnhancedStream instead.

interface EnhancedStream<T> extends ForwardingStream<T> {
 
  default EnhancedStream<T> filter(Predicate<? super T> predicate) {
    return () -> ForwardingStream.super.filter(predicate);
  }
 
  // Other methods omitted for brevity
 
}

Note that the filter method already exists on Stream and ForwardingStream, but as Java supports covariant return types we can override it and change the return type to a more specific type (in this case from Stream to EnhancedStream).

You may also notice the lambda returned from the filter() method. Since ForwardingStream is a single method interface with just a Stream delegate() method, it is compatible with a lambda that supplies a delegate Stream – equivalent to a Supplier. EnhancedStream extends ForwardingStream and doesn’t declare any additional abstract methods, so we can return lambdas from each method that needs to return a Stream, delegating to the ForwardingStream. The ForwardingStream.super.filter() syntax allows us to call the implementation already defined in ForwardingStream explicitly.
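
To make that concrete: because delegate() is the only abstract method left, any lambda supplying a Stream is a valid ForwardingStream or EnhancedStream. A hypothetical snippet:

ForwardingStream<String> forwarding = () -> Stream.of("foo", "bar");
EnhancedStream<String> enhanced = () -> Stream.of("foo", "bar");
enhanced.filter(s -> s.startsWith("f")); // returns an EnhancedStream<String>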

Now we can add our additional behaviour to the EnhancedStream. Let’s add the toList() method, and a new flatMapCollection.

interface EnhancedStream<T> extends ForwardingStream<T> {
  default List<T> toList() {
    return collect(Collectors.toList());
  }
 
  default <R> EnhancedStream<R> flatMapCollection(Function<? super T, ? extends Collection<? extends R>> mapper) {
    return flatMap(mapper.andThen(Collection::stream));
  }
 
  // Other methods omitted for brevity
 
}
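
For flatMapCollection to compile as declared, flatMap itself must be among the overridden methods that return an EnhancedStream. A sketch of what the omitted overrides presumably look like, following the same pattern as filter (this assumes ForwardingStream declares the corresponding delegating defaults):

  // further members of EnhancedStream<T> (assumed):
  default <R> EnhancedStream<R> map(Function<? super T, ? extends R> mapper) {
    return () -> ForwardingStream.super.map(mapper);
  }
 
  default <R> EnhancedStream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper) {
    return () -> ForwardingStream.super.flatMap(mapper);
  }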

Finally, we’ll need to hook up the Stream to our List. We can use the same technique of a forwarding interface which overrides a method and specifies a more specific return type.

 
interface EnhancedList<T> extends ForwardingList<T> {
  default EnhancedStream<T> stream() {
    return () -> ForwardingList.super.stream();
  }
}
 
interface ForwardingList<T> extends List<T> {
  List<T> delegate();
 
  default int size() {
    return delegate().size();
  }
 
  // All the other methods omitted for brevity 
}

So now we can put it all together. EnhancedList is a single method interface, so it’s compatible with any existing List in our code: we just wrap it in a lambda, and then we can flatMapCollection() and toList(). The following prints “foo” three times, followed by “bar” three times.

EnhancedList<String> input = () -> asList("foo", "bar");
List<String> result = input
  .stream()
  .flatMapCollection(Example::threeOf) 
  .toList();
 
result.forEach(System.out::println);
 
// ...
 
public static List<String> threeOf(String input) {
  return asList(input, input, input);
}

Here’s the unabbreviated code.

Filed under ContinuousDelivery, XP.

I have become increasingly convinced that there is little difference between monitoring and testing. Often we can run our automated tests against a production system with only a little effort.

We are used to listening to our automated tests for feedback about our software in the form of test-smells. If our tests are complicated, it’s often a strong indicator that our code is poorly designed and thus hard to test. Perhaps code is too tightly coupled, making it hard to test parts in isolation. Perhaps a class has too many responsibilities so we find ourselves wanting to observe and alter private state.

Doesn’t the same apply to our production monitoring? If it’s hard to write checks to ensure our system is behaving as desired in production, it’s easy to blame the monitoring tool, but isn’t it a reflection on the applications we are monitoring? Why isn’t it easy to monitor them?

Just as we test-drive our code, I like to do monitoring check-driven development for our applications. Add checks that the desired business features are working in production even before we implement them, then implement them and deploy to production in order to make the monitoring checks pass. (This does require that the same team building the software is responsible for monitoring it in production.)

As I do this more I notice that in the same way that TDD gives me feedback on code design (helping me write better code), Check-driven-development gives feedback on application design.

Why are our apps too hard to monitor? Perhaps they are too monolithic, so the monitoring tools can’t check behaviour or data buried in the middle. Perhaps they do not publish the information we need to ascertain whether they are working.

Here are 4 smells of production monitoring checks that I think give design feedback, just like their test counterparts. There are lots more.

  • Delving into Private State
  • Complexity, Verbosity, Repetition
  • Non-Determinism
  • Heisentests

Delving into Private State

Testing private methods in a class often indicates there’s another class screaming to be extracted. If something is complex enough to require its own test, but not part of the public interface of the object under test, it likely violates the single responsibility principle and should be pulled out.

Similarly, monitoring checks that peek into the internal state of an app may indicate there’s another app that could be extracted with a clearly separate responsibility that we could monitor independently.

For example, we had an application that ingested some logs, parsed them and generated some sequenced events, which then triggered various state changes.

We had thoroughly tested all of this, but we still occasionally had production issues due to unexpected input and problems with the sequence generation triggered by infrastructure-environment problems.

In response, we added monitoring that spied on the intermediate state of the events flowing through the system, and spied on the database state to keep track of the sequencing.

Monitoring that pokes into the internals of an application like this is similar to tests that use reflection to spy on private state – and similar problems arise. In this case schema changes and refactorings of the application would break the monitoring checks.

In the end we split out some of these responsibilities. We ended up with an entirely stateless app that created events from logs, a separate event sequencer, and the original app consumed the resultant events.

The result is much easier to monitor as the responsibilities are more clear. We can monitor the inputs and outputs of the event generator, both passively and actively by passing through test data in production.

Our monitoring checks are relying on the public interface that is used by the other applications so we are less likely to inadvertently break it. It’s similarly easy to monitor what the sequencer is doing, and we can ingest the events from our production system in a copy to spot problems early.

Complexity, Verbosity, Repetition

I’ve also been guilty of writing long monitoring checks that poll various systems, perform some calculations on the results, and then determine whether there is a problem based on some business logic embedded in the check.

These checks tend to be quite long, and are at risk of copy-paste duplication when someone wants to check something similar. Just as long and duplicated unit tests are a smell, so are monitoring checks. Sometimes we even want to TDD our checks because we realise they are complex.

When our tests get long and repetitive, we sometimes need to invest in improving our test infrastructure to help us write more concise tests in the business domain language. The same applies to monitoring checks – perhaps we need to invest in improving our monitoring tools if the checks are long and repetitive.

Sometimes verbose and unclear tests can be a sign that the implementation they are testing is also unclear and at the wrong abstraction level. If we have modelled concepts poorly in our code then we’ll struggle to write the tests (which are often closer to the requirements). It’s a sign we need to improve our model to simplify how it fulfils the requirements.

For example, if we had a payment amount in our code which we were representing as an integer or a currency value, but could only be positive in our context, we might end up with tests in several places that check that the number is positive and check what happens if we ended up with a negative number due to misuse.

public void deposit(int paymentAmount) { ... }
public void withdraw(int paymentAmount) { ... }
 
@Test
public void deposit_should_not_accept_negative_paymentAmount() { ... }
@Test
public void withdraw_should_not_accept_negative_paymentAmount() { ... }

We might spot this repetition and complexity in our tests and realise we need to improve our design. We could introduce a PaymentAmount concept that can only be instantiated with positive numbers and pass that around instead, removing the need to test the same thing everywhere.

class PaymentAmount { ... }
@Test
public void should_not_be_able_to_create_negative_paymentamount() { ... }
 
public void deposit(PaymentAmount amount) { ... }
public void withdraw(PaymentAmount amount) { ... }

In the same way monitoring checks that are repetitive can often be replaced with enforcement of invariants within applications themselves. Have the applications notify the monitoring system if an assumption or constraint is violated, rather than having the monitoring systems check themselves. This encapsulates the business logic in the applications and keeps checks simple.

Compare

#!/bin/bash
COUNT=$(runSql "select count(*) from payment_amounts where value < 0")
 
# Assert value of count

with

class PaymentAmount {
  public PaymentAmount(int value) {
    monitoredAssert(value > 0, "Negative payment amount observed");
    ...
  }
}
 
static void monitoredAssert(boolean condition, String message) {
  if (!condition) notifyMonitoringSystemOfFailure(message);
}

Now your monitoring system just alerts you to the published failures.

Too often I’ve written checks for things that could be database constraints or assertions in an application.

Non-Determinism

Non-deterministic tests are dangerous. They rapidly erode our confidence in a test suite. If we think a test failure might be a false alarm we may ignore errors until some time after we introduce them. If it takes multiple runs of a test suite to confirm everything is OK then our feedback loops get longer and we’re likely to run our tests less frequently.

Non-deterministic monitoring checks are just as dangerous, but they’re so common that most monitoring tools have built-in support for only triggering an incident/alert if the check fails n times in a row. When people get paged erroneously on a regular basis it increases the risk they’ll ignore a real emergency.

Non-deterministic tests or monitoring checks are often a sign that the system under test is also unreliable. We had a problem with a sporadically failing check that seemed to always fix itself. It turned out to be due to our application not handling errors from the S3 API (which happen quite frequently). We fixed our app to re-try under this scenario and the check became reliable.

Heisentests

A test or check that influences the behaviour it is trying to measure is another smell. We’ve had integration tests that ran against the same system as our performance tests and influenced the results by priming caches. We’ve also had similar issues with our production monitoring.

In one particularly fun example, we had reports that one of our webapps was slow during the night. We were puzzled, and decided to start by adding monitoring of the response times of the pages in question at regular intervals so we could confirm the users’ reports.

The data suggested there was no problem, and the users actually reported the problem fixed. It turned out that by loading these pages regularly we were keeping data in caches that normally expired during low usage periods. By monitoring the application we had changed the behaviour.

Both of these scenarios were resolved by having the application publish performance metrics that we could check in our tests. We could then query from a central metrics database for production monitoring. This way we were checking performance against our acceptance criteria and real-world user behaviour, and not influencing that behaviour.

Conclusion

Writing your production monitoring before you implement features can help you towards a better design. Look out for smelly monitoring: what is it telling you about your application design?

What other smells have you noticed from monitoring your systems in production?

Filed under Java.

A common frustration with Java is the inability to overload methods when the method signatures differ only by type parameters.

Here’s an example: we’d like to overload a method to take either a List of Strings or a List of Integers. This will not compile, because both methods have the same erasure.

class ErasureExample {
 
    public void doSomething(List<String> strings) {
        System.out.println("Doing something with a List of Strings " );
    }
 
    public void doSomething(List<Integer> ints) { 
        System.out.println("Doing something with a List of Integers " );
    }
 
}

If you delete everything in the angle brackets, the two methods are identical, which is prohibited by the spec:

public void doSomething(List<> strings) 
public void doSomething(List<> strings)

As with most Java things – if it’s not working, you’re probably not using enough lambdas. We can make it work with just one extra line of code per method.

class ErasureExample {
 
    public interface ListStringRef extends Supplier<List<String>> {}
    public void doSomething(ListStringRef strings) {
        System.out.println("Doing something with a List of Strings " );
    }
 
    public interface ListIntegerRef extends Supplier<List<Integer>> {}
    public void doSomething(ListIntegerRef ints) {
        System.out.println("Doing something with a List of Integers " );
    }
 
}

Now we can call the above as simply as the following, which will print “Doing something with a List of Strings” followed by “Doing something with a List of Integers”:

public class Example {
 
    public static void main(String... args) {
        ErasureExample ee = new ErasureExample();
        ee.doSomething(() -> asList("aa","b"));
        ee.doSomething(() -> asList(1,2));
    }
}

Using the wrapped lists inside the method is straightforward. Here we print out the length of each string and print out each integer, doubled. It will cause the above main method to print “2124”.

class ErasureExample {
 
    public interface ListStringRef extends Supplier<List<String>> {}
    public void doSomething(ListStringRef strings) {
        strings.get().forEach(str -> System.out.print(str.length()));
    }
 
    public interface ListIntegerRef extends Supplier<List<Integer>> {}
    public void doSomething(ListIntegerRef ints) {
        ints.get().forEach(i -> System.out.print(i * 2));
    }
 
}

This works because the methods now have different erasures; in fact, the method signatures contain no generics at all. The only additional requirement is prefixing each argument with “() ->” at the callsite, creating a lambda that is equivalent to a Supplier of whatever type your argument is.

Filed under Java.

What use is a method that just returns its input? Surprisingly, quite a lot. One use is as a way to convert between types.

There’s a well known trick, often used to work around Java’s terrible array literals, that you may have come across. If you have a method that takes an array as an argument

public static void foo(String[] anArray) { }

Invoking foo requires the ugly

foo(new String[]{"hello", "world"});

For some reason Java requires the redundant “new String[]” here, even though it can be trivially inferred. Fortunately we can work around this with the following method which at first glance might seem pointless.

public static <T> T[] array(T... input) {
    return input;
}

It just returns the input. However, it accepts an array of type T in the form of varargs, and returns that array. It becomes useful because Java will now infer the types and create the array cleanly. We can now call foo like so.

foo(array("hello", "world"));

That is a neat trick, but it really becomes useful in Java 8 thanks to structural typing of lambdas/method references and implicit conversions between types. Here’s another example of a method that’s far more useful than it appears at first glance. It just accepts a function and returns the same function.

public static <T,R> Function<T,R> f(Function<T,R> f) {
    return f;
}

The reason it’s useful is we can pass it any structurally equivalent method reference and have it converted to a java.util.function.Function, which provides us some useful utility methods for function composition.

Here’s an example. Let’s say we have a list of Libraries (Collection of Collection of Books)

interface Library {
    List<Book> books();
    static Library library(Book... books) {
        return () -> asList(books);
    }
}
 
interface Book {
    String name();
    static Book book(String name) { return () -> name; }
}
 
List<Library> libraries = asList(
    library(book("The Hobbit"), book("LoTR")),
    library(book("Build Quality In"), book("Lean Enterprise"))
);

We can now print out the book titles thusly

libraries.stream()
    .flatMap(library -> library.books().stream()) // Stream of libraries to stream of books.
    .map(Book::name) // Stream of names
    .forEach(System.out::println);

But that flatMap call is upsetting; everything else is using a method reference, not a lambda expression. I’d really like to write the following, but it won’t compile because flatMap requires a function that returns a Stream, rather than a function that returns a List.

libraries.stream()
    .flatMap(Library::books) // Compile Error, wrong return type.
    .map(Book::name) 
    .forEach(System.out::println);

Here’s where our method that returns its input comes in again. This compiles fine.

libraries.stream()
    .flatMap(f(Library::books).andThen(Collection::stream)) 
    .map(Book::name) 
    .forEach(System.out::println);  
 
public static <T,R> Function<T,R> f(Function<T,R> f) {
    return f;
}

This works because Library::books is equivalent to a Function<Library, List<Book>>, so passing it to the f() method implicitly converts it to that type. java.util.function.Function provides an andThen method which returns a new function which composes the two functions.

Now in this trivial example it’s actually longer to write this than the equivalent lambda, but it can be useful when combining more complex examples.

We can do the same thing with other functional interfaces. For example, to allow Predicate composition or negation.

Here we have a handy isChild() method implemented for us on Person, but we really want the inverse, an isAdult() check, to pass to the serveAlcohol method. This sort of thing comes up all the time.

interface Person {
    boolean isChild();
    static Person child() { return () -> true; }
    static Person adult() { return () -> false; }
}
 
public static void serveAlcohol(Person person, Predicate<Person> isAdult) {
    if (isAdult.test(person)) System.out.println("Serving alcohol");
}

If we want to reuse Person::isChild we can do the same trick. The p() method converts the method reference to a Predicate for us, and we can then easily negate it.

serveAlcohol(adult(), p(Person::isChild).negate());
 
public static <T> Predicate<T> p(Predicate<T> p) {
    return p;
}

Have you got any other good examples?

Filed under XP.

At work, we’ve always pair-programmed all our production code, so we’re already pretty bought into it being a good idea to have multiple people working on a single problem. I previously wrote about some of the reasons for pairing.

Mob Programming

Recently, having been inspired by a talk by Woody Zuill, we decided to give mob-programming a go, and our experiences so far have been very positive.

Mob programming is having the whole team working on the same problem, at the same time, using a single workstation.

We’re not using it all the time right now. We’ve started doing Mob-Fridays as a way of regularly working together as a group instead of pairing. We’re still pretty new to it – only having done it for a few weeks, but I thought I’d post some of my observations thus far.

Setup

Here’s our setup. We all (4-6 of us) sit round in a semicircle, as we would when having a group discussion. We have a big 124cm HD TV for everyone to see the code on, and a 76cm monitor for the person at the keyboard – positioned perpendicularly to the TV. This allows the driver to see the rest of the team. We also have a large whiteboard behind the team (just out of the photo) which we can scribble design ideas on.

We have been using strict 5 minute rotations for driving. Every 5 minutes the person with the keyboard relinquishes it and another team member takes over. This gives us a rhythm for continuous deployment (we try to deploy to production after 5-10 rotations – i.e. at least once an hour). 5 minute rotations keep it very fast paced and keep everyone engaged.

We’ve also tried including team members with specialities in our mobbing sessions, including having them drive. We’ve had mob sessions with our product manager and UX specialist. I think it could be interesting to include our internal team customers in the future.

Why?

Efficiency

You may be thinking that this can’t possibly be efficient. Surely 5 or 6 people working individually can get more done than working together, constrained by the speed at which one person can type and the team can communicate. I think you might well be right, but the amount of stuff a team can get done (throughput) is not necessarily what you want to optimise. Often the speed at which we can get from where we are now to achieving a business goal is more important (latency). Anything we can do to get there faster is a good thing, even if it’s less efficient in terms of throughput.

Regardless of the efficiency of cranking out code, mobbing provides several efficiencies.

Mobbing eliminates a whole class of meetings – removing synchronisation points that slow down developers working independently. There’s no need for detailed design discussions in advance of starting on implementation, because everyone can contribute to the design while working on it. There’s also no need for traditional standup meetings to catch up on what is going on. When working together as a team everyone knows what everyone has been doing and is going to do.

There is also less time lost due to interruptions. People seem more reluctant to interrupt a group session than an individual or pair – we pause to answer any questions from outside the team and have a break after each person in the team has had a driving session. It’s also less disruptive when someone’s phone rings or someone needs a toilet break. They can just nip out of the mob and let the mob continue. When pairing, work often stops when these small interruptions occur, and a lot of the context is lost.

A combination of the 5 minute cadence and having more people involved also seems to help avoid wasting time doing things that we don’t really need, which helps us move faster.

We’re also able to more rapidly adapt to what we discover in the course of implementation. Our preconceived ideas of how we might build a feature don’t always survive implementation. We often learn along the way that our original plans won’t work. When pairing we often convened team huddles to discuss these issues before continuing. When working as a mob we just press through them unfazed, without any delay waiting for input from others.

Communication

Mobbing seems great for making significant architectural changes to the system. Things that you need everyone on the team to be bought into, and ideally want as many pairs of eyes on to avoid problems. For instance, we have been mobbing on a new design for a system that processes money. It’s a core technology for us that’s important for the whole team to understand, and since it deals with processing money, mistakes could be costly.

Mobbing also completely eliminates one of the problems I’ve observed with pair programming – that purity of design can be lost when you rotate out one of the developers from the pair and swap someone in. When mobbing everyone on the team gets to see designs through to the end.

Another reason for mobbing is that it’s great fun. Doing something together as a team makes us a better team. Mobbing is a teambuilding activity that’s actually achieving what we would be achieving if we were working individually.

Summary

Do try mob programming yourself. It’s great fun, should help you become a better team, and is an effective way to build software.

Filed under Java.

The builder pattern is often used to construct objects with many properties. It makes it easier to read initialisations by having parameters named at the callsite, while helping you only allow the construction of valid objects.

Builder implementations tend to either rely on the constructed object being mutable, and setting fields as you go, or on duplicating all the settable fields within the builder.

Since Java 8, I find myself frequently creating lightweight builders by defining an interface for each initialisation stage.

Let’s suppose we have a simple immutable Person type like

static class Person {
    public final String firstName;
    public final String lastName;
    public final Centimetres height;
 
    private Person(String firstName, String lastName, Centimetres height) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.height = height;
    }
}

I’d like to be able to construct it using a builder, so I can see at a glance which parameter is which.

Person benji = person()
    .firstName("benji")
    .lastName("weber")
    .height(centimetres(182));

All that is needed to support this now is 3 single-method interfaces to define each stage, and a method to create the builder.

The three interfaces are as follows. Each has a single method, so is compatible with a lambda, and each method returns another single method interface. The final interface returns our completed Person type.

interface FirstNameBuilder {
    LastNameBuilder firstName(String firstName);
}
interface LastNameBuilder {
    HeightBuilder lastName(String lastName);
}
interface HeightBuilder {
    Person height(Centimetres height);
}

Now we can create a person() method which creates the builder using lambdas.

public static FirstNameBuilder person() {
    return firstName -> lastName -> height -> new Person(firstName, lastName, height);
}

While it is still quite verbose, this builder definition is barely longer than simply adding getters for each of the fields.

Suppose we wanted to be able to give people’s height in millimetres as well as centimetres. We could simply add a default method to the HeightBuilder interface that does the conversion.

interface HeightBuilder {
    Person height(Centimetres height);
    default Person height(MilliMetres millis) {
        return height(millis.toCentimetres());
    }
}
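
With that in place, both of these compile; the second resolves to the default method, which converts and delegates. (This assumes a millimetres(int) factory method, which the post doesn’t show.)

Person a = person()
    .firstName("benji")
    .lastName("weber")
    .height(centimetres(182));
 
Person b = person()
    .firstName("benji")
    .lastName("weber")
    .height(millimetres(1820)); // converted via the default method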

We can use the same approach to present different construction “paths” without making our interfaces incompatible with lambdas (which is necessary to keep it concise).

Let’s look at a more complex example of a “Burger” type. We wish to allow construction of burgers, but if the purchaser is a vegetarian we would like to restrict the available choices to only vegetarian options.

The simple meat-eater case looks exactly like the above.

Burger lunch = burger()
    .with(beef())
    .and(bacon());

class Burger {
    public final Patty patty;
    public final Topping topping;
 
    private Burger(Patty patty, Topping topping) {
        this.patty = patty;
        this.topping = topping;
    }
 
    public static BurgerBuilder burger() {
        return patty -> topping -> new Burger(patty, topping);
    }
 
    interface BurgerBuilder {
        ToppingBuilder with(Patty patty);
    }
    interface ToppingBuilder {
        Burger and(Topping topping);
    }
}

Now let’s introduce a vegetarian option. It will be a compile failure to put meat into a vegetarian burger.

Burger lunch = burger()
    .vegetarian()
    .with(mushroom())
    .and(cheese());
 
Burger failure = burger()
    .vegetarian()
    .with(beef()) // fails to compile. Beef is not vegetarian.
    .and(cheese());

To support this we add a default method to our BurgerBuilder which returns a new VegetarianBuilder which disallows meat.

interface BurgerBuilder {
    ToppingBuilder with(Patty patty);
    default VegetarianBuilder vegetarian() {
        return patty -> topping -> new Burger(patty, topping);
    }
}
interface VegetarianBuilder {
    VegetarianToppingBuilder with(VegetarianPatty main);
}
interface VegetarianToppingBuilder {
    Burger and(VegetarianTopping topping);
}

After you have expressed your vegetarian preference, the builder will no longer present you with the option of choosing meat.

Now, let’s add the concept of free toppings. After choosing the main component of the burger we can choose to restrict ourselves to free toppings. In this example Tomato is free but Cheese is not. It will be a compile error to add cheese as a free topping. Now the divergent option is not the first in the chain.

Burger lunch = burger()
    .with(beef()).andFree().topping(tomato());
 
Burger failure = burger()
    .with(beef()).andFree().topping(cheese()); // fails to compile. Cheese is not free

We can support this by adding a new default method to our ToppingBuilder, which in turn calls the abstract method, meaning we don’t have to repeat the entire chain of lambdas required to construct the burger again.

interface ToppingBuilder {
    Burger and(Topping topping);
    default FreeToppingBuilder andFree() {
        return topping -> and(topping);
    }
}
interface FreeToppingBuilder {
    Burger topping(FreeTopping topping);
}

Here’s the full code from the burger example, with all the types involved.

class Burger {
    public final Patty patty;
    public final Topping topping;
 
    private Burger(Patty patty, Topping topping) {
        this.patty = patty;
        this.topping = topping;
    }
 
    public static BurgerBuilder burger() {
        return patty -> topping -> new Burger(patty, topping);
    }
 
    interface BurgerBuilder {
        ToppingBuilder with(Patty patty);
        default VegetarianBuilder vegetarian() {
            return patty -> topping -> new Burger(patty, topping);
        }
    }
    interface VegetarianBuilder {
        VegetarianToppingBuilder with(VegetarianPatty main);
    }
    interface VegetarianToppingBuilder {
        Burger and(VegetarianTopping topping);
    }
    interface ToppingBuilder {
        Burger and(Topping topping);
        default FreeToppingBuilder andFree() {
            return topping -> and(topping);
        }
    }
    interface FreeToppingBuilder {
        Burger topping(FreeTopping topping);
    }
 
}
 
interface Patty {}
interface BeefPatty extends Patty {
    public static BeefPatty beef() { return null;}
}
interface VegetarianPatty extends Patty, Vegetarian {}
interface Tofu extends VegetarianPatty {
    public static Tofu tofu() { return null; }
}
interface Mushroom extends VegetarianPatty {
    public static Mushroom mushroom() { return null; }
}
 
interface Topping {}
interface VegetarianTopping extends Vegetarian, Topping {}
interface FreeTopping extends Topping {}
interface Bacon extends Topping {
    public static Bacon bacon() { return null; }
}
interface Tomato extends VegetarianTopping, FreeTopping {
    public static Tomato tomato() { return null; }
}
interface Cheese extends VegetarianTopping {
    public static Cheese cheese() { return null; }
}
 
interface Omnivore extends Vegetarian {}
interface Vegetarian extends Vegan {}
interface Vegan extends DietaryChoice {}
interface DietaryChoice {}

When would(n’t) you use this?

Often a traditional builder just makes more sense.

If you want your builder to be used to supply an arbitrary number of fields in an arbitrary order then this isn’t for you.

This approach restricts the order in which fields are initialised to a specific order. This can be a feature – sometimes it’s useful to ensure that some parameters are supplied first, e.g. to ensure mandatory parameters are supplied without the boilerplate of the typesafe builder pattern. It’s also easier to make the order flexible in the future if you need to than the other way around.

If your builder forms part of a public API then this probably isn’t for you.

Traditional builders are easier to change without breaking existing uses. This approach makes it easy to change uses with refactoring tools, provided you own all the affected code and can make changes to it. To change behaviour without breaking consumers in this approach you would have to restrict yourself to adding default methods rather than modifying existing interfaces.

On the other hand, by being restrictive in what it allows to compile, this does help people using your code to use it in the way you intended.

Where I do find myself using this approach is building lightweight fluent interfaces both to make the code more readable, and to help out my future self by letting the IDE autocomplete required field/code blocks. For instance, when recently implementing some automated performance tests, where we needed a warmup and a rampup period, we used one of these to prevent us from forgetting to include them.

When things are less verbose, you end up using them more often, in places you might not have bothered otherwise.

Filed under Java.

Twice recently we have had “fun” trying to get things using HK2 (Jersey) to play nicely with code built using Guice and Spring. This has renewed my appreciation for code written without DI frameworks.

The problem with (many) DI frameworks.

People like to complain about Spring. It’s an easy target, but often the argument is a lazy “I don’t like XML, it’s verbose, and not fashionable, unlike JSON, or YAML, or the flavour of the month”. This conveniently ignores that it’s possible to do entirely XML-less Spring. With JavaConfig it’s not much different to other frameworks like Guice. (Admittedly, this becomes harder if you try to use other parts of Spring like MVC or AoP)

My issue with many DI frameworks is the complexity they can introduce. It’s often not immediately obvious what instance of an interface is being used at runtime without the aid of a debugger. You need to understand a reasonable amount about how the framework you are using works, rather than just the programming language. Additionally, wiring errors are often only visible at runtime rather than compile time, which means you may not notice the errors until a few minutes after you make them.

Some frameworks also encourage you to become very dependent on them. If you use field injection to have the framework magically make dependencies available for you with reflection, then it becomes difficult to construct things without the aid of the framework – for example in tests, or if you want to stop using that framework.

Even if you use setter or constructor injection, the ease with which the framework can inject a large number of dependencies for you allows you to ignore the complexity introduced by having excessive dependencies. It’s still a pain to construct an object with 20 dependencies without the framework in a test, even with constructor or setter injection. DI frameworks can shield us from the pain that is useful feedback that the design of our code is too complex.

What do I want when doing dependency injection? I have lots of desires, but these are some of the most important to me:

  • Safety – I would like it to be a compile time error to fail to satisfy a dependency
  • Testability – I want to be able to replace dependencies with test doubles where useful for testing purposes
  • Flexibility – I would like to be able to alter the behaviour of my program by re-wiring my object graph without having to change lots of code

It’s also nice to be able to build small lightweight services without needing to add lots of third party dependencies to get anything done. If we want to avoid pulling in a framework, how else could we achieve our desires? There are a few simple techniques we can use which only require pure Java, some of which are much easier in Java 8.

I’ve tried to come up with a simple example that might exist if we were building a monitoring system like Nagios. Imagine we have a class that is responsible for notifying the on call person for your team when something goes wrong in production.

Manual Constructor Injection

class IncidentNotifier {
    final Rota rota; 
    final Pager pager;
 
    IncidentNotifier(Pager pager, Rota rota) {
        this.pager = pager;
        this.rota = rota;
    }
 
    void notifyOf(Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident + " happened");
    }
}

I would expect that the Pager and Rota will have dependencies of their own. What if we want to construct this ourselves? It’s still fairly straightforward, and not much more verbose to do explicitly in Java.

public static void main(String... args) {
    IncidentNotifier notifier = new IncidentNotifier(
        new EmailPager("smtp.example.com"),
        new ConfigFileRota(new File("/etc/my.rota"))
    );
}

The advantage of this over automatically injecting them with a framework and @Inject annotations or an XML configuration is that doing it manually like this allows the compiler to warn us about invalid configuration. To omit one of the constructor arguments is a compile failure. To pass a dependency that does not satisfy the interface is also a compile failure. We find out about the problem without having to run the application.

Let’s increase the complexity slightly. Suppose we want to re-use the ConfigFileRota instance, within several object instances that require access to the Rota. We can simply extract it as a variable and refer to it as many times as we wish.

public static void main(String... args) {
    Rota rota = new ConfigFileRota(new File("/etc/my.rota"));
    Pager pager = new EmailPager("smtp.example.com");
    IncidentNotifier incidentNotifier = new IncidentNotifier(
        pager,
        rota
    );
    OnCallChangeNotifier changeNotifier = new OnCallChangeNotifier(
        pager, 
        rota
    );
}

Now this will of course get very long when we start having a significant amount of code to wire up, but I don’t see this as an argument not to do the wiring manually in code. The wiring of objects is just as much code as the implementation.

There is behaviour emergent from the way in which we wire up object graphs. Behaviour we might wish to test, and have as much as possible checked by the compiler.

Wanting this configuration to be separated from code into configuration files to make it easier to change may be a sign that you are not able to release code often enough. There is little need for configuration outside your deployable artifact if you are practising Continuous Deployment.

Remember, we have the full capabilities of the language to organise the resulting wiring code. We can create classes that wire up conceptually linked objects, or modules.

Method-Reference Providers

Now suppose we want to send the notification time in the page. We might do something like

pager.page(onCall, "Oh noes, " + incident + " happened at " + new DateTime());

Only now it is hard to test, because the time in the message will change each time the tests run. Previously we might have approached this problem by creating a Factory type for time, maybe a Clock type. However, in Java 8 we can just use constructor method references, which significantly reduces the boilerplate.

IncidentNotifier notifier = new IncidentNotifier(
    new EmailPager("smtp.example.com"),
    new ConfigFileRota(new File("/etc/my.rota")),
    DateTime::new
);
 
class IncidentNotifier {
    final Rota rota; 
    final Pager pager;
    final Supplier<DateTime> clock;
 
    IncidentNotifier(Pager pager, Rota rota, Supplier<DateTime> clock) {
        this.pager = pager;
        this.rota = rota;
        this.clock = clock;
    }
 
    void notifyOf(Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident + " happened at " + clock.get());
    }
}

Test Doubles

It’s worth pointing out at this point how easy Java 8 makes it to replace this kind of dependency with a test double. If your collaborators are single-method interfaces then we can cleanly stub out their behaviour in tests without using a mocking framework.

Here’s a test for the above code that asserts that it invokes the page method, and also checks the argument, both in the same test to simplify the example. The only magic is the 2 line static method on the Exception. I have used no dependencies other than JUnit.

@Test(expected=ExpectedInvocation.class)
public void should_notify_me_when_I_am_on_call() {
    DateTime now = new DateTime();
    Person benji = person("benji");
    Rota rota = regardlessOfTeamItIs -> benji;
    Incident incident = incident(team("a team"), "some incident");
 
    Pager pager = (person, message) -> ExpectedInvocation.with(() ->
        assertEquals("Oh noes, some incident happened at " + now, message)
    );
 
    new IncidentNotifier(pager, rota, () -> now).notifyOf(incident);
}
static class ExpectedInvocation extends RuntimeException{
    static void with(Runnable action) {
        action.run();
        throw new ExpectedInvocation();
    }
}

As you can see, the stubbings are quite concise thanks to most collaborators being single method interfaces. This probably isn’t going to remove your need to use Mockito or JMock, but stubbing with lambdas is handy where it works.

Partial Application

You might have noticed that the dependencies we inject in the constructor could equally be passed to our notify method; we could pass them around like this. It can even be a static method in this case.

Incident incident = incident(team("team name"), "incident name");
FunctionalIncidentNotifier.notifyOf(
    new ConfigFileRota(new File("/etc/my.rota")),
    new EmailPager("smtp.example.com"),
    DateTime::new,
    incident
);
 
class FunctionalIncidentNotifier {
    public static void notifyOf(Rota rota, Pager pager, Supplier<DateTime> clock, Incident incident) {
        Person onCall = rota.onCallFor(incident.team());
        pager.page(onCall, "Oh noes, " + incident  + " happened at " + clock.get());
    }
}

If we have to pass around all dependencies to every method call like this, it’s going to make our code difficult to follow. But if we structure the code like this, we can partially apply the function to give us a function with all the dependencies satisfied.

Incident incident = incident(team("team name"), "incident name");
 
Notifier notifier = notifier(Partially.apply(
    FunctionalIncidentNotifier::notifyOf,
    new ConfigFileRota(new File("/etc/my.rota")),
    new EmailPager("smtp.example.com"),
    DateTime::new,
    _
));
 
notifier.notifyOf(incident);

There’s just a couple of helpers necessary to make this work. First, a Notifier interface that can be created from a generic Consumer<T>. Static methods on interfaces come to the rescue here.

interface Notifier {
    void notifyOf(Incident incident);
    static Notifier notifier(Consumer<Incident> notifier) {
        return incident -> notifier.accept(incident);
    }
}

Then we need a way of doing the partial application. There’s no support for this built into Java as far as I am aware, but it’s trivial to implement. We just declare a method that accepts a reference to a consumer with n arguments, and also takes the arguments you wish to apply. I am using an underscore to represent missing values that are still unknown. We could add overloads to allow other parameters to be unknown.

class Partially {
    static <T,U,V,W> Consumer<W> apply(
        QuadConsumer<T,U,V,W> f, 
        T t, 
        U u, 
        V v, 
        MatchesAny _) {
            return w -> f.apply(t,u,v,w);
    }
}
interface QuadConsumer<T,U,V,W> {
    void apply(T t, U u, V v, W w);
}
class MatchesAny {
    public static MatchesAny _;
}
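
As mentioned, we could add overloads to allow other parameters to be unknown. A sketch of one such overload, leaving the first argument unapplied instead of the last (my own illustration):

class Partially {
    // as above, plus:
    static <T,U,V,W> Consumer<T> apply(
        QuadConsumer<T,U,V,W> f,
        MatchesAny _,
        U u,
        V v,
        W w) {
            return t -> f.apply(t,u,v,w);
    }
}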

Mixins

Going back to the object-oriented approach. Suppose we want our Pager to send emails in production but just print messages to the console if we are running it on our workstation. We can create two implementations of Pager, an EmailPager and a ConsolePager, but we want to use an implementation based on what environment we are running in. We can do this by creating an EnvironmentAwarePager which decides which implementation to use at Runtime.

class EnvironmentAwarePager implements Pager {
    final Pager prodPager;
    final Pager devPager;
 
    EnvironmentAwarePager(Pager prodPager, Pager devPager) {
        this.prodPager = prodPager;
        this.devPager = devPager;
    }
 
    public void page(Person onCall, String message) {
        if (isProduction()) prodPager.page(onCall, message);
        else devPager.page(onCall, message);
    }
 
    boolean isProduction() { ... }
}

But what if we want to test the behaviour of this environment aware pager, to be sure that it calls the production pager when in production? To do this we need to extract the responsibility of checking whether we are running in the production environment. We could make a collaborator, but there is another option (which I’ll use for want of a better example) – we can mix in functionality using an interface.

interface EnvironmentAware {
    default boolean isProduction() {
        // We could check for a machine manifest here
        return false;
    }
}

Now our Pager becomes

class EnvironmentAwarePager implements Pager, EnvironmentAware {
    final Pager prodPager;
    final Pager devPager;
 
    EnvironmentAwarePager(Pager prodPager, Pager devPager) {
        this.prodPager = prodPager;
        this.devPager = devPager;
    }
 
    public void page(Person onCall, String message) {
        if (isProduction()) prodPager.page(onCall, message);
        else devPager.page(onCall, message);
    }
}

We can use isProduction without implementing it.

Now let’s write a test that checks that the production pager is called in the production environment. Here we extend EnvironmentAwarePager to override its production-awareness by mixing in the AlwaysInProduction interface. We stub the dev pager to fail the test because we don’t want that to be called, and stub the prod pager to fail the test if not invoked.

@Test(expected = ExpectedInvocation.class)
public void should_use_production_pager_when_in_production() {
    class AlwaysOnProductionPager 
        extends EnvironmentAwarePager 
        implements AlwaysInProduction {
        AlwaysOnProductionPager(Pager prodPager, Pager devPager) {
            super(prodPager, devPager);
        }
    }
 
    Person benji = person("benji");
    Pager prod = (person, message) -> ExpectedInvocation.with(() -> {
        assertEquals(benji, person);
        assertEquals("hello", message);
    });
    Pager dev = (person, message) -> fail("Should have used the prod pager");
 
    new AlwaysOnProductionPager(prod, dev).page(benji, "hello");
 
}
 
interface AlwaysInProduction extends EnvironmentAware {
    default boolean isProduction() { return true; }
}

Using mixins here is a bit contrived, but I struggled to come up with an example that was both not contrived and sufficiently brief to illustrate the point.

Cake Pattern

I mention mixins partly because it leads onto the Cake Pattern.

The cake pattern can make it a bit easier to wire up more complex graphs, when compared to manual constructor injection. While it does also add quite a lot of complexity in itself, we do at least retain a lot of compile time checking. Failing to satisfy a dependency will be a compilation failure.

Here’s what an example application using cake would look like that sends a page about an incident specified with command line args.

public class Example {
    public static void main(String... args) {
        ProductionApp app = () -> asList(args);
        app.main();
    }
}
 
interface ProductionApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        EmailPagerProvider,
        ConfigFileRotaProvider,
        DateTimeProvider {}
 
interface MonitoringApp extends
        Application,
        IncidentNotifierProvider {
    default void main() {
        String teamName = args().get(0);
        String incidentName = args().get(1);
        notifier().notifyOf(incident(team(teamName), incidentName));
    }
}
 
interface Application {
    List<String> args();
}

Here we’re using interfaces to specify the components we wish to use in our application. We have a MonitoringApp interface that specifies the entry point behaviour. It sends a notification using the command line arguments. We also have a ProductionApp interface that specifies which components we want to use in this application.

If we want to replace a component – for example, to print messages to the console instead of sending an email when running on our workstation – it’s just a matter of replacing that component:

interface WorkstationApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        ConsolePagerProvider, // This component is different
        ConfigFileRotaProvider,
        DateTimeProvider  {}

This is checked at compile time. If we didn’t specify a PagerProvider at all we’d get a compile failure in our main method when we try to instantiate the WorkstationApp. Admittedly, it’s not a very informative message if you don’t know what’s going on (WorkstationApp is not a functional interface, multiple non-overriding abstract methods found in com.benjiweber.WorkstationApp).
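
For illustration, here’s a hypothetical BrokenApp (the name is mine, not from the example code) that forgets to mix in any PagerProvider implementation:

interface BrokenApp extends
        MonitoringApp,
        DefaultIncidentNotifierProvider,
        ConfigFileRotaProvider,
        DateTimeProvider {}
 
// BrokenApp app = () -> asList(args);
// ^ does not compile: both args() and pager() are still abstract,
//   so BrokenApp is not a functional interface and cannot be a lambda target.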

For each thing that we want to inject, we declare a provider interface, which can itself rely on other providers, like so:

interface DefaultIncidentNotifierProvider extends 
        IncidentNotifierProvider, 
        PagerProvider, 
        RotaProvider, 
        ClockProvider {
    default IncidentNotifier notifier() { 
        return new IncidentNotifier(pager(), rota(), clock()); 
    }
}
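
The leaf provider interfaces themselves are just single-abstract-method interfaces; presumably something like the following, where the Rota and Clock return types are assumed from the constructor call above:

interface PagerProvider { Pager pager(); }
interface RotaProvider { Rota rota(); }
interface ClockProvider { Clock clock(); }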

PagerProvider has multiple mixin-able implementations:

interface EmailPagerProvider extends PagerProvider {
    default Pager pager() { return new EmailPager("smtp.example.com"); }
}
interface ConsolePagerProvider extends PagerProvider {
    default Pager pager() { 
        return (Person onCall, String message) -> 
            System.out.println("Stub pager says " + onCall + " " + message); 
    }
}

As I mentioned above, this pattern starts to add too much complexity for my liking, however neat it may be. Still, it can be a useful technique to apply sparingly, in parts of your application where manual constructor injection is becoming tedious.

Summary

There are various techniques for doing the kinds of things we often use DI frameworks for, in pure Java alone. It’s worth weighing the hidden costs of a framework against these alternatives.

The code for the examples used in this post is on Github.

Posted by & filed under Java.

This is a follow-up to Pattern matching in Java, where I demonstrated pattern matching on type and structure using Java 8 features. The first thing most people asked was “does it support matching on nested structures?”

The previous approach did not, at least not without creating excessive boilerplate constructors. So here’s another approach that does.

Let’s suppose we have a nested structure representing a customer, illustrated here as a JavaScript object literal for clarity:

{
  firstName: "Benji",
  lastName: "Weber",
  address: {
    firstLine: {
      houseNumber: 123,
      roadName: "Some Street"
    },
    postCode: "AB123CD"
  }
}

What if we want to match customers with my name and pull out my house number, road name, and post code? With pattern matching it becomes straightforward.

First we’ll create Java types to represent it, such that we can build the structure above like this:

Customer customer = customer(
    "Benji", 
    "Weber", 
    address(
        firstLine(123,"Some Street"), 
        "AB123CD"
    )
);

I’ll use the value object pattern I described previously to create these.
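
As a rough sketch, re-using that pattern’s Value helper (and the Case interface explained below, which provides the .match() method), Customer might look something like this; Address and FirstLine follow the same shape:

public interface Customer extends Case<Customer> {
    String firstName();
    String lastName();
    Address address();
 
    static Customer customer(String firstName, String lastName, Address address) {
        abstract class CustomerValue extends Value<Customer> implements Customer {}
        return new CustomerValue() {
            public String firstName() { return firstName; }
            public String lastName() { return lastName; }
            public Address address() { return address; }
        }.using(Customer::firstName, Customer::lastName, Customer::address);
    }
}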

Now we just need a way of building up a structure to match against, one which records the properties we want to extract, for use with the pattern matching we previously implemented.

Here’s what we can do. We use underscores to indicate properties we wish to extract rather than match. All other properties are matched.

// Using the customer instance from above
String address = customer.match()
    .when(a(Customer::customer).matching(
        "Benji",
        "Weber",
        an(Address::address).matching(
            a(FirstLine::firstLine).matching(_,_),
            _
        )
    )).then((houseNo, road, postCode) -> houseNo + " " + road + " " + postCode)
    .otherwise("unknown");
 
assertEquals("123 Some Street AB123CD", address);

So how does it work? Well, we get the .match() method by implementing the Case interface on our value types. This interface has a default method, match(), which returns a match builder we can use to specify our cases.
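
A minimal sketch of how that default method could look; the MatchBuilder name and the unchecked cast here are my assumptions, not necessarily the real implementation:

public interface Case<T> {
    // Every value type implementing Case<T> gets .match() for free,
    // handing itself to a builder that collects the when/then cases.
    @SuppressWarnings("unchecked")
    default MatchBuilder<T> match() {
        return new MatchBuilder<>((T) this);
    }
}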

Last time we implemented overloads of the when(..) method so that we could match on types or instances. Now we can re-use that work and add overloads that take a match reference, e.g.

public <A,B> BiMatchConstructorBuilder<T, A, B> when(BiMatch<T, A, B> matchRef) {
    // Here we know we are matching on missing properties of types A and B,
    // so we can expect a function consuming these properties that accepts an A and a B.
    return new BiMatchConstructorBuilder<T, A, B>() {
        public <R> MatchBuilderR<T, R> then(BiFunction<A, B, R> f) {
            // ...
        }
    };
}

The matchRef can capture method references to the properties we want to extract, and then we can apply these method references to the object we are matching against to check for a match.
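
For example, a two-property match reference might be shaped roughly like this; the class and method names here are illustrative rather than the actual implementation:

import java.util.function.Function;
 
public class BiMatch<T, A, B> {
    private final Function<T, A> first;
    private final Function<T, B> second;
 
    public BiMatch(Function<T, A> first, Function<T, B> second) {
        this.first = first;
        this.second = second;
    }
 
    // Apply the captured accessors to the object being matched,
    // extracting the values consumed by .then((a, b) -> ...)
    public A firstValue(T candidate) { return first.apply(candidate); }
    public B secondValue(T candidate) { return second.apply(candidate); }
}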

Lastly we simply add a couple of static methods, a(constructor) and an(constructor), for building up our matches. These return a builder that accepts either the constructor arguments to match on, or a wildcard underscore to indicate that we want to extract that value rather than match against it.

Here are some more examples to help illustrate the idea.
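
For instance, re-using the customer instance and the same API from above, we can instead match on the address and extract the names:

String name = customer.match()
    .when(a(Customer::customer).matching(
        _,
        _,
        an(Address::address).matching(
            a(FirstLine::firstLine).matching(123, "Some Street"),
            "AB123CD"
        )
    )).then((firstName, lastName) -> firstName + " " + lastName)
    .otherwise("unknown");
 
assertEquals("Benji Weber", name);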

Posted by & filed under Java.

The absence of tuples in Java is often bemoaned. Tuples are fixed-size collections of values which may each have a different type. They might look like (“benji”, 8, “weber”): a 3-tuple of a string, a number, and another string. People often wish to use tuples to return more than one value from a method.

Often this is a smell that we need to create a domain type that is more meaningful than a tuple, but in some cases tuples are actually the best tool for the job.

We can pretty easily create our own tuple types, and now that we have Lambdas we can consume them pretty easily as well. There is a little unnecessary verbosity left over from type signatures, but we can avoid it in most cases.

If we want to return a tuple from a method, then by statically importing a “tuple” method that is overloaded for each tuple arity we can do:

static TriTuple<String, Integer, String> me() {
    return tuple("benji",9001,"weber");
}

Other than the method’s return type, it is fairly concise.

On the consuming side it would be nice to be able to use helpful names for each field in the tuple, rather than the ._1() or .one() accessors that are used in many tuple implementations.

This is easily realised thanks to lambdas. We can simply define a map method on the tuple that accepts a function of the same arity as the tuple, to transform the tuple into something else.

Using the above method now becomes

String name = me()
    .map((firstname, favouriteNo, surname) -> firstname + " " + surname);
 
assertEquals("benji weber", name);

We can even throw checked exceptions on the consuming side if we allow our map method to accept functions that throw exceptions.

String name = me()
    .map((firstname, favouriteNo, surname) -> {
        if (favouriteNo > 9000) throw new NumberTooBigException();
        return firstname;
    });
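
For this to compile, the function type that map accepts must declare a generic throws clause; judging by its use in the implementation below, it presumably looks something like:

@FunctionalInterface
public interface ExceptionalTriFunction<A, B, C, R, E extends Exception> {
    R apply(A a, B b, C c) throws E;
}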

Of course we’d also like identical tuples to be equal to each other, so that the following assertion passes:

assertEquals(tuple("hello","world"), tuple("hello", "world"));

The implementation looks like this, re-using the value-object pattern I described previously:

public interface Tuple {
    //...
    static <A,B,C> TriTuple<A,B,C> tuple(A a, B b, C c) {
        return TriTuple.of(a, b, c);
    }
    //...
}
 
public interface TriTuple<A,B,C> {
    A one();
    B two();
    C three();
    static <A,B,C> TriTuple<A,B,C> of(A a, B b, C c) {
        abstract class TriTupleValue extends Value<TriTuple<A,B,C>> implements TriTuple<A,B,C> {}
        return new TriTupleValue() {
            public A one() { return a; }
            public B two() { return b; }
            public C three() { return c; }
        }.using(TriTuple::one, TriTuple::two, TriTuple::three);
    }
 
    default <R,E extends Exception> R map(ExceptionalTriFunction<A, B, C, R, E> f) throws E {
        return f.apply(one(),two(),three());
    }
 
    default <E extends Exception> void consume(ExceptionalTriConsumer<A,B,C,E> consumer) throws E {
        consumer.accept(one(), two(), three());
    }
}

Browse the full code on Github.