Posted by & filed under XP.

There has been more discussion recently on the concept of a “10x engineer”. 10x engineers are (according to Quora) “the top tier of engineers that are 10x more productive than the average”.

Productivity

I have observed that some people are able to get 10 times more done than me. However, I’d argue that individual productivity is as irrelevant as team efficiency.

Productivity is often defined and thought about in terms of the amount of stuff produced.

“The effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input”

Diseconomies of Scale

The trouble is, software has diseconomies of scale. The more we build, the more expensive it becomes to build and maintain. As software grows, we’ll spend more time and money on:

  • Operational support – keeping it running
  • User support – helping people use the features
  • Developer support – training new people to understand our software
  • Developing new features – As the system grows, so will the complexity and the time to build new features on top of it (even with well-factored code)
  • Understanding dependencies – The complex software and systems upon which we build
  • Building Tools – to scale testing/deployment/software changes
  • Communication – as we try to enable more people to work on it

The more each individual produces, the slower the team around them will operate.

Are we Effective?

Only a small percentage of things I build end up generating enough value to justify their existence – and that’s with a development process that is intended to constantly focus us on the highest value work.

If we build a feature that users are happy with it’s easy to count that as a win. It’s even easier to count it as a win if it makes more money than it cost to build.

Does it look as good when you compare its cost/benefit to some of the other things that the team could have been working on over the same time period? Everything we choose to work on has an opportunity cost: by choosing to work on it we are not able to work on something potentially more valuable.

Applying the 0.1x

The times I feel I’ve made the most difference to our team’s effectiveness are when I find ways to not build things.

  • Let’s not build that feature.
    Is there existing software that could be used instead?
  • Let’s not add this functionality.
    Does the complexity it will introduce really justify its existence?
  • Let’s not build that product yet.
    Can we first do some small things to test the assumption that it will be valuable?
  • Let’s not build/deploy that development tool.
    Can we adjust our process or practices instead to make it unnecessary?
  • Let’s not adopt this new technology.
    Can we achieve the same thing with a technology that the team is already using and familiar with? “The best tool for the job” is a very dangerous phrase.
  • Let’s not keep maintaining this feature.
    What is blocking us from deleting this code?
  • Let’s not automate this.
    Can we find a way to not need to do it at all?

Identifying the Value is Hard

Given the cost of maintaining everything we build, it would literally be better for us to do 10% of the work and sit around doing nothing for the rest of our time, if we could figure out the right 10% to work on.

We could even spend 10x as long on minimising the ongoing cost of maintaining that 10%. Figuring out which things are the most valuable to work on and which are a waste of time is the hard part.

Posted by & filed under XP.

The most common reaction I hear when I tell people about mob programming (or even pair programming) is “How can that possibly be efficient?”, sometimes phrased as “How can you justify that to management?” or “How productive are you?”

I think that efficiency in terms of “How much stuff can we get done in a week” is the wrong thing to be focussing on in teams. It can often be helpful to be less efficient.

“All the brilliant people working at the same time, in the same space, on the same thing, at the same computer.” — Woody Zuill


At Unruly we’ve been Mob Programming regularly over the last year.

At first glance it’s hard to see why it could be worth working this way. Five or more people working on a single task seems inefficient compared to working on five tasks simultaneously. As developers we’re used to thinking about parallelising work so that we can scale out.

Build Less!

If your team builds twice as much stuff as another team, are you more effective?

What if 80% of the software your team builds is never used, and everything another team builds is heavily used?

What if all the features you build are worth less than a single feature the other team has built?

We’re better off slowing down if it means that what we do build is more valuable.

Value Disparity

There’s often a huge disparity between the relative value of different things we can be working on. We can easily get distracted building Feature A that might make us $10,000 this year, when we could be building Feature B which will make us $10,000,000 this year.

It’s often not evident up front which of these will be more valuable. However, if we can order our development to start with testing hypotheses about features A and B, we often learn that one is much less valuable than we thought, or that for some reason it won’t work for us; meanwhile, new opportunities often open up that make the other option much more interesting.

Focus on Goal

When working alone it’s very easy to get sidetracked into working on things you notice along the way that are important but unrelated to the current goal of the team. When working together there are more people to hold one another accountable and bring the focus of the team back to the primary goal, avoiding time consuming diversions.

When working together we also help hold each other accountable for following working agreements like fixing non-deterministic tests immediately, or refactoring a piece of code the next time we’re in the area.

If you’re going to build it, build it right

It’s easy to plan a feature, implement what you planned to do, and have it technically working, but generating no value. Here is a case where “technically correct” is not the best kind of correct.

If we release a feature and it’s not being used, or not making any money, we need to learn, iterate and improve. This may involve ordering the development to prioritise trying things out early, even if we’re not entirely happy with the finished product.

Unstoppable Team

It’s often more interesting how quickly we can achieve a team goal than how much our team can get done in a set time period. In programmer parlance, low latency is more valuable than high throughput.

Therefore it can be worth trading off “efficiency” if it means you get to your goal slightly quicker.

In Extreme Programming circles there’s a concept of ideal time: if everything went exactly according to plan, and you had no interruptions, how long would a task take?

Ideal Days

Working together as a team in a mob is the closest I’ve experienced to real “Ideal Days”.

When working alone, or even when pairing, there are often interruptions. You have to go off to a meeting, so work stops. Somebody asks you a question, and work stops. You get stuck on a distracting problem, so work stops. You take a bathroom break, and work stops.

This tends to lead to individual or pair developer days being less than ideal. Rather, you get a few periods of productivity interspersed with interruptions where you lose your “flow” and train of thought.

This is quite different with a mob of a few people.

Can’t stop the mob

If you need to go off to a meeting, you go off to your meeting. The mob keeps on rolling.

If someone comes over with a question, someone peels off the mob to help them. The mob keeps on rolling.

You encounter a puzzling problem, no-one has any idea how to approach it, someone peels off to go and spike a couple of approaches. The mob keeps on rolling.

If you’re feeling like a break, you can just take one whenever you like. The mob keeps on rolling. In this regard mob programming is actually less tiring than pair programming. There’s no guilt from losing concentration or taking a break. You know the team will continue.

So while a mob requires more people, it lets us achieve a specific goal more quickly than if we were working on individual tasks.

Team Investment

It’s also worth bearing in mind that the value of your team practices can’t be measured purely by the amount of stuff you deliver, or even in the amount of money generated by the features you build.

If your work is investing in the team’s ability to support the software in production in the future, or in their ability to move and learn faster in the future, then that’s adding value, albeit sometimes hard to measure.

So…

Don’t aim to be an efficient team, aim to be an effective team.

Instead of optimising the amount of stuff you deliver, optimise the amount of value you add to your organisation.

Mob-programming and pair-programming are techniques that can help teams be more effective. They may or may not affect productivity, but it doesn’t matter.

Posted by & filed under Java.

Many things can be modelled as finite state machines. Particularly things where you’d naturally use “state” in the name e.g. the current state of an order, or delivery status. We often model these as enums.

enum OrderStatus {
    Pending,
    CheckingOut,
    Purchased,
    Shipped,
    Cancelled,
    Delivered,
    Failed,
    Refunded
}

Enums are great for restricting our order status to only valid states. However, usually there are only certain transitions that are valid. We can’t go from Delivered to Failed. Nor would we go straight from Pending to Delivered. Maybe we can transition from Purchased to either Shipped or Cancelled.

Using enum values we cannot restrict the transitions to only those that we desire. It would be nice to also let the compiler help us out by not letting us choose invalid transitions in our code.

We can, however, achieve this if we use a class hierarchy to represent our states instead, and it can still be fairly concise. There are other reasons for using regular classes: they allow us to store state, and even to capture state from the surrounding context.

Here’s a way we could model the above enum as a class hierarchy with the valid transitions.

interface OrderStatus extends State<OrderStatus> {}
static class Pending     implements OrderStatus, BiTransitionTo<CheckingOut, Cancelled> {}
static class CheckingOut implements OrderStatus, BiTransitionTo<Purchased, Cancelled> {}
static class Purchased   implements OrderStatus, BiTransitionTo<Shipped, Failed> {}
static class Shipped     implements OrderStatus, BiTransitionTo<Refunded, Delivered> {}
static class Delivered   implements OrderStatus, TransitionTo<Refunded> {}
static class Cancelled   implements OrderStatus {}
static class Failed      implements OrderStatus {}
static class Refunded    implements OrderStatus {}

We’ve declared an OrderStatus interface and then created implementations of OrderStatus for each valid state. We’ve then encoded the valid transitions as other interface implementations. There’s a TransitionTo<State>, a BiTransitionTo<State1,State2>, or a TriTransitionTo<State1,State2,State3>, depending on the number of valid transitions from that state. We need differently named interfaces for different numbers of transitions because Java doesn’t support varying the number of generic type parameters on a single interface.

Compile-time checking valid transitions

Now we can create the TransitionTo/BiTransitionTo interfaces, which give us the functionality to transition to a new state (but only if it is valid).

We might imagine an API like this, where we can choose which state to transition to:

new Pending()
    .transitionTo(CheckingOut.class)
    .transitionTo(Purchased.class)
    .transitionTo(Refunded.class) // <-- can we make this line fail to compile?

This turns out to be a little tricky, but not impossible, due to type erasure.

Let’s try to implement the BiTransitionTo interface with the two valid transitions.

public interface BiTransitionTo<T, U> {
    // Won't compile: both overloads erase to transitionTo(Class)
    default T transitionTo(Class<T> type) { ... }
    default U transitionTo(Class<U> type) { ... }
}

Both of these transitionTo methods have the same erasure. So we can’t do it quite like this. However, if we can encourage the consumer of our API to pass a lambda, there is a way to work around this same erasure problem.

So how about this API, where instead of passing class literals we pass constructor references. It looks similarly clean, but constructor references are basically lambdas so we can avoid type erasure.

new Pending()
    .transition(CheckingOut::new)
    .transition(Purchased::new)
    .transition(Refunded::new) // <-- Now we can make this fail to compile

In order to make this work, the trick is to create a new interface type for each valid transition within our BiTransitionTo interface:

public interface BiTransitionTo<T, U> {
    interface OneTransition<T> extends Supplier<T> { }
    default T transition(OneTransition<T> constructor) { ... }
    interface TwoTransition<T> extends Supplier<T> { }
    default U transition(TwoTransition<U> constructor) { ... }
}

Supplier<T> is a functional interface in the java.util.function package that is equivalent to a no-args constructor reference. By creating two interfaces that extend it we can overload the transition() method twice, allowing both methods to be passed a constructor reference without the two methods having the same erasure.

Runtime checking

Sometimes we might not be able to know at compile time what state our state machine is in. Perhaps a Customer has a field of type OrderStatus that we serialize to a database. We would need to be able to try a transition at runtime, and fail in some manner if the transition is not valid.

This is also possible using the TransitionTo<NewState> approach outlined above. Since supertype parameters are available at runtime, we can implement a tryTransition() method that uses reflection to check which transitions are permitted.

First we’ll need a way of finding the valid transition types. We’ll add it to our State base interface.

default List<Class<?>> validTransitionTypes() {
    return asList(getClass().getGenericInterfaces())
        .stream()
        .filter(type -> type instanceof ParameterizedType)
        .map(type -> (ParameterizedType) type)
        .filter(TransitionTo::isTransition) 
        .flatMap(type -> asList(type.getActualTypeArguments()).stream())
        .map(type -> (Class<?>) type)
        .collect(toList());
}

Note the isTransition filter. Since we have multiple transition interfaces (TransitionTo<T>, BiTransitionTo<T,U>, TriTransitionTo<T,U,V> etc.) we need a way of marking them all as specifying transitions. I’ve used an annotation:

@Retention(RUNTIME)
@Target(ElementType.TYPE)
public @interface Transition {
 
}
static boolean isTransition(ParameterizedType type) {
     Class<?> cls = (Class<?>)type.getRawType();
     return cls.getAnnotationsByType(Transition.class).length > 0;
}
 
@Transition
public interface TriTransitionTo...

Once we have validTransitionTypes() we can find which transitions are valid at runtime.

static class Pending implements OrderStatus, BiTransitionTo<CheckingOut, Cancelled> {}
@Test
public void finding_valid_transitions_at_runtime() {
    Pending pending = new Pending();
    assertEquals(
        asList(CheckingOut.class, Cancelled.class),
        pending.validTransitionTypes()
    );
}

Now that we have the valid types, tryTransition() needs to check whether the requested transition is to one of those types.

This is a little tricky, but since we’re passing a lambda we make it a lambda-type-token and use reflection to find the type parameter of the lambda.

Our implementation then looks something like

 
interface NextState<T> extends Supplier<T>, MethodFinder {
    default Class<T> type() {
        return (Class<T>) getContainingClass();
    }
}
default <T> T tryTransition(NextState<T> desired) {
    if (validTransitionTypes().contains(desired.type())) {
        return desired.get();
    }
 
    throw new IllegalStateTransitionException();
}

We can make it a bit nicer by allowing the caller to specify the exception to throw on error, like an Optional’s orElseThrow. We can also allow the caller to ignore failed transitions.
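The tests below call unchecked() and ignoreIfInvalid() on the result, so rather than throwing directly as in the simplified tryTransition above, we can return a small wrapper and let the caller decide. Here’s a sketch of what such a wrapper might look like (the names and details may differ from the full code on github):

// Sketch: wrap the outcome of an attempted transition so the caller can decide
// how to handle an invalid one.
class AttemptedTransition<T, S> {
    private final T result;   // the new state, or null if the transition was invalid
    private final S original; // the state we attempted to transition from

    AttemptedTransition(T result, S original) {
        this.result = result;
        this.original = original;
    }

    // Fail with an unchecked exception if the transition was not permitted.
    public T unchecked() {
        if (result == null) throw new IllegalStateTransitionException();
        return result;
    }

    // Let the caller choose the exception, like Optional's orElseThrow.
    public <E extends Exception> T orElseThrow(Supplier<E> exceptionSupplier) throws E {
        if (result == null) throw exceptionSupplier.get();
        return result;
    }

    // Ignore an invalid transition and remain in the original state.
    public Object ignoreIfInvalid() {
        return result != null ? result : original;
    }
}

tryTransition then becomes a lookup against validTransitionTypes() that constructs this wrapper instead of throwing.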

@Test
public void runtime_checked_transition() {
    OrderStatus state = new Pending();
    assertTrue(state instanceof Pending);
    state = state
        .tryTransition(CheckingOut::new)
        .unchecked();
    assertTrue(state instanceof CheckingOut);
}

Since we’ve transitioned into a known state (or thrown an exception) with tryTransition we could then chain compile-time checked transitions on the end.

@Test
public void runtime_checked_transition_followed_by_compile_time_checked() {
    OrderStatus state = new Pending();
    assertTrue(state instanceof Pending);
    state = state
        .tryTransition(CheckingOut::new)
        .unchecked()
        .transition(Purchased::new); // This will be permitted if the tryTransition succeeds.
    assertTrue(state instanceof Purchased);
}

We can even let people ignore transition failures if they wish, just by catching the exception and returning the original value.

@Test
public void runtime_checked_transition_ignoring_failure() {
    OrderStatus state = new Pending();
    assertTrue(state instanceof Pending);
    state = state
        .tryTransition(Refunded::new)
        .ignoreIfInvalid();
    assertFalse(state instanceof Refunded);
    assertTrue(state instanceof Pending);
}

Adding Behaviour

Since our states are classes, we can add behaviour to them.

For instance we could add a notifyProgress() method to our OrderStatus, with different implementations in each state.

interface OrderStatus extends State<OrderStatus> {
    default void notifyProgress(Customer customer, EmailSender sender) {}
}
static class Purchased implements OrderStatus, BiTransitionTo<Shipped, Failed> {
    public void notifyProgress(Customer customer, EmailSender emailSender) {
        emailSender.sendEmail("fulfillment@mycompany.com", "Customer order pending");
        emailSender.sendEmail(customer.email(), "Your order is on its way");
    }
}
...
OrderStatus status = new Pending();
status.notifyProgress(customer, sender); // Does nothing
status = status
    .tryTransition(CheckingOut::new)
    .unchecked()
    .transition(Purchased::new);
status.notifyProgress(customer, sender); // sends emails

Then we can call notifyProgress on any OrderStatus instance and it will notify differently depending on which implementation is active.

Internal Transitions

One of the ways to make the most use of the typechecked transitions is to perform the transitions internally within the state, e.g. in a state machine for the regex “A+B” the A state can transition either

  • Back to A
  • To B
  • To a match failure state

If we do this we can make them typechecked even though we don’t know in advance what string we’re matching.

static class A implements APlusB, TriTransitionTo<A,B,NoMatch> {
    public APlusB match(String s) {
        if (s.length() < 1) return transition(NoMatch::new);
        if (s.charAt(0) == 'A') return transition(A::new).match(s.substring(1));
        if (s.charAt(0) == 'B') return transition(B::new).match(s.substring(1));
        return transition(NoMatch::new);
    }
}

Full example here

Capturing State

If we use non-static classes we could also capture state from the enclosing class. Supposing these OrderStatuses are contained within an Order class that already has an EmailSender available, we’d no longer need to pass in the emailSender and the customer to the notifyProgress() method.

class Order {
    EmailSender emailSender;
    Customer customer;
    class Purchased implements OrderStatus, BiTransitionTo<Shipped, Failed> {
        public void notifyProgress() {
            emailSender.sendEmail("fulfillment@mycompany.com", "Customer order pending");
            emailSender.sendEmail(customer.email(), "Your order is on its way");
        }
    }
}

Guards

Another feature we might want is the ability to execute some code before or after transitioning into a new state. This is something we can add to our base State interface. Let’s add two methods, beforeTransition() and afterTransition():

interface State<T> {
    default void afterTransition(T from) {}
    default void beforeTransition(T to) {}
}

We can then update our transition implementation to invoke these guard methods before and after a transition occurs.
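That updated implementation isn’t shown here; conceptually (a sketch with the generic bounds simplified, assuming the transition interfaces themselves extend State) each transition method just wraps the construction of the next state with the two hooks:

// Sketch: invoking the guard hooks around a transition.
// Assumes BiTransitionTo<T, U> itself extends State, with T and U bounded by State.
default T transition(OneTransition<T> constructor) {
    T next = constructor.get();     // construct the state we're transitioning to
    beforeTransition(next);         // hook on the state we're leaving (this)
    next.afterTransition(this);     // hook on the state we're entering
    return next;
}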

We could use this to log all transitions into the Failed state.

class Failed implements OrderStatus {
    @Override
    public void afterTransition(OrderStatus from) {
        failureLog.warning("Oh bother! failed from " + from.getClass().getSimpleName());
    }
}

We could also combine state capturing and guard methods to build a stateful-state machine that updates its state on transition instead of just returning the new state. Here’s an example where we use a guard method to mutate the state of lightSwitch after each transition.

class LightExample {
    Switch lightSwitch = new Off();
 
    public class Switch implements State<Switch> {
        @Override
        public void afterTransition(Switch from) {
            LightExample.this.lightSwitch = Switch.this;
        }
    }
    public class On extends Switch implements TransitionTo<Off> {}
    public class Off extends Switch implements TransitionTo<On> {}
 
    @Test
    public void stateful_switch() {
        assertTrue(lightSwitch instanceof Off);
        lightSwitch.tryTransition(On::new).ignoreIfInvalid();
        assertTrue(lightSwitch instanceof On);
        lightSwitch.tryTransition(Off::new).ignoreIfInvalid();
        assertTrue(lightSwitch instanceof Off);
    }
}

Show me the code

The code is on github if you’d like to play with it/see full executable examples

Posted by & filed under Java.

Update: this technique no longer works, since the lambda parameter name reflection it relies on was removed in later versions of Java. Here is another approach. Another use of lambda parameter reflection could be to write html inline in Java. It allows us to create builders like this, in Java, where we’d previously have to use a language like Kotlin and a library like Kara.

String doc =
    html(
        head(
            meta(charset -> "utf-8"),
            link(rel->stylesheet, type->css, href->"/my.css"),
            script(type->javascript, src -> "/some.js")
        ),
        body(
            h1("Hello World", style->"font-size:200%;"),
            article(
                p("Here is an interesting paragraph"),
                p(
                    "And another",
                    small("small")
                ),
                ul(
                    li("An"),
                    li("unordered"),
                    li("list")
                )
            )
        )
    ).asString();

Which generates html like

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN">
 
<html>
  <head>
    <meta name="generator" content=
    "HTML Tidy for Java (vers. 2009-12-01), see jtidy.sourceforge.net">
<meta charset="utf-8"><script type="text/javascript" src=
"/some.js">
</script>
 
    <title></title>
  </head>
 
  <body>
    <h1>Hello World</h1>
 
    <p>Here is an interesting paragraph</p>
 
    <p>And another<small>small</small></p>
 
    <ul>
      <li>An</li>
 
      <li>unordered</li>
 
      <li>list</li>
    </ul>
  </body>
</html>

Code Generation

Why would you do this? Well we could do code generation. e.g. we can programmatically generate paragraphs.

body(
    asList("one","two","three")
        .stream()
        .map(number -> "Paragraph " + number)
        .map(content -> p(content))
)

Help from the Type System

We can also use the Java type system to help us write valid code.

It will be a compile time error to specify an invalid attribute for link rel.

It’s a compile time error to omit a mandatory tag.

It’s also a compile time error to have a body tag inside a p tag, because body is not phrasing content.

We can also ensure that image sizes are in pixels.
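Hypothetical illustrations of the kinds of calls the compiler rejects (the method and type names here follow the sketches later in this post; the real signatures are in the repository):

// link rel only accepts known values like stylesheet, so an arbitrary string is rejected:
// link(rel -> "not-a-real-rel", type -> css, href -> "/my.css")   // does not compile

// html(Head, Body) requires both arguments, so omitting the body is rejected:
// html(head(meta(charset -> "utf-8")))                            // does not compile

// body is not phrasing content, so it cannot appear inside a p tag:
// p(body(...))                                                    // does not compile

// Img dimensions are Attribute<PixelMeasurement>, so plain strings are rejected:
// img(src -> "cat.jpg", width -> "100%", height -> "80")          // does not compile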

Safety

We can also help reduce injection attacks when inserting content from users into our markup, by having the DSL html-encoding any content passed in.

e.g.

assertEquals(
    "<p>&lt;script src=&quot;attack.js&quot;&gt;&lt;/script&gt;</p>", 
    p("<script src=\"attack.js\"></script>").asString()
);

How does it work?

See this previous blogpost that shows how to get lambda parameter names with reflection. This allows us to specify the key value pairs for html attributes quite cleanly.

I’ve created an Attribute type that converts a lambda to a html attribute.

public interface Attribute<T> extends NamedValue<T> {
    default String asString() {
        return name() + "=\"" + value()+"\"";
    }
}

For the tags themselves we declare an interface per tag, with a hierarchy to allow certain tags in certain contexts. For example Small is PhrasingContent and can be inside a P tag.

public interface Small extends PhrasingContent {
    default Small small(String content) {
        return () -> tag("small", content);
    }
}

To make it easy to have all the tag names available in the context without having to static import lots of things, we can create a “mixin” interface that combines all the tags.

public interface HtmlDsl extends
        Html,
        Head,
        Body,
        Link,
        Meta,
        P,
        Script,
        H1,
        Li,
        Ul,
        Article,
        Small,
        Img
        ...

Then where we want to write html we just make our class implement HtmlDsl (or we could statically import the methods instead).
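For example, a hypothetical page class using the mixin might look like:

// Hypothetical example: implementing HtmlDsl brings all the tag methods into scope.
public class WelcomePage implements HtmlDsl {
    public String render() {
        return html(
            head(
                meta(charset -> "utf-8")
            ),
            body(
                h1("Welcome", style -> "font-size:150%;")
            )
        ).asString();
    }
}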

We can place restrictions on which tags are valid using overloaded methods for the tag names. e.g. HTML

public interface Html extends NoAttributes {
    default Html html(Head head, Body body) { 
    ...

and restrict the types of attributes using enums or other wrapper types. Here the Img tag can only have measurements in pixels:

public interface Img extends NoChildren {
    default Img img(Attribute<String> src, Attribute<PixelMeasurement> dim1, Attribute<PixelMeasurement> dim2) {
    ...

All the code is available on github to play with. Have a look at this test for executable examples. n.b. it’s just a proof of concept at this point. Only sufficient code exists to illustrate the examples in this blog post.

What other creative uses can you find for parameter reflection?

Posted by & filed under Java.

Java 8 introduced a compiler flag -parameters, which makes method parameter names available at runtime with reflection. Up to now, this has not worked with lambda parameter names. However, Java 8u60 now has a fix for this bug back-ported which makes it possible.

Some uses that spring to mind (and work as of a recent 8u60-ea build) are things like hash literals

Map<String, String> hash = hash(
    hello -> "world",
    bob   -> bob,
    bill  -> "was here"
);
 
assertEquals("world",    hash.get("hello"));
assertEquals("bob",      hash.get("bob"));
assertEquals("was here", hash.get("bill"));

Or just another tool for creating DSLs in Java itself. For example, look how close to puppet’s DSL we can get now.

static class Apache extends PuppetClass {{
    file(
        name -> "/etc/httpd/httpd.conf",
        source -> template("httpd.tpl"),
        owner -> root,
        group -> root
    );
 
    cron (
        name -> "logrotate",
        command -> "/usr/sbin/logrotate",
        user -> root,
        hour -> "2",
        minute -> "*/10"
    );
}}

Doing

System.out.println(new Apache().toString());

Would print

class Apache {
 
	file{
		name	=> "/etc/httpd/httpd.conf",
		source	=> template(httpd.tpl),
		owner	=> root,
		group	=> root,
	}
	cron{
		name	=> "logrotate",
		command	=> "/usr/sbin/logrotate",
		user	=> root,
		hour	=> "2",
		minute	=> "*/10",
	}
 
}

The code examples for the hash literal example and the puppet example are on Github.

They work by making use of a functional interface that extends Serializable. This makes it possible to get access to a SerializedLambda instance, which then gives us access to the synthetic method generated for the lambda. We can then use the standard reflection API to get the parameter names.

Here’s a utility MethodFinder interface that we can extend our functional interfaces from that makes it easier.

public interface MethodFinder extends Serializable {
    default SerializedLambda serialized() {
        try {
            Method replaceMethod = getClass().getDeclaredMethod("writeReplace");
            replaceMethod.setAccessible(true);
            return (SerializedLambda) replaceMethod.invoke(this);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
 
    default Class<?> getContainingClass() {
        try {
            String className = serialized().getImplClass().replaceAll("/", ".");
            return Class.forName(className);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
 
    default Method method() {
        SerializedLambda lambda = serialized();
        Class<?> containingClass = getContainingClass();
        return asList(containingClass.getDeclaredMethods())
                .stream()
                .filter(method -> Objects.equals(method.getName(), lambda.getImplMethodName()))
                .findFirst()
                .orElseThrow(UnableToGuessMethodException::new);
    }
 
    default Parameter parameter(int n) {
        return method().getParameters()[n];
    }
 
    class UnableToGuessMethodException extends RuntimeException {}
}

Given the above, we can create a NamedValue type that allows a lambda to represent a mapping from a String to a value.

public interface NamedValue<T> extends MethodFinder, Function<String, T> {
    default String name() {
        checkParametersEnabled();
        return parameter(0).getName();
    }
    default void checkParametersEnabled() {
        if (Objects.equals("arg0", parameter(0).getName())) {
            throw new IllegalStateException("You need to compile with javac -parameters for parameter reflection to work; You also need java 8u60 or newer to use it with lambdas");
        }
    }
 
    default T value() {
        return apply(name());
    }
}

Then we can simply ask our lambda for the name and value.

NamedValue<Integer> pair = five -> 5;
assertEquals("five", pair.name());
assertEquals(5, pair.value());
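The hash(...) helper from the first example isn’t shown above; given NamedValue it’s just a small loop over the lambdas (a sketch, the version on github may differ slightly):

// Sketch: build a Map from "named value" lambdas using their parameter names.
// LinkedHashMap preserves the order the entries were written in.
@SafeVarargs
static Map<String, String> hash(NamedValue<String>... entries) {
    Map<String, String> map = new LinkedHashMap<>();
    for (NamedValue<String> entry : entries) {
        map.put(entry.name(), entry.value());
    }
    return map;
}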

See also HTML in Java for another example use of this.

I imagine there are more uses too, any ideas?

Posted by & filed under Java.

Java allows casting to an intersection of types, e.g. (Number & Comparable)5. When combined with default methods on interfaces, it provides a way to combine behaviour from multiple types into a single type, without a named class or interface to combine them.

Let’s say we have two interfaces that provide behaviour for Quack and Waddle.

interface Quacks {
    default void quack() {
        System.out.println("Quack");
    }
}
interface Waddles {
    default void waddle() {
        System.out.println("Waddle");
    }
}

If we wanted something that did both we’d normally declare a Duck type that combines them, something like

interface Duck extends Quacks, Waddles {}

However, casting to an intersection of types we can do something like

with((Anon & Quacks & Waddles)i->i, ducklike -> {
    ducklike.quack();
    ducklike.waddle();
});

What’s going on here? Anon is a functional interface compatible with the identity lambda, so we can safely cast the identity lambda i->i to Anon.

interface Anon {
    Object f(Object o);
}

Since Quacks and Waddles are both interfaces with no abstract methods, we can also cast to those and there’s still only a single abstract method, which is compatible with our lambda expression. So the cast to (Anon & Quacks & Waddles) creates an anonymous type that can both quack() and waddle().

The with() method is just a helper that also accepts a consumer of our anonymous type and makes it possible to use it.

static <T extends Anon> void with(T t, Consumer<T> consumer) {
    consumer.accept(t);
}

This also works when calling a method that accepts an intersection type. We might have a method that accepts anything that quacks and waddles.

public static <Ducklike extends Quacks & Waddles> void doDucklikeThings(Ducklike ducklike) {
    ducklike.quack();
    ducklike.waddle();
}

We can now invoke the above method with an intersection cast

doDucklikeThings((Anon & Quacks & Waddles)i->i);

Source code

You can find complete examples on github

Enhancing existing objects

Some have asked whether one can use this to add functionality to existing objects. This trick only works with lambdas, but we can make use of a delegating lambda, for common types like List that we might want to enhance.

Let’s say we are fed up of having to call list.stream().map(f).collect(toList()) and wish List had a map() method on it directly.

We could do the following

List<String> stringList = asList("alpha","bravo");
with((ForwardingList<String> & Mappable<String>)() -> stringList, list -> {
    List<String> strings = list.map(String::toUpperCase);
    strings.forEach(System.out::println);
});

Where Mappable declares our new method

interface Mappable<T> extends DelegatesTo<List<T>> {
    default <R> List<R> map(Function<T,R> mapper) {
        return delegate().stream().map(mapper).collect(Collectors.toList());
    }
}

And DelegatesTo is a common ancestor with our ForwardingList

interface DelegatesTo<T> {
    T delegate();
}
interface ForwardingList<T> extends DelegatesTo<List<T>>, List<T> {
    default int size() {
        return delegate().size();
    }
    //... and so on
}

Here’s a full example

Posted by & filed under Java.

We’re used to type erasure ruining our day in Java. Want to create a new T() in a generic method? Normally not possible. We can pass a Class<T> around, but that doesn’t work for generic types. We can’t write List<String>.class.

One exception to this was using super type tokens. We can get the generic type of a super class at runtime, so if we can force the developer to subclass our type, then we’re able to find out our own generic types.

It let us do things like

List<String> l1 = new TypeReference<ArrayList<String>>() {}.newInstance();
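For reference, a classic super type token works by reading the generic supertype of the anonymous subclass created at the call site. A minimal sketch (this is the pre-lambda, abstract-class style, distinct from the lambda-based TypeReference introduced later in this post):

// Sketch of a classic super type token: the anonymous subclass created at the
// call site captures its generic supertype, which we can read back with reflection.
abstract class TypeReference<T> {
    Type type() {
        ParameterizedType superType = (ParameterizedType) getClass().getGenericSuperclass();
        return superType.getActualTypeArguments()[0];
    }

    @SuppressWarnings("unchecked")
    T newInstance() {
        try {
            Type t = type();
            Class<?> rawType = t instanceof ParameterizedType
                ? (Class<?>) ((ParameterizedType) t).getRawType()
                : (Class<?>) t;
            return (T) rawType.getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}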

This was really convenient prior to Java 8, because without lambdas we had to use anonymous inner classes to represent functions, and they could double up as type references.

Unfortunately, the super type token does not work with Lambdas, because Lambdas are not just sugar for anonymous inner classes. They are implemented differently.

However, there’s another trick we can use to get the generic type. It’s far more hacky implementation-wise, so probably not useful in a real scenario, but I think it’s neat nonetheless.

Here’s an example of what we can do: a method that takes a TypeReference<T> and creates an instance of that type.

public static <T> T create(TypeReference<T> type) {
    return type.newInstance();
}

So far just the same as the supertype tokens approach. However, to use it we just need to pass an identity lambda.

This prints hello world

ArrayList<String> list = create(i->i);
list.add("hello");
list.add("world");
list.forEach(System.out::println);

This prints hello=1 world=2

LinkedHashMap<String, Integer> map = create(i->i);
map.put("hello", 1);
map.put("world", 2);
map.entrySet().forEach(System.out::println);

We could also use it as a variable. This prints String

TypeReference<String> ref = i->i;
System.out.println(ref.type().getSimpleName());

Unfortunately, it won’t work for the main motivation for supertype tokens – we can’t use this TypeReference as a key in a map because it will be a different instance each time.

First Attempt – Casting

For my first attempt I noticed that if we try casting something, and the cast is invalid we’ll get a ClassCastException at runtime that will tell us exactly what the type actually is. This does work with lambdas, since they’re translated into normal methods.

Underneath, the above Java snippet with a TypeReference<String> has been translated into something like

private static java.lang.String lambda$main$0(java.lang.String input) {
    return input;
}

As you can see, calling this with a type other than String is going to generate a ClassCastException.

So we could invoke the lambda with a type other than it is expecting and pull the type name from the resulting exception.
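A rough sketch of that exception-based idea (not what I ended up using):

// Sketch: invoke the lambda with a deliberately wrong argument type, then pull
// the expected type's name out of the ClassCastException message.
@SuppressWarnings({"rawtypes", "unchecked"})
static String guessTypeName(Function identityLambda) {
    try {
        identityLambda.apply(new Object());  // wrong type on purpose
        return Object.class.getName();       // no exception: it really does accept Object
    } catch (ClassCastException e) {
        // e.g. "java.lang.Object cannot be cast to java.lang.String" (wording varies by JDK)
        Matcher matcher = Pattern.compile("cannot be cast to (?:class )?([\\w.$]+)")
                                 .matcher(e.getMessage());
        return matcher.find() ? matcher.group(1) : "unknown";
    }
}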

This worked, but it suggested that a better way is possible: since we’re invoking a real method, we should be able to interrogate it with reflection.

Better Attempt – Reflection

If we force our lambda to be Serializable, by having our TypeReference interface implement Serializable, we can get access to a SerializedLambda instance. This gives us access to the containing class and the lambda name, and then we’re able to use the normal reflection API to interrogate the types.

Here’s a MethodFinder interface that we can extend our TypeReference from, which gives us access to the parameter types.
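Building the lambda-friendly TypeReference on top of MethodFinder might look roughly like this (assuming its functional method is an identity-style Function<T, T>; the version on github may differ):

// Sketch: a lambda-friendly TypeReference that reads its own generic type
// from the parameter of the synthetic method behind the lambda.
public interface TypeReference<T> extends Function<T, T>, MethodFinder {
    @SuppressWarnings("unchecked")
    default Class<T> type() {
        return (Class<T>) parameter(0).getType();   // parameter(0) comes from MethodFinder
    }

    default T newInstance() {
        try {
            return type().getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}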

Parameter Objects

Let’s consider a more concrete example of why lambdas that are aware of their types can be useful. One application is parameter objects. Extract parameter-object is a common refactoring.

It lets us go from a method with too many parameters, to a method that takes a parameter object.

e.g.

List<Customer> customers = listCustomers(dateFrom, includeHidden, companyName, haveOrders);

becomes

List<Customer> customers = listCustomers(customerQuerySpec);

Unfortunately, the way this is commonly implemented simply moves the problem one level up to the constructor of the parameter object.

e.g.

CustomerQuerySpecification customerQuerySpec = new CustomerQuerySpecification(dateFrom, includeHidden, companyName, haveOrders);
List<Customer> customers = listCustomers(customerQuerySpec);

We still have just as many parameters, and it’s still just as hard to follow. Furthermore, we now have to import the CustomerQuerySpecification type. The CustomerQuerySpecification type that your IDE might generate for you is also quite big. So this isn’t ideal.

At this point we might reach for the builder pattern, or a variation thereof, to help us name our parameters. However, there are alternatives.

If we were using JavaScript we might pass an object literal in this scenario, to allow ourselves to have default parameter values and named parameters.

listCustomers({
    includeHidden: true,
    companyName: "A Company"
});

We can achieve something similar in Java using lambdas. (Or we could pass an anonymous inner class and use double-brace initialisation to override values)
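For comparison, the double-brace variant would look something like this, assuming a hypothetical overload of listCustomers that accepts the options object (defined just below) directly:

// Hypothetical double-brace alternative: an anonymous subclass whose instance
// initialiser overrides the public default fields.
listCustomers(new CustomerQueryOptions() {{
    includeHidden = true;
    companyName = "A Company";
}});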

First of all we’ll create a Parameter Object to store our parameters. It can also have default values. Instead of using getters/setters and a constructor I’m going to deliberately use public fields.

public static class CustomerQueryOptions {
    public Date from = null;
    public boolean includeHidden = true;
    public String companyName = null;
    public boolean haveOrders = true;
}

Now we want a way of overriding these default values in a given call to our method. One way of doing this is, instead of accepting the CustomerQueryOptions directly, to accept a function that mutates the CustomerQueryOptions. If we do this then we can easily specify our overrides at the call site.

listCustomers(config -> {
    config.includeHidden = true;
    config.companyName = "A Company";
});

You might notice that this lambda looks a lot like a Consumer<CustomerQueryOptions> – it accepts a config and returns nothing.

We could just use a Consumer as is, but we can make life easier for ourselves with a little utility method that just gives us the config back and applies the function to it.

Let’s make a Parameters interface that extends Consumer. We’ll add a default method to it that returns our config. It instantiates the config for us, applies the consumer function to it in order to override any default values, and then returns the instantiated config.

First we’ll need a way of creating an instance of the CustomerQueryOptions type ourselves; this is where our Newable<T> interface comes in. We define a NewableConsumer<T>:

interface NewableConsumer<T> extends Consumer<T>, Newable<T> {
    default Consumer<T> consumer() {
        return this;
    }
}
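The Newable<T> interface itself isn’t shown here; assuming it builds on the same MethodFinder reflection as TypeReference, a sketch might be:

// Sketch: Newable reads the lambda's parameter type via MethodFinder and
// instantiates it with its no-args constructor.
interface Newable<T> extends MethodFinder {
    @SuppressWarnings("unchecked")
    default T newInstance() {
        try {
            Class<T> type = (Class<T>) parameter(0).getType();
            return type.getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}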

And now we define our Parameters interface extending NewableConsumer

public interface Parameters<T> extends NewableConsumer<T> {
    default T get() {
        T t = newInstance(); // provided by Newable<T>
        accept(t); // apply our config
        return t; // return the config ready to use
    }
}
public static class CustomerQueryOptions {
    public Date from = null;
    public boolean includeHidden = true;
    public String companyName = null;
    public boolean haveOrders = true;
}
public List<Customer> listCustomers(Parameters<CustomerQueryOptions> spec) {
 
    System.out.println(spec.get().haveOrders);
 
    // ...
}

This would even work with generic types.

The following will print hello world.

foo(list -> {
    list.add("Hello");
    list.add("World");
    // list.add(5); this would be a compile failure
});
 
public static void foo(Parameters<ArrayList<String>> params) {
    params.get().forEach(System.out::println);
}

Summary

We can use a hack to make lambdas aware of their generic type. It’s a shame that it’s rather too terrible to use for real, because it would be really useful for some of the reasons outlined above.

Also, unlike super type tokens, we cannot use it as a key in a map, because we’ll get a different lambda instance each time.

Does anyone have an alternative approach?

This post was inspired by Duncan‘s use of this pattern in Expec8ions

The code from this post is available on github.

Posted by & filed under ContinuousDelivery, XP.

There was a recent discussion on the Extreme Programming mailing list kicked off by Ron Jeffries saying he wants his XP back.

The implication being that Extreme Programming is no longer practised, and that most “Agile” organisations are actually practising Flaccid Scrum – some agile process but little of the technical practices from Extreme Programming.

Update: Ron clarifies in the comments that we agree that extreme programming is still practised, but it would be good if it were practised by more teams.

I disagree with this premise. Extreme Programming is alive and well, at least here in London. We have XProlo, eXtreme Tuesday Club, XPDay and many other communities dedicated to XP practices under other names like Continuous Delivery and Software Craftsmanship. There are enough organisations practising Extreme Programming for us to organise regular developer exchanges to cross-pollinate ideas. Extreme Programming skills such as test-driven development and continuous integration are highly in-demand skills in job descriptions, even if there is much confusion about what these things actually entail.

When I say that Extreme Programming is alive and well, I do not mean we are working in exactly the same way as described in Kent Beck’s Extreme Programming Explained book. Rather, we still have the same values, and have continued to evolve our technical and team practices. Kent Beck says

“my goal in laying out the project style was to take everything I know to be valuable about software engineering and turn the dials to 10.”

Well now we have turned the dials up to eleven. What does modern Extreme Programming look like?

Turning the dials up to 11

Here are some of the ways we are now more extreme than outlined in Extreme Programming explained.

Pair Programming becomes Mob Programming

Update: Apparently XP Teams are so aligned that Rachel has written a similar blog post, covering this in more detail.

XP Explained says “Write all production programs with two people sitting at one machine”. We’ve turned this to eleven by choosing how many people are appropriate for a task. We treat a pair as a minimum for production code, but often choose to work with the whole team around a single workstation.

Mobbing is great when the whole team needs to know how something will work, when you need to brainstorm and clarify ideas and refinements as you build. It also reduces the impact of interruptions as team-members can peel in and out of the mob as they like with minimal disruption, while a pair might be completely derailed by an interruption.

When pair programming it’s encouraged to rotate partners regularly to ensure knowledge gets shared around the team and to keep things fresh. Mobbing obviates the need to rotate for knowledge sharing, and takes away the problem of fragmented knowledge that is sometimes a result of pair rotation.

Continuous Integration becomes Continuous Deployment

In Extreme Programming explained Kent Beck explains that “XP shortens the release cycle”, but still talks about planning “releases once a Quarter”. It suggests we should have a ten-minute build, integrate with the rest of the team after no more than a couple of hours, and do Daily Deployments.

We can take these practices to more of an extreme:

  • Deploy to production after no more than a couple of hours
  • Not just build but deploy to production in under ten minutes
  • Allow the business to choose to release whenever they wish because we decouple deployment from release

Each of these vertical blue lines is our team deploying a service to production during an afternoon.

I think of Continuous Deployment as full Continuous Integration. Not only are we integrating our changes with the rest of the team, but the rest of the world in our production environment. We reduce the release deployment pain to zero because we’re doing it all the time, and we get feedback from our production monitoring, our customers and our users.

David Farley recently said “Reduce cycle time and the rest falls out” at Pipeline 2015. When turning up the dial on release frequency we find we need to turn other dials too.

Collective Code-Ownership becomes Collective Product-Ownership

XP Explained suggests that “Anyone on the team can improve any part of the system at any time” but mostly discusses the idea of shared code – anyone is free to modify any part of the codebase that team owns. However, it also suggests that we need real customer involvement in the team.

We wish to move fast for two reasons.

  1. To generate value as rapidly as possible
  2. To get feedback as frequently as possible to ensure the above

Continuous Deployment helps us move fast and get valuable feedback frequently, but the freedom to deploy-continually is only practicable if our teams also have a collective responsibility for maintaining our systems in production. To make sensible decisions about what to build and release we need to also have the responsibility of being woken up when it goes wrong at 2am.

Continuous deployment and collective ownership of infrastructure operations means we can move fast, but then the bottleneck becomes the customer. We can learn whether our changes are having the desired effect rapidly, but the value of the feedback is not fully realised if we cannot act upon the feedback to change direction until a scheduled customer interaction such as a prioritisation meeting.

Hence we extend collective ownership to all aspects of product development.

  1. Product planning
  2. Coding
  3. Keeping it running in production

Everyone on the team should not only be free, but encouraged to

  • Change any part of the codebase
  • Change any part of our production infrastructure
  • Discuss product direction with business decision makers.

Products not Projects

Collective product ownership implies some more things. Instead of giving teams a project to build a product or some major set of features over a time period, the team is given ownership of one or more Products and tasked with adding value to that product. This approach allows for full collective product ownership, close collaboration with customers, and fully embracing change as we learn which features turn out to be valuable and which do not.

This approach is more similar to the military idea of Commander’s Intent. The team is aware of the high-level business goals, but it is up to the team, with an embedded customer, to develop their own plans to transform that thought into action.

Hypotheses as well as Stories

User stories are placeholders for conversation and delivery of small units of customer-visible functionality. These are useful, but often we make decisions about which features to prioritise based on many assumptions. If we can identify these assumptions we can construct tests to confirm whether they are accurate and reduce the risk of spending time implementing the feature.

When working in close collaboration with customers to test assumptions and collectively decide what to do there’s also less need for estimation of the cost of stories, and there’s certainly less need to maintain long backlogs of stories that we plan to do in the future. We can focus on what we need to learn and build in the immediate future.

Test-First Programming becomes Monitoring-First Programming

Or TDD becomes MDD. When we collectively own the whole product lifecycle we can write our automated production monitoring checks first, and see them fail before we start implementing our features.

This forces us to make sure that features can be easily monitored, and helps us avoid scope creep, just like TDD, while ensuring we have good production monitoring coverage.

It also helps us have trust in our production systems and the courage to make frequent changes to our production systems.

Just like with TDD, MDD gives us feedback about the design of our systems, which we can use to improve our designs. It also gives us a rhythm for doing so.

Continuous Learning

Nearly all of the above are designed to maximise learning, whether it is from our peers during development, our production environment when we integrate our changes, our users when we release changes, or our customers when we test hypotheses.

But it’s also important to have space for individual learning; it helps retain people and benefits the team.

Extreme Programming practices have changed in the last 15 years because we are continually learning. Some of the ways to provide space for learning include

  • Gold cards/20% time – provide dedicated and regular time for individuals in the team to do work of their own choosing. Providing opportunity for bottom-up innovation.
  • Dev-exchanges – regularly swap developers with other organisations to allow for sharing of ideas between organisations
  • Meetups and Conferences – Encouraging each other to attend and speak at conferences and local meetups helps accelerate learning from other organisations.
  • Team-Rotations – Regularly swapping people between teams inside the organisation helps spread internal ideas around.

Posted by & filed under Conferences, ContinuousDelivery, XP.

One of the more interesting questions that came up at Pipeline Conference was:

“How can we mitigate the risk of releasing a change that damages our data?”

When we have a database holding data that may be updated and deleted, as well as inserted/queried, then there’s a risk of releasing a change that causes the data to be mutated destructively. We could lose valuable data, or worse – have incorrect data upon which we make invalid business decisions.

Point-in-time backups are insufficient. For many organisations, simply being unable to access the missing or damaged data for an extended period of time while a backup is restored would have an enormous cost. Invalid data used to make business decisions could also result in a large loss.

Worse, with most kinds of database it’s much harder to roll back the database to a point in time than it is to roll back our code. It’s also hard to isolate and roll back the bad data, while retaining the good data inserted since a change.

How can we avoid release-paralysis when there’s risk of catastrophic data damage if we release bad changes?

Practices like having good automated tests and pair programming may reduce the risk of releasing a broken change – but in the worst-case scenario where they don’t catch a destructive bug, how can we mitigate its impact?

Here are some techniques I think can help.

Release more Frequently

This may sound counter-intuitive. If every release we make has a risk of damaging our data, surely releasing more frequently increases that risk?

There has been lots written about this. The reality seems to be that the more frequent our releases, the smaller they are, which means the chances of one causing a problem are reduced.

We are able to reason about the impact of a tiny change more easily than a huge change. This helps us to think through potential problems when reviewing before deployment.

We’re also more easily able to confirm that a small change is behaving as expected in production, which means we should notice any undesirable behaviour more quickly, especially if we are practising monitoring-driven development.

Attempting to release more frequently will likely force you to think about the risks involved in releasing your system, and consider other ways to mitigate them. Such as…

Group Data by Importance

Not all data is equally important. You probably care a lot that financial transactions are not lost or damaged, but you may not care quite so much whether you know when a user last logged into your system.

If every change you release is theoretically able to both update the user’s last logged in date, and modify financial transactions, then there’s some level of risk that it does the latter when you intended it to do the former.

Simply using different credentials and permissions to control which parts of your system can modify what data, can increase your confidence in changing less critical parts of your system. Most databases support quite fine-grained permissions, to restrict what applications are able to do, and you can also physically separate different categories of data.

Separate Reads from Writes

If you separate the responsibilities of reading/writing data into separate applications (or even clearly separated parts of the same application), you can make changes to code that can only read data with more peace of mind, knowing there’s limits to how badly it can go wrong.

Command Query Responsibility Segregation (CQRS) can also help simplify conceptual models, and simplify certain performance problems.

Make Important data Immutable

If your data is very important, why allow it to be updated at all? If it can’t be changed, then there’s no risk of something you release damaging it.

There’s no reason you should ever need to alter a financial transaction or the fact that an event has occurred.

There are often performance reasons to have mutable data, but there’s rarely a reason that your canonical datastore needs to be mutable.

I like using append-only logs as the canonical source of data.

If changes to a data model are made by playing through an immutable log, then we can always recover data by replaying the log with an older snapshot.
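As an illustrative sketch of the idea (the types here are invented for the example):

// Illustrative sketch: current state is derived by replaying an immutable event
// log on top of an older snapshot. The events themselves are never modified.
class AccountBalance {
    private long pence;

    AccountBalance(long snapshotPence) {
        this.pence = snapshotPence;
    }

    // Replaying the same log over the same snapshot always reproduces the same
    // state, so state can be rebuilt even after a bad release.
    void replay(List<MoneyMoved> events) {
        for (MoneyMoved event : events) {
            pence += event.pence;
        }
    }

    long pence() { return pence; }
}

class MoneyMoved {
    final long pence;
    MoneyMoved(long pence) { this.pence = pence; }
}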

If you have an existing system with lots of mutable database state, and you can’t easily change it, you may be able to get some of the benefits by using your database well. Postgres allows you to archive its Write-Ahead-Logs. If you archive a copy of these you can use them to restore the state of the database at any arbitrary point in time, and hence recover data even if it was not captured in a snapshot backup.

Delayed Replicas

Let’s say we mess up really badly and destroy/damage data. Having good snapshot backups probably isn’t going to save us, especially if we have a lot of data. Restoring a big database from a snapshot can take a significant amount of time. You might even have to do it with your system offline or degraded, to avoid making the problem worse.

Some databases have a neat feature of delayed replication. This allows you to have a primary database, and then replicate changes to copies, some of which you delay by a specified amount of time.

This gives you a window of opportunity to spot a problem. If you do spot a problem you have the option to failover to a slightly old version, or recover data without having to wait for a backup to restore. For example you could have a standby server that is 10 minutes delayed, another at 30mins, and another at an hour.

When you notice the problem you can either failover to the replica or stop replication and merge selected data back from the replica to the master.

Even if your database can’t do this, you could also build it in at the application level. Particularly if your source of truth is a log or event-stream.

Verify Parallel Producers

There will always be some changes that are riskier than others. It’s easy to shy away from updating code that deals with valuable data. This in turn makes the problem worse: it can lead to the valuable code being the oldest and least clean code in your system. Old and smelly code tends to be riskier to change and release.

Steve Smith described an application pattern called verify branch by abstraction that I have used successfully to incrementally replace consumers of data, such as reporting tools or complex calculations.

A variation of this technique can also be used to incrementally and safely make large, otherwise risky changes, to producers of data. i.e. things that might need to write to stores of important data and can potentially damage them.

In this case we fork our incoming data just before the point at which we want to make a change. This could be achieved by sending HTTP requests to multiple implementations of a service, by having two implementations of a service ingesting different event logs, or simply having multiple implementations of an interface within an application, one of which delegates to both the old and new implementation.

At whatever stage we fork our input, we write the output to a separate datastore that is not yet used by our application.

We can then compare the old and new datastore after leaving it running in parallel for a suitable amount of time.

If we are expecting it to take some time to make our changes, it may be worth automating this comparison of old and new. We can have the application read from both the old and the new datastores, and trigger an alert if the results differ. If this is too difficult we could simply automate periodic comparisons of the datastores themselves and trigger alerts if they differ.
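Here’s an illustrative sketch of that last variant, with the fork behind a single interface inside the application (the types and names are invented for the example):

// Illustrative sketch: fork writes to both the old and new producer, keep trusting
// the old datastore, and alert when the two disagree.
interface OrderWriter {
    void record(String orderId, String orderDetails);
    String read(String orderId);
}

class VerifyingOrderWriter implements OrderWriter {
    private final OrderWriter oldWriter;   // writes to the live datastore
    private final OrderWriter newWriter;   // writes to the new, not-yet-trusted datastore
    private final Consumer<String> alert;

    VerifyingOrderWriter(OrderWriter oldWriter, OrderWriter newWriter, Consumer<String> alert) {
        this.oldWriter = oldWriter;
        this.newWriter = newWriter;
        this.alert = alert;
    }

    @Override
    public void record(String orderId, String orderDetails) {
        oldWriter.record(orderId, orderDetails);
        newWriter.record(orderId, orderDetails);
    }

    @Override
    public String read(String orderId) {
        String trusted = oldWriter.read(orderId);
        String candidate = newWriter.read(orderId);
        if (!Objects.equals(trusted, candidate)) {
            alert.accept("Old and new producers disagree for order " + orderId);
        }
        return trusted; // keep serving the answer from the datastore we trust
    }
}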

Summary

If you’re worried about the risk to your data of changing your application, look at how you can change your design to reduce the risk. Releasing less frequently will just increase the risk.

Posted by & filed under Conferences, ContinuousDelivery, XP.

Alex and I recently gave a talk at Pipeline Conference about our approach of testing in production.

With our limited time we focused on things we check in production. Running our acceptance/integration tests, performance tests, and data fuzzing against our production systems. We also prefer doing user acceptance testing and exploratory testing in production.

In a Continuous-Deployment environment with several releases a day there’s little need or time for staging/testing environments. They just delay the rate at which we can make changes to production. We can always hide incomplete/untested features from users with Feature Toggles.

“How do you cope with junk data in production?”

The best question we were asked about the approach of both checking and testing in production, was “How do you cope with the junk test data that it produces?”. Whether it is an automated system injecting bad data into our application, or a human looking for new ways to break the system, we don’t want to see this test data polluting real users’ views or reports. How do we handle this?

Stateless or Read Only Applications

Sometimes we cheat because it’s possible to make a separately releasable part of a system entirely stateless. The application is effectively a pure function. Data goes in, data comes out with some deterministic transformation.

These are straightforward to both check and test in production, as no action we take can result in unexpected side-effects. It’s also very hard to alter the behaviour of a stateless system, but not impossible – for example if you overload the system, its performance will be altered.

Similarly, we can test and check read-only applications to our heart’s content without worrying about data we generate. If we can keep things that read and write data separate, we don’t have to worry about any testing of the read-only parts.

Side-Effect Toggles

When we do have side-effects, if we make the side-effects controllable we can avoid triggering them except when explicitly checking that they exist.

For example, an ad unit on a web page is generally read-only in that no action you can perform with it can change it. However, it does trigger side effects in that the advertising company can track that you are viewing or clicking on the ad.

If we had a way of loading the ad, but could disable its ability to send out tracking events, then we can check any other behaviour of the ad without worrying about the side effects. This technique is useful for running selenium webdriver tests against production systems to check the user interactions, without triggering side effects.

In a more complex application we could have the ability to only grant certain users read-only access. That way we can be sure that bots or humans using those accounts can’t generate invalid data.

Data Toggles

Ultimately, if we are going to check or test that our production systems are behaving as we expect, we do need to be able to write data. One way to deal with this is data-toggles. A simple boolean flag against a record which indicates the data is test data. You can then use a similar flag at a user or query level to hide/show the test data.
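As an illustrative sketch (again with invented types), the toggle is just a flag on the record plus a filter applied for users who shouldn’t see test data:

// Illustrative sketch: a record-level test-data flag, filtered out for real users.
class PageView {
    final String url;
    final boolean testData; // the data toggle: true for records created by tests or bots

    PageView(String url, boolean testData) {
        this.url = url;
        this.testData = testData;
    }
}

// User.canSeeTestData() stands in for whatever user/permission model already exists.
static List<PageView> visibleTo(User user, List<PageView> views) {
    return views.stream()
            .filter(view -> user.canSeeTestData() || !view.testData)
            .collect(toList());
}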

This might sound like a bit of a pain, but often this functionality is needed in any case to fulfil business requirements –

Reporting systems often need a way to filter out data which is invalid, anomalous, or outdated. Test data is just one type of data that is invalid in normal reports.

Many systems need a security system to control which users have access to what data. This is exactly what we want to achieve – hiding data generated by test users from real users.

We can often re-use the permissions and filtering systems that we needed to build anyway, to hide our test data.