Posted by & filed under Java.

A while back I promised to follow up from this tweet to elaborate on the fun I was having with Java’s new Records (currently preview) feature.

Records, like lambdas and default methods on interfaces, are tremendously useful language features because they enable many different patterns and uses beyond the obvious.

Java 8 brought lambdas, with lots of compelling uses for streams. What I found exciting at the time was that, for the first time, lots of things we’d previously have had to wait for as new language features could become library features instead. While waiting for lambdas we got a Java 7 release with try-with-resources; if we’d already had lambdas we could have implemented something similar in a library without needing a language change.

There’s often lots one can do with a bit of creativity. Even if Brian Goetz does sometimes spoil one’s fun ¬_¬

https://twitter.com/nipafx/status/1028979167591890944

Records are another such exciting addition to Java. They provide a missing feature that’s hard to compensate for in libraries, due to sensible limitations on other features (e.g. default methods on interfaces not being able to override equals/hashCode).

Here are a few things that records help us do that would otherwise have to wait indefinitely to appear in the core language.


Implicitly Implement (Forwarding) Interfaces

Java 8 gave us default methods on interfaces. These allowed us to mix together behaviour defined in multiple interfaces. One use of this is to avoid having to re-implement all of a large interface if you just want to add a new method to an existing type. For example, adding a .map(f) method to List. I called this the Forwarding Interface pattern.

Using a forwarding interface still left us with a fair amount of boilerplate, just to delegate to a concrete implementation. Here’s a MappableList definition using a ForwardingList.

class MappableList<T> implements List<T>, ForwardingList<T>, Mappable<T> {
   private List<T> impl;
 
   public MappableList(List<T> impl) {
       this.impl = impl;
   }
 
   @Override
   public List<T> impl() {
       return impl;
   }
}

The map(f) implementation is defined in Mappable<T> and the List<T> implementation is defined in ForwardingList<T>. The entire body of MappableList<T> is boilerplate that delegates to a given List<T> implementation.
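For reference, those two mixin interfaces might look roughly like this. It’s a sketch rather than the linked implementation, with only a couple of the forwarded List methods shown:

interface ForwardingList<T> extends List<T> {
    List<T> impl();

    // Each List method forwards to impl(); two shown for brevity.
    @Override default int size() { return impl().size(); }
    @Override default Iterator<T> iterator() { return impl().iterator(); }
    // ... and so on for the remaining List methods
}

interface Mappable<T> {
    List<T> impl();

    // map(f) builds a new list by applying f to each element of the delegate.
    default <R> List<R> map(Function<T, R> f) {
        return impl().stream().map(f).collect(toList());
    }
}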

We can improve on this a bit using anonymous types, thanks to JDK 10’s var. We don’t have to define MappableList<T> at all. We can define it inline, using an intersection cast and a lambda that supplies the delegate.

var y = (IsA<List<String>> & Mappable<String> & FlatMappable<String> & Joinable<String>)
    () -> List.of("Anonymous", "Types");

Full implementation

This is probably a bit obscure for most people. Intersection casts aren’t commonly used. You’d also have to define your desired “mix” of behaviours at each usage site.

Records give us a better option. A record definition can implicitly provide the boilerplate in the above MappableList definition.

public record EnhancedList<T>(List<T> inner) implements
       ForwardingList<T>,
       Mappable<T>,
       Filterable<T, EnhancedList<T>>,
       Groupable<T> {}
 
interface ForwardingList<T> extends List<T>, Forwarding<List<T>> {
   List<T> inner();
   //…
}

Here we have defined a record with a single component named “inner”. This automatically generates an accessor called inner(), which implicitly implements the inner() method on ForwardingList. None of the boilerplate from the above MappableList is needed. Here’s the full code. Here’s an example using it to map over a list.

Decomposing Records

Let’s define a Colour record

public record Colour(int red, int green, int blue) {}

This is nice and concise. However, what if we want to get the constituent parts back out again?

Colour colour = new Colour(1,2,3);
var r = colour.red();
var g = colour.green();
var b = colour.blue();
assertEquals(1, r);
assertEquals(2, g);
assertEquals(3, b);

Can we do better? How close can we get to object destructuring?

How about this.

Colour colour = new Colour(1,2,3);
 
colour.decompose((r,g,b) -> {
   assertEquals(1, r.intValue());
   assertEquals(2, g.intValue());
   assertEquals(3, b.intValue());
});

How can we implement this in a way that requires minimal boilerplate? Default methods on interfaces come to the rescue again. What if we could get this additional sugary goodness on any record, simply by implementing an interface?

public record Colour(int red, int green, int blue) 
   implements TriTuple<Colour,Integer,Integer,Integer> {}

Here we’re making our Colour record implement an interface so it can inherit behaviour from that interface.

Let’s make it work…

We’re passing the decompose method a lambda that accepts three values. We want the implementation to invoke that lambda, passing the record’s constituent values (red, green, blue) as arguments.

Firstly let’s declare a default method in our TriTuple interface that accepts a lambda with the right signature.

interface TriTuple<TRecord extends Record & TriTuple<TRecord, T, U, V>, T, U, V> {
    default void decompose(TriConsumer<T, U, V> withComponents) {
        //
    }
}
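TriConsumer (and the TriFunction we’ll use shortly) are not in the JDK; minimal versions might look like this:

@FunctionalInterface
interface TriConsumer<T, U, V> {
    void accept(T t, U u, V v);
}

@FunctionalInterface
interface TriFunction<T, U, V, R> {
    R apply(T t, U u, V v);
}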

Next we need a way of extracting the component parts of the record. Fortunately Java allows for this. There’s a new method Class::getRecordComponents that gives us an array of the constituent parts.

This lets us extract each of the three parts of the record and pass them to the lambda.

var components = this.getClass().getRecordComponents();
try {
    withComponents.accept(
        (T) components[0].getAccessor().invoke(this),
        (U) components[1].getAccessor().invoke(this),
        (V) components[2].getAccessor().invoke(this)
    );
} catch (ReflectiveOperationException e) {
    throw new IllegalStateException(e);
}

There’s some tidying we can do, but the above works. A very similar implementation would allow us to return a result built with the component parts of the record as well.

Colour colour = new Colour(1,2,3);
var sum = colour.decomposeTo((r,g,b) -> r+g+b);
assertEquals(6, sum.intValue());
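That decomposeTo variant might look something like this sketch, using the same reflective component access (error handling simplified):

@SuppressWarnings("unchecked")
default <R> R decomposeTo(TriFunction<T, U, V, R> withComponents) {
    var components = this.getClass().getRecordComponents();
    try {
        return withComponents.apply(
            (T) components[0].getAccessor().invoke(this),
            (U) components[1].getAccessor().invoke(this),
            (V) components[2].getAccessor().invoke(this)
        );
    } catch (ReflectiveOperationException e) {
        throw new IllegalStateException(e);
    }
}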

Structural Conversion

Sometimes the types get in the way of people doing what they want to do with the data. However wrong it may be ¬_¬

Let’s see if we can allow people to convert between Colours and Towns.

public record Person(String name, int age, double height) 
    implements TriTuple<Person, String, Integer, Double> {}
public record Town(int population, int altitude, int established)
    implements TriTuple<Town, Integer, Integer, Integer> { }
 
 
Colour colour = new Colour(1, 2, 3);
Town town = colour.to(Town::new);
assertEquals(1, town.population());
assertEquals(2, town.altitude());
assertEquals(3, town.established());

How do we implement the to(..) method? We’ve already done it! It accepts a method reference to Town’s constructor, which has the same signature and implementation as our decomposeTo method above. So we can just alias it.

default <R extends Record & TriTuple<R, T, U, V>> R to(TriFunction<T, U, V, R> ctor) {
   return decomposeTo(ctor);
}

Replace Property

We’ve now got a nice TriTuple utility interface allowing us to extend the capabilities that tri-records have.

Another nice feature would be to create a new record with just one property changed. Imagine we’re mixing paint and we want a variant on an existing shade. We could just add more of one colour, not start from scratch.

Colour colour = new Colour(1,2,3);
Colour changed = colour.with(Colour::red, 5);
assertEquals(new Colour(5,2,3), changed);

We’re passing the .with(..) method a method reference to the property we want to change, as well as the new value. How can we implement .with(..)? How can it know that the passed method reference refers to the first component value?

We can in fact match by name.

The RecordComponent type from the standard library that we used above can give us the name of each component of the record.

We can get the name of the passed method reference by using a functional interface that extends Serializable. This lets us access the name of the method the lambda is invoking; in this case it gives us back the name “red”.

default <R> TRecord with(MethodAwareFunction<TRecord, R> prop, R newValue) { 
    //
}

MethodAwareFunction extends another utility interface MethodFinder which provides us access to the Method invoked and from there, the name.
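As a rough sketch (the linked implementation differs in detail), MethodFinder relies on the synthetic writeReplace method that serializable lambdas get, which returns a SerializedLambda describing the method the lambda refers to:

interface MethodFinder extends Serializable {
    default SerializedLambda serializedLambda() {
        try {
            Method writeReplace = getClass().getDeclaredMethod("writeReplace");
            writeReplace.setAccessible(true);
            return (SerializedLambda) writeReplace.invoke(this);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    // e.g. "red" when the lambda is the method reference Colour::red
    default String methodName() {
        return serializedLambda().getImplMethodName();
    }
}

interface MethodAwareFunction<T, R> extends Function<T, R>, MethodFinder {}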

The last challenge is reflectively accessing the constructor of the type we’re trying to create. Fortunately we’re passing the type information to our utility interface at declaration time:

public record Colour(int red, int green, int blue)
    implements TriTuple<Colour,Integer,Integer,Integer> {}

We want the Colour constructor, which we can get from Colour.class. We can obtain that by reflectively reading the first type argument of the TriTuple interface: Class::getGenericInterfaces() gives us the implemented interfaces, ParameterizedType::getActualTypeArguments() gives us their type arguments, and taking the first yields a Class<Colour>.
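A sketch of that reflective lookup, assuming the record directly implements TriTuple:

@SuppressWarnings("unchecked")
default Class<TRecord> recordType() {
    for (Type iface : getClass().getGenericInterfaces()) {
        if (iface instanceof ParameterizedType
                && ((ParameterizedType) iface).getRawType() == TriTuple.class) {
            return (Class<TRecord>) ((ParameterizedType) iface).getActualTypeArguments()[0];
        }
    }
    throw new IllegalStateException("Record does not directly implement TriTuple");
}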

Here’s a full implementation.
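To give a flavour of how those pieces fit together, a simplified with(..) could look like the following sketch (the linked implementation handles more than this):

default <R> TRecord with(MethodAwareFunction<TRecord, R> prop, R newValue) {
    try {
        String propertyName = prop.methodName(); // e.g. "red" for Colour::red
        var components = recordType().getRecordComponents();

        Class<?>[] types = new Class<?>[components.length];
        Object[] args = new Object[components.length];
        for (int i = 0; i < components.length; i++) {
            types[i] = components[i].getType();
            args[i] = components[i].getName().equals(propertyName)
                    ? newValue
                    : components[i].getAccessor().invoke(this);
        }
        // Invoke the canonical constructor with the one replaced value.
        return recordType().getDeclaredConstructor(types).newInstance(args);
    } catch (ReflectiveOperationException e) {
        throw new IllegalStateException(e);
    }
}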

Automatic Builders

We can extend the above to have some similarities with the builder pattern, without having to create a builder manually each time.

We’ve already got our .with(namedProperty, value) method to build a record step by step. All we need is a way of creating a record with default values that we can replace with our desired values one at a time.

Person sam = builder(Person::new)
   .with(Person::name, "Sam")
   .with(Person::age, 34)
   .with(Person::height, 83.2);
 
assertEquals(new Person("Sam", 34, 83.2), sam);
 
static <T, U, V, TBuild extends Record & TriTuple<TBuild, T, U, V>> TBuild builder(MethodAwareTriFunction<T, U, V, TBuild> ctor) {
    //
}

This static builder method invokes the passed constructor reference with appropriate default values. We’ll use the same SerializedLambda technique from above to find the constructor and its parameter types.

static <T, U, V, TBuild extends Record & TriTuple<TBuild, T, U ,V>> TBuild builder(MethodAwareTriFunction<T,U,V,TBuild> ctor) {
   var reflectedConstructor = ctor.getContainingClass().getConstructors()[0];
   var defaultConstructorValues = Stream.of(reflectedConstructor.getParameterTypes())
           .map(defaultValues::get)
           .collect(toList());
   return ctor.apply(
       (T)defaultConstructorValues.get(0),
       (U)defaultConstructorValues.get(1),
       (V)defaultConstructorValues.get(2)
   );
}
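The defaultValues lookup isn’t shown above; a minimal (assumed) version could simply map constructor parameter types to placeholder values:

// Hypothetical defaults; a real version would cover more types.
Map<Class<?>, Object> defaultValues = Map.of(
    int.class, 0,
    Integer.class, 0,
    long.class, 0L,
    double.class, 0.0,
    Double.class, 0.0,
    boolean.class, false,
    String.class, ""
);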

Once we’ve invoked the constructor with default values we can re-use the .with(prop,value) method we created above to build a record up one value at a time.

Example Usage

public record Colour(int red, int green, int blue) 
    implements TriTuple<Colour,Integer,Integer,Integer> {}
 
public record Person(String name, int age, double height) 
    implements TriTuple<Person, String, Integer, Double> {}
 
public record Town(int population, int altitude, int established) 
    implements TriTuple<Town, Integer, Integer, Integer> {}
 
public record EnhancedList<T>(List<T> inner) implements
    ForwardingList<T>,
    Mappable<T> {}
 
@Test
public void map() {
    var mappable = new EnhancedList<>(List.of("one", "two"));
 
    assertEquals(
        List.of("oneone", "twotwo"),
        mappable.map(s -> s + s)
    );
}
 
@Test
public void decomposable_record() {
   Colour colour = new Colour(1,2,3);
 
   colour.decompose((r,g,b) -> {
       assertEquals(1, r.intValue());
       assertEquals(2, g.intValue());
       assertEquals(3, b.intValue());
   });
 
   var sum = colour.decomposeTo((r,g,b) -> r+g+b);
   assertEquals(6, sum.intValue());
}
 
@Test
public void structural_convert() {
   Colour colour = new Colour(1, 2, 3);
   Town town = colour.to(Town::new);
   assertEquals(1, town.population());
   assertEquals(2, town.altitude());
   assertEquals(3, town.established());
}
 
@Test
public void replace_property() {
   Colour colour = new Colour(1,2,3);
   Colour changed = colour.with(Colour::red, 5);
   assertEquals(new Colour(5,2,3), changed);
 
   Person p1 = new Person("Leslie", 12, 48.3);
   Person p2 = p1.with(Person::name, "Beverly");
   assertEquals(new Person("Beverly", 12, 48.3), p2);
}
 
@Test
public void auto_builders() {
   Person sam = builder(Person::new)
           .with(Person::name, "Sam")
           .with(Person::age, 34)
           .with(Person::height, 83.2);
 
   assertEquals(new Person("Sam", 34, 83.2), sam);
}

Code is all in this test and this other test. Supporting records with arities other than 3 is left as an exercise to the reader ¬_¬

Posted by & filed under XP.

A recent twitter discussion reminded me of an interesting XTC discussion last year. The discussion topic was refactoring code to make it worse. We discussed why this happens, and what we can do about it.

I found the most interesting discussion arose from the question “when might this be a good thing?”—when is it beneficial to make code worse?

Refactorings are small, safe, behaviour-preserving transformations to code. Refactoring is a technique to improve the design of existing code without changing the behaviour. The refactoring transformations are merely a tool. The result may be either better or worse. 

Make it worse for you; make it better for someone else

Refactoring ruthlessly can keep code habitable, in line with our best understanding of the domain, even aesthetically pleasing.

Refactorings can also make the code worse. Whether the result is better or worse is in the eye of the beholder. What’s better to one person may be worse to another. What’s better for one team may be worse for another team.

For example, some teams may be more comfortable with abstraction than others. Some teams prefer code that more explicitly states how it is working at a glance. Some people may be comfortable with OO design patterns and find functional programming idioms unfamiliar, and vice versa.

You may refactor the code to a state you’re less happy with but the team as a whole prefers. 

Refactoring the code through different forms also allows for conversations to align on a preferred style in a team. After a while you can often start to predict what others on the team are going to think of a given refactoring even without asking them. 

Making refactoring a habit, e.g. as part of the TDD cycle accelerates this, as do mechanisms for fast feedback between each person in the team—such as pairing with rotation or collective group code review.

Learning through Exploration

Changing the structure of code without changing its behaviour can help to understand what the code’s doing, why it’s written in that way, how it fits into the rest of the system. 

In his book “Working Effectively with Legacy Code”, Michael Feathers calls this “Scratch Refactoring”. Refactor the code without worrying about whether your changes are safe, or even whether they make it better.

Then throw those refactorings away. 

Exploratory refactoring can be done even when there’s no tests, even when you don’t have enough understanding of the system to know if your change is better or worse, even when you don’t know the acceptance criteria for the system.

Moulding the code into different forms that have the same behaviour can increase your understanding of what that core behaviour is.

A sign it’s safe to take risks

If every refactoring we perform makes the code better, it seems likely that we could be more courageous in our refactoring attempts.

If we only tackle the changes where we know what better looks like, and leave scary code alone, the system won’t stay simple.

If we’re attempting to improve code we don’t fully understand, and don’t intuitively know the right design for, we’ll get it wrong some of the time.

It’s easy to try so hard to avoid the risk of bad things happening that we also get in the way of good things happening.

Many teams use gating code review before code may make its way to production. A gate established to stop bad code making it into production also slows down good code getting to production.

Refactorings are often small steps towards a deeper insight into the domain of the code we’re working on. Sometimes those steps will be in a useful direction, sometimes wrong. All of them will build up understanding in the team. Not all of them will be unquestionably better at each integration point, and they could easily be filtered out by a risk-averse code review gate. Avoiding the risk that a refactoring might take us down the wrong path may rob us of the chance of a breakthrough in the next refactoring, or the one after.

A team that’s not afraid to make improvements to the system will also get it wrong some of the time. That has to be ok. We learn as much or more from the failures.

Making it safe to make code worse

Extreme programming practices really help create an environment where it’s safe to experiment with code in this manner.

Pair programming means you’ve got a second person to catch some of the riskiest things that could happen and give immediate feedback in the moment. It gives two perspectives on the shape the code should be in. Tom Johnson calls this optician-style: “Do you prefer this… or this?” Refactorings are small changes so it’s feasible to switch back and forth between each structure to compare and consider together.

Group code review (reviewing code together as a team, after it’s already in production) can build a shared understanding of what the team considers good code. It helps you foresee the preferences of the rest of your team. Between you, you build a better understanding of the code than you could even in a pair, spot the refactoring paths that have made code worse rather than better, and highlight changes to make the next time you’re in the area.

Continuous integration means we’re only making small steps before getting feedback from integrating the code. The size of our mistakes is limited.

Test Driven Development gives us a safety net that tells us when our refactoring may have not just changed the structure of the code but also inadvertently the behaviour. i.e. it wasn’t a refactoring. Test suites going red during a refactoring is a “surprise” we can learn from. We predict the suite will stay green. If it goes red then there’s something we didn’t fully understand about the code. Surprises are where learning happens.

Test Driven Development also makes refactoring habitual. Every micro-iteration of behaviour we perform to the system includes refactoring. Tidying the implementation, trying out another approach, simplifying the test, improving its diagnostic power (maybe not strictly a refactoring). If you never move onto writing the next test without doing at least some refactoring you’ll build up the habit and skill at refactoring fast. If you do lots of refactorings some of them will make things worse, and that’s ok. 

Posted by & filed under XP.

There are many reasons to consider hiring inexperienced software engineers into your team, beyond the commonly discussed factors of cost and social responsibility.

Hire to maximise team effectiveness, not to maximise team size. Adding more people increases the communication and synchronisation overhead in the team. Growing a team has rapidly diminishing returns.

However, adding the right people, perspectives, skills, and knowledge into a team can transform that team’s impact. Instantly unblocking problems that would have taken days of research. Resolving debates that would otherwise have paralysed progress. Striking the right balance between planning and action.

It’s easy to undervalue inexperienced software engineers as part of a healthy team mix. While teams made up entirely of senior software engineers can be highly effective, there are many benefits beyond cost and social responsibility to hiring entry-level and junior software engineers onto your team.

Fresh Perspectives

Experienced engineers have learned lots of so-called “best practices” or dogma. Mostly these are good habits that are safer ways of working, save time, and aid learning. On the other hand, sometimes the context has changed and these practices are no longer useful, but we carry on doing them anyway out of habit. Sometimes there’s a better way now that tech has moved on, and we haven’t even stopped to consider it.

There’s a lot of value in having people on the team who’ve yet to develop the same biases. People who’ll force you to think through and articulate why you do the things you’ve come to take for granted. The reflection may help you spot a better way.

To take advantage you need sufficient psychological safety that anyone can ask a question without fear of ridicule. This also benefits everyone.

Incentive for Simplicity and Safety

A team of experienced engineers may be able to tolerate a certain amount of accidental code complexity. Their expertise may enable them to work relatively safely despite missing test safety nets and gaps in their monitoring. I’m sure you know better ;)

Needing to make our code simple enough for a new software engineer to understand and change exerts positive pressure on our code quality.

Having to make it safe to fail, protecting everyone on the team from making a change that takes down production or corrupts data, helps us all. We’re all human.

Don’t have any junior engineers? What would you do differently if you knew someone new to programming was joining your team next week? Which of those things should you be doing anyway? How many would pay back their investment even with experienced engineers? How much risk and complexity are you tolerating? What’s its cost?

Growth opportunity for others

Teaching, advising, mentoring, coaching less experienced people on the team can be a good development opportunity for others. Teaching helps deepen your own understanding of a topic. Practising your ability to lift others up will serve you well.

Level up fast

It can be humbling how swiftly new developers can get up to speed and become highly productive. Particularly in an environment that really values learning. Pair programming can be a tremendous accelerator for learning through doing. True pairing, i.e. solving problems together, rather than spoonfeeding or observing.

Tenure

The amount of software engineering experience someone has is one indicator of the impact they can have. The amount of experience within your organisation is also relevant. If you only hire senior people and your org is not growing fast enough to provide them with further career development opportunities, they are more likely to leave to find growth elsewhere. It can be easier to find growth opportunities for people earlier in their career.

A mix of seniorities can help increase the average tenure of developers in your organisation—assuming you will indeed support them with their career development.

Action over Analysis

Junior engineers often bring a healthy bias towards getting on with doing things over excessive analysis. Senior engineers sometimes get stuck evaluating foreseen possibilities, finding “the best tool for the job”, or debating minutiae ad nauseam. Balancing the desire to do the right things right with the desire to do something, anything, quickly can be transformational for a team.

Hire Faster

There are more inexperienced people, so it’s quicker to find candidates if we relax our experience and skill requirements. Some underrepresented minorities may be less underrepresented at more junior levels.

The inexperienced engineer you hire today could be the senior engineer you need in years to come.

To ponder

What other reasons have I missed? In what contexts is the opposite true? When would you only hire senior engineers?

Posted by & filed under ContinuousDelivery, XP.

When I ask people about their approach to continuous integration, I often hear a response like

“yes of course, we have CI, we use…”.

When I ask people about doing continuous integration I often hear “that wouldn’t work for us…”

It seems the practice of continuous integration is still quite extreme. It’s hard, takes time, requires skill, discipline and humility.

What is CI?

Continuous integration is often confused with build tooling & automation. CI is not something you have, it’s something you do.

Continuous integration is about continually integrating. Regularly (several times a day) integrating your changes (in small & safe chunks) with the changes being made by everyone else working on the same system.

Teams often think they are doing continuous integration, but are using feature branches that live for hours, days, or even weeks.

Code branches that live for much more than an hour are an indication you’re not continually integrating. You’re using branches to maintain some degree of isolation from the work done by the rest of the team.

I like the current Wikipedia definition: “continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day.”

It’s worth calling out a few bits of this description.

CI is a practice: something you do, not something you have. You might have “CI tooling”: automated build and test tooling that helps check all changes.

Such tooling is good and helpful, but having it doesn’t mean you’re continually integrating.

Often the same tooling is even used to make it easier to develop code in isolation from others. The opposite of continuous integration.

I don’t mean to imply that developing in isolation and using the tooling this way is bad. It may be the best option in context. Long lived branches and asynchronous tooling have enabled collaboration amongst large groups of people across distributed geographies and timezones.

CI is a different way of working. Automated build and test tooling may be a near-universal good (even a hygiene factor). The practice of continuous integration is very helpful in some contexts, even if less universally beneficial.

…all developer working copies…

All developers on the team integrating their code, not just those making small changes. If bigger features are worked on in isolation for days, or until they’re complete, you’re not integrating continuously.

…to a shared mainline…

Code is integrated into the same branch. Often “master” or “main” in git parlance. It’s not just about everyone pushing their code to be checked by a central service. It’s about knowing it works when combined with everyone else’s work in progress, and about making it visible to the rest of the team.

…several times a day

This is perhaps the most extreme part. The part that highlights just how unusual a practice continuous integration really is. Despite everyone talking about it.

Imagine you’re in a team of five developers, working independently, practising CI. Aiming to integrate your changes roughly once an hour. You might see 40 commits to main in a single day. Each commit representing a functional, working, potentially releasable state of the system.

(Teams I’ve worked on haven’t seen quite such a high commit rate. It’s reduced by pairing and non-coding work; nonetheless CI means a high rate of commits to the mainline branch.)

Working in this way is hard, requires a lot of discipline and skill. It might seem impossible to make large scale changes this way at first glance. It’s not surprising it’s uncommon.



Why CI?

Get Feedback

Why would we work in such a way? Integrating our changes incurs some overhead. It likely means taking time out every single hour to review the changes so far, tidy, merge, and deal with any conflicts that arise.

Continuously integrating helps us get feedback as fast as possible, like most Extreme Programming practices. It’s worth practising CI if that feedback is more valuable to you than the overhead.

Team mates

We may get feedback from other team members—who will see our code early when they pull it. Maybe they have ideas for doing things better. Maybe they’ll spot a conflict or an opportunity from their knowledge and perspective. Maybe you’ve both thought to refactor something in subtly different ways and the difference helps you gain a deeper insight into your domain.

Code

CI amplifies feedback from the code itself. Listening to this feedback can help us write more modular, supple code that’s easier to change.

If our very small change conflicts with someone else working on a different feature, it’s worth considering whether the code being changed has too many responsibilities. Why did it need to change to support both features? CI promotes modularity by creating micro-pain whenever multiple people change the same thing at the same time.

Making a large-scale change to our system via small sub-hour changes forces us to take a tidy-first approach. Often the next change we want to make is hard, not possible in less than an hour. Instead of taking our preconceived path towards our preconceived design, we are pressured to first make the change we want to make easier. Improve the design of the existing code so that the change we want to make becomes simple.

Even with this approach we’re unlikely to be able to make large scale changes in a single step. CI encourages mechanisms for integrating the code for incomplete changes, such as branch by abstraction, which further encourages modularity.

CI also exerts pressure to do more and better automated testing. If we don’t have automated checks for the behaviour of our code it may break when changed rapidly.

If our tests are brittle (coupled to the current structure of the code rather than the important behaviour) then they will fail frequently when the code is changed. If our tests are slow then we’ll waste lots of time running them regularly, which should incentivise us to invest in speeding them up.

Continuous integration of small changes exposes us to this feedback regularly.

If we’re integrating hourly then this feedback is also timely. We can get feedback on our code structure and designs before it becomes expensive to change direction.

Production

CI is a useful foundation for continuous delivery, and continuous deployment. Having the code always in an integrated state that’s safe to release.

Continuously deploying (not the same as releasing) our changes to production enables feedback from customers, users, its impact on production health.

Combat Risk

Arguably the most significant benefit of CI is that it forces us to make our changes in small, safe, low-risk steps. Constant practice ensures it’s possible when it really matters.

It’s easy to approach a radical change to our system from the comforting isolation of a feature branch. We can start pulling things apart across the codebase and shaping them into our desired structure, freed from the constraints of keeping tests passing or even our code compiling, coming back afterwards to get it working, the code compiling, and the tests passing.

The problem with this approach is that it’s high risk. There’s a high risk that our change takes a lot longer than expected and we’ll have nothing to integrate for quite some time. There’s a high risk that we get to the end and discover unforeseen problems only at integration time. There’s a high risk that we introduce bugs that we don’t detect until after our entire change is complete. There’s a high risk that our product increment and commercial goals are missed because they are blocked by our big radical change. There’s a risk we feel pressured into rushing and sacrificing code quality when problems are only discovered late during an integration phase.

CI liberates us from these risks. Rather than embarking on a grand plan all at once, we break it down into small steps that we can complete and integrate swiftly. Steps that only take a few minutes to complete.

Eventually the accumulation of these small changes unlocks product capabilities and enables releasing value. Working in small steps becomes predictable. No longer is there a big delay from “we’ve got this working” to “this is ready for release”.

This does not require us to be certain of our eventual goal and design. Quite the opposite. We start with a small step towards our expected goal. When we find something hard to change, we stop and change tack: first making a small refactoring to try and make our originally intended change easy, then going back and making the actual change.

What if we realise we’re going in the wrong direction? Well we’ve refactored our code to make it easier to change. What if we’ve made our codebase better for no reason? We’ve still won.

Collaborate Effectively

Meetings are not always popular. Especially ceremonies such as standups. Nevertheless it’s important for a team of people working towards a common goal to know where everyone else has got to: to be able to react to new information, change direction if necessary, and help each other out.

The more we work separately in isolation, the more costly and painful synchronisation points like standups can become. Catching each other up on big changes in order to know whether to adjust the plan.

Contrast this with everyone working in small, easy to digest steps, making their progress frequently visible to everyone else on the team. It’s more likely that everyone already has a good idea of where the rest of the team is at, and less time must be spent catching up. When everyone on the team is aware of where everyone else has got to, the team can actually work as a team, helping each other out to reach a goal sooner.

No-one likes endless discussions that get in the way of making progress. No-one likes costly re-work when they discover their approach conflicts with other work in the team. No-one likes wasting time duplicating work. CI enables constant progress of the whole team, at a rate the whole team can keep up with.

Arguably the most extreme continuous integration is mob programming. The whole team working on the same thing, at the same time, all the time.

Obstacles

“but we’re making a large scale change”

We touched on this above. It’s usually possible to make a large scale change via small, safe, steps. First making the change easier, then making the change. Developing new functionality side by side in the same codebase until we’re satisfied it can replace older functionality.

Indeed the discipline required to make changes this way can be a positive influence on code quality.

“but code review”

Many teams have a process of blocking code review prior to integrating changes into a mainline branch. If this code review requires interrupting someone else every few minutes this may be impractical.

Continuous integration likely requires being comfortable with changes being integrated without such a blocking, pull-request-review-style gate.

It’s worth asking yourself why you do such review and whether a blocking approach is the only way. There are alternatives that may even achieve better results.

Pair programming means all code is reviewed at the point in time it was written. It also gives the most timely feedback from someone else who fully understands the context. Pairing tends to generate feedback that improves the code quality. Asynchronous reviews all too often focus on whether the code meets some arbitrary bar—focusing on minutiae such as coding style and the contents of the diff, rather than the implications of the change on our understanding of the whole system.

Pair programming doesn’t necessarily give all the benefits of a code review. It may be beneficial for more people to be aware of each change, and to gain the perspective of people who are fresher or more detached. This can be achieved to a large extent by rotating people through pairs, but review may still be useful.

Another mechanism is non-blocking code review. Treating code review more like a retrospective. Rather than “is this code good enough to be merged” ask “what can we learn from this change, and what can we do better?”.

Consider starting each day reviewing as a team the changes made the previous day and what you can learn from them. Or stopping and reviewing recent changes when rotating who you are pair-programming with. Or having a team retrospective session where you read code together and share ideas for different approaches.

“but main will be imperfect”

Continuous integration implies the main branch is always in an imperfect state. There will be incomplete features. There may be code that would have been blocked by a code review. This may seem uncomfortable if you strive to maintain a clean mainline that the whole team is happy with and is “complete”.

Imperfection in the main branch is scary if you’re used to the main branch representing the final state of code: once code is there, it’s unlikely to change any time soon. In such a context being protective of it is a sensible response; we want to avoid mistakes we might need to live with for a long time.

However, an imperfect mainline is less of a problem in a CI context. What is the cost of a coding style violation that only lives for a few hours? What is the cost of temporary scaffolding (such as a branch by abstraction) living in the codebase for a few days?

CI suggests instead a habitable mainline branch. A workspace that’s being actively worked in. It’s not clinically clean; it’s a safe and useful environment to get work done in. An environment you’re comfortable spending lots of time in. How clean a workspace needs to be depends on the context. Compare a gardener’s or plumber’s work environment to a medical work environment.

“but how will we test it?”

Some teams separate the activities of software development from software testing. One pattern is testing features when each feature is complete, during an integration and stabilisation phase.

This allows teams to maintain a main branch that they think works, with uncertain work in progress in isolation.

However thorough our automated, manual, and exploratory testing, we’re never going to have perfect software quality. Testing at integration time might ensure integrated code meets some arbitrary quality bar, but it won’t be perfect.

CI implies a different approach. Continuous exploratory testing of the main version. Continually improving our understanding of the current state of the system. Continuously improving it as our understanding improves. Combine this with TDD and high levels of automated checks and we can have some confidence that each micro change we integrate works as intended.

Again, this sort of approach requires being comfortable with main being imperfect. Or perhaps a recognition that it is always going to be imperfect, whatever we do.

“but we need to be able to do bugfixes”

Many teams work in batches. Deploying and releasing one set of features, working on more features in feature branches, then integrating, deploying, and releasing the next batch.

Under this model they can keep a branch that represents the current deployed version of the software. When an urgent bug is discovered in production they can fix it on this branch and deploy just that change.

From such a position the prospect of making a bugfix on top of a bunch of other already integrated changes might seem alarming. What if one of our other changes causes a regression?

CI is a fundamentally different way of working, where the current state of main always captures the team’s current understanding of the most progressed, safest, least buggy system. Always deployable. Zero bugs (bugs fixed when they’re discovered). Constantly evolving through small, safe steps.

A good way to make it safe to deploy bugfixes in a CI context is to also practise continuous deployment. Every micro-change deployed to production (not necessarily released). Doing this we’ll always have confidence we can deploy fixes rapidly. We’re forced to ensure that main is always safe for bugfixes.

“but…”

There are also plenty of circumstances in which CI is not feasible or not the right approach for you. Maybe you’re the only developer! Occasional integration works well for the sporadic collaboration of people contributing to open source in their spare time. For teams distributed across wide timezones there are fewer benefits to CI: you’re not going to get fast feedback while your colleague is asleep! You can still work in and benefit from small steps regardless of whether anyone is watching.

Sometimes feedback is less important than hammering out code. If you’re working on something you could do in your sleep, and all that holds you back is how fast you can hammer out lines of code, the value of CI is much lower.

Perhaps your team is very used to working with long lived branches. Used to having the code/tests broken for extended periods while working on a problem. It’s not feasible to “just” switch to a continuous integration style. You need to get used to working in small, safe steps.

How…

Try it

Make “could we integrate what we’ve done” a question you ask yourself habitually. It fits naturally into the TDD cycle. When the tests are green consider integration. It should be safe.

Listen to the feedback

OK, you tried integrating more frequently and something broke, or things were slower. Why was that really? How could you avoid similar problems occurring while still being able to integrate regularly?

Tips when it’s hard

Combine with other Extreme Programming practices.

CI is easier alongside other Extreme Programming practices, not just TDD (which makes it safer and lends a useful cadence to development).

It’s easier when pair programming. Someone else helping remember the wider context. Someone to suggest stepping back and integrating a smaller set before going down a rabbit hole. Pairing also helps our chances of each change being safe to make. It’s more likely that others on the team will be happy with our change if our pair is on board.


CI is a lot easier with collective ownership, where you are free to change any part of the codebase to make your desired change easy.

When your change is hard to do in small steps, first tackle one thing that makes it hard. “First make the change easy”

Separate expanding and contracting. Build your new functionality alongside the old, then migrate existing usages, then finally remove the old. Each of these can be done in several small steps, as in the sketch below.
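An illustrative sketch (the names here are made up): the new method is added alongside the old in one small commit, callers migrate in further small commits, and a final commit contracts the interface by removing the old method.

record Money(long pence) {}
record Basket(long totalPence) {}

interface PriceCalculator {
    // Old method: stays until every caller has migrated in later commits.
    @Deprecated
    long priceInPence(Basket basket);

    // Expand: new method added alongside the old, expressed in terms of it for now.
    default Money price(Basket basket) {
        return new Money(priceInPence(basket));
    }

    // Contract: once no caller uses priceInPence, a final commit deletes it.
}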

Separate integrating and releasing. Integrating your code should not mean that the code necessarily affects your users. Make releasing a product/business decision with feature toggles.

Invest in fast tooling. If your build and test suite takes more than 5 minutes you’re going to struggle to do continuous integration. A 5 min build and test run is feasible even with tens of thousands of tests. However, it does require constant investment in keeping the tooling fast. This is a cost of CI, but it’s also a benefit. CI requires you to keep the tooling you need to safely integrate and release a change fast and reliable. Something you’ll be thankful for when you need to make a change fast.

That’s a lot of work…

Unlike having CI [tooling], doing CI is not for all teams. It seems uncommonly practised. In some contexts it’s impractical. In others it’s not worth the overhead. Maybe worth considering whether the feedback and risk reduction would help your team.

If you’re not doing CI and you try it out, things will likely be hard. You may break things. Try to reflect deeper than “we tried it and it didn’t work”. What made it hard to work in and integrate small changes? Should you address those things regardless?

Posted by & filed under ContinuousDelivery, Java, XP.

Pain is something we generally try to avoid; pain is unpleasant, but it also serves an important purpose.

Acute pain can be feedback that we need to avoid doing something harmful to our body, or protect something while it heals. Pain helps us remember the cause of injuries and adapt our behaviour to avoid a repeat.

As a cyclist I occasionally get joint pain that indicates I need to adjust my riding position. If I just took painkillers and ignored the pain I’d permanently injure myself over time.

I’m currently recovering from a fracture after an abrupt encounter with a pothole. The pain is helping me rest and allow time for the healing process. The memory of the pain will also encourage me to consider the risk of potholes when riding with poor visibility in the future.

We have similar feedback mechanisms when planning, building, and running software; we often find things painful.

Alas, rather than learn from pain and let it guide us, we all too often stock up on painkillers in the form of tooling or practices that let us press on obstinately doing the same thing that caused the pain in the first place.

Here are some examples…

Painful Tests

Automated tests can be a fantastic source of feedback that helps us improve our software and learn to write better software in the future. Tests that are hard to write are a sign something could be better.

The tests only help us if we listen to the pain we feel when tests are hard to write and read. If we reach for increasingly sophisticated tooling to allow us to continue doing the painful things, then we won’t realise the benefits. Or worse, if we avoid unit testing in favour of higher level tests, we’ll miss out on this valuable feedback altogether.

Here’s an example of a test that was painful to write and read, testing the sending of a booking confirmation email.

@Test
public void sendsBookingConfirmationEmail() {
    var emailSender = new EmailSender() {
        String message;
        String to;

        public void sendEmail(String to, String message) {
            this.to = to;
            this.message = message;
        }

        public void sendHtmlEmail(String to, String message) {

        }

        public int queueSize() {
            return 0;
        }
    };

    var support = new Support() {
        @Override
        public AccountManager accountManagerFor(Customer customer) {
            return new AccountManager("Bob Smith");
        }

        @Override
        public void calculateSupportRota() {

        }

        @Override
        public AccountManager superviserFor(AccountManager accountManager) {
            return null;
        }
    };


    BookingNotifier bookingNotifier = new BookingNotifier(emailSender, support);

    Customer customer = new Customer("jane@example.com", "Jane", "Jones");
    bookingNotifier.sendBookingConfirmation(customer, new Service("Best Service Ever"));

    assertEquals("Should send email to customer", customer.email, emailSender.to);
    assertEquals(
        "Should compose correct email",
        emailSender.message,
        "Dear Jane Jones, you have successfully booked Best Service Ever on " + LocalDate.now() + ". Your account manager is Bob Smith"
    );

}

  • The test method is very long at around 50 lines of code
  • We have boilerplate setting up stubbing for things irrelevant to the test such as queue sizes and supervisors
  • We’ve got flakiness from assuming the current date will be the same in two places—the test might not pass if run at midnight, or when changing the time
  • There are multiple assertions for multiple responsibilities
  • We’ve had to work hard to capture side effects

Feeling this pain, one response would be to reach for painkillers in the form of more powerful mocking tools. If we do so we end up with something like this. Note that we haven’t improved the implementation at all (it’s unchanged), but now we’re feeling a lot less pain from the test.

@Test
public void sendsBookingConfirmationEmail() throws Exception {
    var emailSender = mock(EmailSender.class);
    var support = mock(Support.class);

    BookingNotifier bookingNotifier = new BookingNotifier(emailSender, support);

    LocalDate expectedDate = LocalDate.parse("2000-01-01");
    Customer customer = new Customer("jane@example.com", "Jane", "Jones");
    when(support.accountManagerFor(customer)).thenReturn(new AccountManager("Bob Smith"));
    mockStatic(LocalDate.class, args -> expectedDate);

    bookingNotifier.sendBookingConfirmation(customer, new Service("Best Service Ever"));

    verify(emailSender).sendEmail(
        customer.email,
        "Dear Jane Jones, you have successfully booked Best Service Ever on 2000-01-01. Your account manager is Bob Smith"
    );

}
  • The test method is a quarter the length—but the implementation is as complex
  • The flakiness is gone as the date is mocked to a constant value—but the implementation still has a hard dependency on the system time.
  • We’re no longer forced to stub irrelevant detail—but the implementation still has dependencies on collaborators with too many responsibilities.
  • We only have a single assertion—but there are still as many responsibilities in the implementation
  • It’s easier to capture the side effects—but they’re still there

A better response would be to reflect on the underlying causes of the pain. Here’s one direction we could go that removes much of the pain and doesn’t need complex frameworks:

@Test
public void composesBookingConfirmationEmail() {

    AccountManagers dummyAllocation = customer -> new AccountManager("Bob Smith");
    Clock stoppedClock = () -> LocalDate.parse("2000-01-01");

    BookingNotificationTemplate bookingNotifier = new BookingNotificationTemplate(dummyAllocation, stoppedClock);

    Customer customer = new Customer("jane@example.com", "Jane", "Jones");

    assertEquals(
        "Should compose correct email",
        bookingNotifier.composeBookingEmail(customer, new Service("Best Service Ever")),
        "Dear Jane Jones, you have successfully booked Best Service Ever on 2000-01-01. Your account manager is Bob Smith"
    );

}
  • The test method is shorter, and the implementation does less
  • The flakiness is gone as the implementation no longer has a hard dependency on the system time
  • We’re no longer forced to stub irrelevant detail because the implementation only depends on what it needs
  • We only have a single assertion, because we’ve reduced the scope of the implementation to merely composing the email. We’ve factored out the responsibility of sending the email.
  • We’ve factored out the side effects so we can test them separately

My point is not that the third example is perfect (it’s quickly thrown together), nor am I arguing that mocking frameworks are bad. My point is that by learning from the pain (rather than rushing to hide it with tooling before we’ve learnt anything) we can end up with something better.

The pain we feel when writing tests can also be a prompt to reflect on our development process—do we spend enough time refactoring when writing the tests, or do we move onto the next thing as soon as they go green? Are we working in excessively large steps that let us get into messes like the above that are painful to clean up?

n.b. there are lots of better examples of learning from test feedback in chapter 20 of the GOOS book.

Painful Dependency Injection

Dependency injection seems to have become synonymous with frameworks like Spring, Guice, and Dagger, as opposed to the relatively simple idea of “passing stuff in”. Often people reach for dependency injection frameworks out of habit, but sometimes they’re used as a way of avoiding design feedback.

If you start building a trivial application from scratch you’ll likely not feel the need for a dependency injection framework at the outset. You can wire up your few dependencies yourself, passing them to constructors or function calls.

As complexity increases this can become unwieldy, tedious, even painful. It’s easy to reach for a dependency injection framework to magically wire all your dependencies together to remove that boilerplate.

However, doing so prematurely can deprive you of the opportunity to listen to the design feedback that this pain is communicating.

Could you reduce the wiring pain through increased modularity—adding, removing, or finding better abstractions?

Does the wiring code have more detail than you’d include in a document explaining how it works? How can you align the code with how you’d naturally explain it? Is the wiring code understandable to a domain expert? How can you make it more so?

Here’s a little example of some manual wiring of dependencies. While short, it’s quite painful:

public static void main(String... args) {
    var credentialStore = new CredentialStore();

    var eventStore = new InfluxDbEventStore(credentialStore);

    var probeStatusReporter = new ProbeStatusReporter(eventStore);

    var probeExecutor = new ProbeExecutor(new ScheduledThreadPoolExecutor(2), probeStatusReporter, credentialStore, new ProbeConfiguration(new File("/etc/probes.conf")));

    var alertingRules = new AlertingRules(new OnCallRota(new PostgresRotaPersistence(), LocalDateTime::now), eventStore, probeStatusReporter);

    var pager = new Pager(new SMSGateway(), new EmailGateway(), alertingRules, probeStatusReporter);

    var dashboard = new Dashboard(alertingRules, probeExecutor, new HttpsServer());
}
  • There’s a lot of components to wire together
  • There’s a mixture of domain concepts and details like database choices
  • The ordering is difficult to get right to resolve dependencies, and it obscures intent

At this point we could reach for a DI framework and @Autowired or @Inject these dependencies, and the wiring pain would disappear almost completely.

However, if instead we listen to the pain, we can spot some opportunities to improve the design. Here’s an example of one direction we could go.

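As a sketch (the grouping and the ProbeVisibility name here are illustrative assumptions rather than real class names):

public static void main(String... args) {
    var credentialStore = new CredentialStore();
    var eventStore = new InfluxDbEventStore(credentialStore);
    var probeStatusReporter = new ProbeStatusReporter(eventStore);

    // Probe execution wired separately from alerting and dashboards.
    startProbes(credentialStore, probeStatusReporter);

    // The previously missing concept: what the pager and the dashboard share
    // in order to provide visibility of probe status.
    var probeVisibility = new ProbeVisibility(
        new AlertingRules(new OnCallRota(new PostgresRotaPersistence(), LocalDateTime::now), eventStore, probeStatusReporter),
        probeStatusReporter
    );

    var pager = new Pager(new SMSGateway(), new EmailGateway(), probeVisibility);
    var dashboard = new Dashboard(probeVisibility, new HttpsServer());
}

static void startProbes(CredentialStore credentialStore, ProbeStatusReporter probeStatusReporter) {
    new ProbeExecutor(new ScheduledThreadPoolExecutor(2), probeStatusReporter, credentialStore,
            new ProbeConfiguration(new File("/etc/probes.conf")));
}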
  • We’ve spotted and fixed the dashboard’s direct dependency on the probe executor; it now uses the status reporter, like the pager.
  • The dashboard and pager shared a lot of wiring as they had a common purpose in providing visibility on the status of probes. There was a missing concept here, adding it has simplified the wiring considerably.
  • We’ve separated the wiring of the probe executor from the rest.

After applying these refactorings the top level wiring reads more like a description of our intent.

Clearly this is just a toy example, and the refactoring is far from complete, but I hope it illustrates the point: dependency injection frameworks are useful, but be aware of the valuable design feedback they may be hiding from you.

Painful Integration

It’s common to experience “merge pain” when trying to integrate long lived branches of code and big changesets to create a releasable build. Sometimes the large changesets don’t even pass tests, sometimes your changes conflict with changes others on the team have made.

One response to this pain is to reach for increasingly sophisticated build infrastructure to hide some of the pain. Infrastructure that continually runs tests against branched code, or continually checks merges between branches can alert you to problems early. Sadly, by making the pain more bearable, we risk depriving ourselves of valuable feedback.

Ironically continuous-integration tooling often seems to be used to reduce the pain felt when working on large, long lived changesets; a practice I like to call “continuous isolation”.

You can’t automate away the human feedback available when integrating your changes with the rest of the team—without continuous integration you miss out on others noticing that they’re working in the same area, or spotting problems with your approach early.

You also can’t replace the production feedback possible from integrating small changes all the way to production (or a canary deployment) frequently.

Sophisticated build infrastructure can give you the illusion of safety by hiding the pain from your un-integrated code. By continuing to work in isolation you risk more substantial pain later when you integrate and deploy your larger, riskier changeset. You’ll have a higher risk of breaking production, a higher risk of merge conflicts, as well as a higher risk of feedback from colleagues being late, and thus requiring substantial re-work.

Painful Alerting

Over-alerting is a serious problem; paging people spuriously for non-existent problems or issues that do not require immediate attention undermines confidence, just like flaky test suites.

It’s easy to respond to over-alerting by paying less and less attention to production alerts until they are all but ignored. Learning to ignore the pain rather than listening to its feedback.

Another popular reaction is to desire increasingly sophisticated tooling to handle the flakiness—from flap detection algorithms, to machine learning, to people doing triage. These often work for a while—tools can assuage some of the pain, but they don’t address the underlying causes.

The situation won’t significantly improve without a feedback mechanism in place, where you improve both your production infrastructure and approach to alerting based on reality.

The only effective strategy for reducing alerting noise that I’ve seen is: every alert results in somebody taking action to remediate it and stop it happening again—even if that action is to delete the offending alerting rule or amend it. Analyse the factors that resulted in the alert firing, and make a change to improve the reliability of the system.

Yes, this sometimes does mean more sophisticated tooling when it’s not possible to prevent the alert firing in similar spurious circumstances with the tooling available.

However it also means considering the alerts themselves. Did the alert go off because there was an impact to users, the business, or a threat to our error budget that we consider unacceptable? If not, how can we make it more reliable or relevant?

Are we alerting on low-level symptoms and causes rather than things that people actually care about?
Who cares about a server dying if no users are affected? Who cares about a traffic spike if our systems handle it with ease?

We can also consider the reliability of the production system itself. Was the alert legitimate? Maybe our production system isn’t reliable enough to run (without constant human supervision) at the level of service we desire? If improving the sophistication of our monitoring is challenging, maybe we can make the system being monitored simpler instead?

Getting alerted or paged is painful, particularly if it’s in the middle of the night. It’ll only get less painful long-term if you address the factors causing the pain rather than trying hard to ignore it.

Painful Deployments

If you’ve been developing software for a while you can probably regale us with tales of breaking production. These anecdotes are usually entertaining, and people enjoy telling them once enough time has passed that it’s not painful to re-live the situation. It’s fantastic to learn from other people’s painful experiences without having to live through them ourselves.

It’s often painful when you personally make a change and it results in a production problem, at least at the time—not something you want to repeat.

Making a change to a production system is a risky activity. It’s easy to associate the pain felt when something goes wrong with the activity of deploying to production, and seek to avoid the risk by deploying less frequently.

It’s also common to indulge in risk-management theatre: adding rules, processes, signoff and other bureaucracy—either because we mistakenly believe it reduces the risk, or because it helps us look better to stakeholders or customers. If there’s someone else to blame when things go wrong, the pain feels less acute.

Unfortunately, deploying less frequently results in bigger changes that we understand less well; inadvertently increasing risk in the long run.

Risk-management theatre can even threaten the ability of the organisation to respond quickly to the kind of unavoidable incidents it seeks to protect against.

Yes, most production issues are caused by an intentional change made to the system, but not all are. Production issues get caused by leap second bugs, changes in user behaviour, spikes in traffic, hardware failures and more. Being able to rapidly respond to these issues and make changes to production systems at short notice reduces the impact of such incidents.

Responding to the pain of deployments that break production by changing production less often, is pain avoidance rather than addressing the cause.

Deploying to production is like bike maintenance. If you do it infrequently it’s a difficult job each time and you’re liable to break something. Components seize together, the procedures are unfamiliar, and if you don’t test-ride it when you’re done then it’s unlikely to work when you want to ride. If this pain leads you to postpone maintenance, then you increase the risk of an accident from a worn chain or ineffective brakes.

A better response with both bikes and production systems is to keep them in good working order through regular, small, safe changes.

With production software changes we should think about how we can make it a safe and boring activity: how can we reduce the risk of deploying changes to production, and how can we reduce the impact of deploying bad changes to production?

Could the production failure have been prevented through better tests?

Would the problem have been less severe if our production monitoring had caught it sooner?

Might we have spotted the problem ourselves if we had a culture of testing in production and were actually checking that our stuff worked once in production?

Perhaps canary deploys would reduce the risk of a business-impacting breakage?

Would blue-green deployments reduce the risk by enabling swift recovery?

Can we improve our architecture to reduce the risk of data damage from bad deployments?

There are many, many ways to reduce the risk of deployments; we can channel the pain of bad deployments into improvements to our working practices, tooling, and architecture.

Painful Change

After spending days or weeks building a new product or feature, it’s quite painful to finally demo it to the person who asked for it and discover that it’s no longer what they want. It’s also painful to release a change into production and discover it doesn’t achieve the desired result, maybe no-one uses it, or it’s not resulting in an uptick to your KPI.

It’s tempting to react to this by trying to nail down requirements first before we build. If we agree exactly what we’re building up front and nail down the acceptance criteria then we’ll eliminate the pain, won’t we?

Doing so may reduce our own personal pain—we can feel satisfied that we’ve consistently delivered what was asked of us. Unfortunately, reducing our own pain has not reduced the damage to our organisation. We’re still wasting time and money by building valueless things. Moreover, we’re liable to waste even more of our time now that we’re not feeling the pain.

Again, we need to listen to what the pain’s telling us; what are the underlying factors that are leading to us building the wrong things?

Fundamentally, we’re never going to have perfect knowledge about what to build, unless we’re building low value things that have been built many times before. So instead let’s try to create an environment where it’s safe to be wrong in small ways. Let’s listen to the feedback from small pain signals that encourage us to adapt, and act on it, rather than building up a big risky bet that could result in a serious injury to the organisation if we’re wrong.

If we’re frequently finding we’re building the wrong things, maybe there are things we can change about how we work, to see if it reduces the pain.

Do we need to understand the domain better? We could spend time with domain experts, and explore the domain using a cheaper mechanism than software development, such as eventstorming.

Perhaps we’re not having frequent and quality discussions with our stakeholders? Sometimes minutes of conversation can save weeks of coding.

Are we not close enough to our customers or users? Could we increase empathy using personas, or attending sales meetings, or getting out of the building and doing some user testing?

Perhaps having a mechanism to experiment and test our hypotheses in production cheaply would help?

Are there lighter-weight ways we can learn that don’t involve building software? Could we try selling the capabilities optimistically, or get feedback from paper prototypes, or could we hack together a UI facade and put it in front of some real users?

We can listen to the pain we feel when we’ve built something that doesn’t deliver value, and feed it into improving not just the product, but also our working practices and habits. Let’s make it more likely that we’ll build things of value in the future.

Acute Pain

Many people do not have the privilege of living pain-free most of the time; sadly we have imperfect bodies and many live with chronic pain. Acute pain, however, can be a useful feedback mechanism.

When we find experiences and day to day work painful, it’s often helpful to think about what’s causing that pain and what we can do to eliminate the underlying causes, before we reach for tools and processes to work around or hide the pain.

Listening to small amounts of acute pain, looking for the cause and taking action sets up feedback loops that help us improve over time; ignoring the pain leads to escalating risks that build until something far more painful happens.

What examples do you have of people treating the pain rather than the underlying causes?

Posted by & filed under ContinuousDelivery, XP.

This week will be my last at Unruly; I’ll be moving on just shy of nine years from when I joined a very different company at the start of an enthralling journey.

Unruly’s grown from around a dozen people when I joined to hundreds, with the tech team growing proportionally. Team growth has been driven by needs arising from commercial success: revenue growth, investment, being acquired, and continued success today.

A constant over the past few years has been change. We had continued success partly because we successfully adapted products to rapidly changing commercial contexts. Success in turn instigated change that required more adaptation.

It’s been a privilege to be part of a company that was successful, affording me many opportunities and remaining interesting for nine years; I’d like to think I’ve played some small part in making it so.

It’s almost a meme in tech that one “should” move on to a new organisation every 2 years to be successful and learn. Those who stick in the same place for longer are sometimes even judged as lacking ambition or being content with not learning new things. “Do they have 9 years of experience or one year of experience 9 times?” people quip.

There are, however, benefits of staying at the same company for an extended period of time that don’t get talked about a great deal.

Witness Tech Lifecycle

A cliched reaction when reading code is “who [what idiot] wrote this?”. It’s easy to blame problems on the previous administration. However, to do so is to miss a learning opportunity. If we followed Norm Kerth’s prime directive:

“Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

We could see code or systems that surprise us as an opportunity to understand what the context was that led people to build things in this way. Yes, perhaps they did not have the skill to see your “obviously better” solution. On the other hand maybe they had no idea that what they were building would be used for its current application. Maybe they had cost, or technological constraints that are now invisible to you.

Understanding the history of our software and systems can help us shape them into the future, avoid past mistakes, and improve our understanding of the domain at the current point in time.

It has been particularly interesting to see first hand how things play out with tech over an extended period of time, such as

  • How early design decisions enable or limit longevity
  • TDDed codebases supporting substantial change safely for many years
  • Hot new hyped tech becoming tech nobody wants to touch
  • Tech being used for drastically different purposes to what it was built for
  • Code that is habitable and is “lived-in” out of necessity remaining easily maintainable for many years
  • Highly reliable and valuable systems suffering from operational-underload. Having little need to change they fade from memory to the point that no-one knows how to make a change when it’s needed.
  • Seeing the industry change rate outpace the rate at which software can be refactored.

Sticking around at the same place for a while makes it possible to observe all this happening. Even if you haven’t had the luxury of being a witness to the history, it’s an interesting exercise to dig through artifacts such as code, systems, documents, as well as speaking to those who were there to understand how things got to where they are today.

Witness Lifecycle of Practices

It’s been interesting to observe the cycle of teams trying new things to work more effectively. It often goes something like

  1. Frustration with the ineffectiveness of an aspect of how the team is working
  2. Experiment proposed
  3. Adoption of new working practice
  4. Cargo culted as “how we work”
  5. The original intent is forgotten
  6. The practice changes as people copy what they observe imperfectly
  7. Context changes
  8. The practice is no longer helpful; we keep doing it anyway out of habit
  9. Repeat

It seems to be relatively easy to communicate traditions and rituals through time—the things that we do that can be observed by new colleagues.

It appears much harder to retain organisational memory of the intent behind practices. This can lead to practices being continued after they stop being useful, or being twisted into a semblance of the original practice that doesn’t achieve the same benefits.

This happens on trivial things e.g. a team found they were recording meetings just because other teams were doing so, even though no-one was listening to their recordings.

It also happens in more dangerous contexts—we observed our practice of continuous deployment drifting from a safe, tight feedback loop to a fire and forget strategy of hope. Newcomers had observed regular, confident deploys, but missed the checking and responding part of the feedback loops.

Even well documented XP practices are not immune to this: the practice of continuous integration becoming synonymous with tooling and then used to support isolation rather than integration. TDD becoming synonymous with writing tests first rather than a feedback loop—creating resistance to refactoring rather than enabling it.

Various things help teams pick up on these sort of problems and adapt, but it takes longer to recognise there’s a problem when intent has been forgotten.

Our teams have regular retrospectives with facilitators from other teams. We’ve encouraged blogging & speaking about the way we work, both internally and externally. We even have a team of coaches who work to help teams continuously improve.

None of these are sufficient. I think where we’ve been most effective at retaining both practices and understanding of intent is where there’s a clear narrative that can be retold to new people in the team, e.g. tales of wins originating from Gold Cards (20% time) help people to understand why they’re valuable.

Sticking in the same place for a while gives the luxury of remembering the original intent behind working practices. Even if you’re new to a team it’s worth questioning things the team is doing, rather than assuming there’s a good reason; try to understand the intent and see if it’s still achieving that today. Fresh eyes are valuable too.

Observe Teams Grow

Seeing the same organisation at different stages of growth is quite interesting: observing practices that worked at one scale cease to be effective.

It’s easy to look at things that work at other organisations and assume that they’ll work where you are as well. However, it’s enlightening to see things that used to work in your own organisation cease to work because the context has changed.

Take deployment strategies: when all your users are within earshot you can maybe just shout that there’s going to be an outage and see if anyone objects. At a larger scale, zero-downtime deployments become important. When risk is higher, things like canary deploys and blue-green deployments become necessary (if you want to continue to deliver continuously).

Take communication: if the team is small and co-located perhaps everyone can know what’s going on through osmosis. As the team grows, more deliberate communication is needed to keep people informed. As scale increases, more and more effort is needed to distil meaning from the noise of information.

Safely Explore Different Roles

Sticking in one place for a while affords one the luxury of not having to learn a new tech stack, domain, and culture. There’s of course plenty to learn just to keep up with the pace of change within the same tech stack and domain, but enough remains constant to create space for other learning.

For me it created space to learn leadership skills, change management skills, people management skills, coaching skills, facilitation skills and more.

In a supportive organisation it may even be possible to try out different sorts of roles without risking being out of a job if it doesn’t work out. Charity Majors’ post on the engineer manager pendulum really resonates with me. I’ve enjoyed the opportunity to switch between very different roles within product development over the past few years. Others have even switched between BizDev, Adops, Product, Data and Development roles.

The last few years

I’ve been privileged to work for a supportive company that has provided me with opportunities without hopping around. I’ve had the honour of working with many brilliant people from whom I’ve learnt a great deal.

In the last nine years I’ve made many mistakes, and lived to correct them. I’ve helped build products that failed, and helped turn them into a success. I’ve hurt people, and been forgiven. I’ve created conflicts, and resolved them. I’ve seen code become legacy, and salvaged it. I’ve caused outages, and recovered from them.

I’m not suggesting that everyone should stick at the same place for a long time, just that it can be fulfilling if you find yourself in a place as great as Unruly.

Posted by & filed under XP.

At Unruly we have a quarterly whole-company hack day that we call Oneruly day. Hackdays allow the whole company to focus on one thing for a day.

Unlike our 20% time, which is time for individuals to work on what is most important to them, Hackdays are time for everyone to rally around a common goal.

In product development we run this in true Unruly style: avoiding rules or control. We do have a lightweight process that seems to work well for self-organisation in a group this size (~50 people).

Self Organisation

During the week in the run up to Oneruly day we set up a whiteboard in the middle of the office with the topic written on. Anyone with an idea of something related to that topic that we could work on writes it on an oversized postit note and pops it up on the board.

On the day itself there’s usually a last minute flurry of ideas added to the board, and the whole product development team (some 50-60 people) all gather around. We go through the ideas one by one. The proposer pitches their idea for around 60 seconds, explaining why it’s important/interesting, and why others might want to work on it.

Once we’ve heard the pitches, the proposers take their oversized postit and spread out, everyone else goes and joins one of the people with postits—forming small teams aligned around an interest in a topic.

Each group then finds a desk/workstation to use and starts discussing what they want to achieve & the best way of going about it.

This is facilitated by our pair-programming friendly office space—having workstations that are all set up the same, with large desks and plenty of space for groups to gather round, in any part of the office.

Usually each group ends up self-organising into either a large mob (multiple developers all working on the same thing, at the same time, on the same workstation), or a couple of smaller pairs or mobs (depending on the tasks at hand). Sometimes people will decide that their group has too many people, or that they’re not adding value, and go and help another group instead.

Teams will often split up to investigate different things, explore different options or tackle sub-problems, and then come back together later.

Inspiring Results

We usually wrap up with a show and tell for the last hour of the day. It’s pretty inspiring to see the results from a very few hours’ work…

  • There are great ideas that are commercially strong and genuinely support the goal.
  • We get improvements all the way from idea to production.
  • People step up and take on leadership roles, regardless of their seniority.
  • We learn things in areas that we wouldn’t normally explore.
  • People work effectively in teams that didn’t exist until that morning.
  • There’s a variety of activities from lightweight lean-startup style experiments to improving sustainability of existing systems.

All this despite the lack of any top down direction other than choosing the high level theme for the day.

What can we learn?

Seeing such a large group consistently self-organise to achieve valuable outcomes in a short space of time begets the question: How much are our normally more heavyweight processes and decision-making stifling excellence rather than improving outcomes?

What rules (real or imaginary) could we get rid of? What would happen if we did?

Hackdays are a great opportunity to run a timeboxed experiment of a completely different way of working.

Posted by & filed under Java, Testing, XP.

End to end automated tests written with Webdriver have a reputation for being slow, unreliable (failing for spurious reasons), and brittle (breaking with any change).

So much so that many recommend not using them. They can become a maintenance burden, making it harder, rather than easier, to make changes to the user interface.

However, these tests can be invaluable. They can catch critical bugs before they hit production. They can identify browser-specific bugs, are implementation-agnostic, can check invariants, be used for visual approval tests, can even be used for production monitoring, not to mention retrofitting safety to poorly tested systems.

Despite their reputation, these tests can be kept reliable, fast, and maintainable. There’s no “one weird trick”—it’s mostly a matter of applying the same good practices and discipline that we ought to be applying to any automated tests; end to end tests really turn up the pain from doing it wrong.

Avoid

When I asked a few people for their top tip for writing reliable, fast, and maintainable webdriver tests, the most common suggestion was, simply…

“Don’t”

They are indeed hard to write well, they are indeed expensive to maintain, and there are easier, better testing tools for checking behaviour.

So don’t use them if you don’t need them. They are easier to retrofit later if you change your mind than most other forms of automated testing.

Certainly they don’t replace other types of automated tests. Nor can they be a replacement for manual exploratory testing.

Often subcutaneous testing (testing just under the UI layer) can be sufficient to cover important behaviours—if you are disciplined about keeping logic out of your UI.

Unfortunately, that’s particularly hard with web tech, where the presentation itself is often complex enough to need testing; behaviour that works perfectly in a simulated environment or in most browsers can still fail spectacularly in just one browser.

We often see the pain of maintaining end to end tests, but there’s also lots of value…

Tackling Risk

I work in adtech, where the real user experience in real browsers is really, really important.

This might sound like an odd statement: who likes ads? Who would mind if they didn’t work?

I’m sure you can remember a poor user experience with an ad. Perhaps it popped up in front of the content you were trying to read, perhaps it blasted sound in your ears and you had to go hunting through your tabs to find the culprit.

I’m guessing these experiences didn’t endear you to the brand that was advertising? User experience is important, politeness is important. Impolite ads come not only from intentionally obnoxious advertisers, but from bugs, and even browser specific bugs.

We also have an elevated risk, we’re running code out in the wild, on publisher pages, where it interacts with lots of other people’s code. There’s lots that could go wrong. We have a heavy responsibility to avoid any possibility of breaking publisher pages.

However simple our UI, we couldn’t take the risk of not testing it.

Extra Value

If you have invested in end to end tests, there’s lots of opportunities for extracting extra value from them, beyond the obvious.

Multi-device

Once a test has been written, that same test case can be run across multiple browsers & devices. Checking that behaviour has at least some semblance of working on different devices can be incredibly valuable to increase confidence in changes.

Who has time and money to manually test every tiny change with a plethora of devices? Even if you did, how slow would it make your team, do you want to measure your release lead time in minutes or months?

Approval Tests

Webdriver tests don’t actually check that a user is able to complete an action—they check whether a robot can; they won’t always pick up on visual defects that make a feature unusable.

Approval Tests can help here. Approval tests flag a change in a visual way that a person can quickly evaluate to either approve or reject the change.

We can store a known-good screenshot of how a feature should look, and then automatically compare it to a screenshot generated by a testcase. If they differ (beyond agreed tolerances), flag the change to somebody to review.

Webdriver can take screenshots, and can be easily integrated with various approval tests tools & services. If you have an existing suite of webdriver tests, using a selected few for visual approval tests can significantly reduce risk.

Approval tests are deliberately brittle, you don’t want many of them. They require someone to manually intervene every time there’s a change. However, they can really help spot unexpected changes.

Legacy

Not everyone is fortunate enough to get to work with systems with high levels of automated test coverage. For those who aren’t, tests that drive the UI provide a mechanism for adding some automated test coverage without invasive changes to the application to introduce seams for testing.

Even a few smoke end to end tests for key workflows can significantly increase a team’s confidence to make changes. Lots of diagnosis time can be saved if breakages are identified close to the point in time at which they were introduced.

Invariants

With a suite of end to end tests, one can check invariants—things that should be true in every single testcase; including things that would be hard to test in other ways. These can be asserted in the test suite or with hooks like junit rules, without modifying each testcase.
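For example, an invariant can be packaged as a rule that runs after every test. A minimal sketch, assuming the driver exposes browser console logs (not every driver does) and that we never want a test to leave SEVERE errors behind:

import static org.junit.Assert.assertTrue;
 
import java.util.List;
import java.util.logging.Level;
import java.util.stream.Collectors;
import org.junit.rules.ExternalResource;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;
 
public class NoSevereConsoleErrors extends ExternalResource {
    private final WebDriver driver;
 
    public NoSevereConsoleErrors(WebDriver driver) {
        this.driver = driver;
    }
 
    @Override
    protected void after() {
        // Invariant checked after every test, without touching the testcases themselves.
        List<LogEntry> severe = driver.manage().logs().get(LogType.BROWSER).getAll().stream()
                .filter(entry -> entry.getLevel().equals(Level.SEVERE))
                .collect(Collectors.toList());
        assertTrue("Unexpected browser console errors: " + severe, severe.isEmpty());
    }
}

Each test class then declares it with @Rule and the invariant is checked everywhere.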

Sound

People understandably really don’t like it when they get unsolicited sound while they’re browsing.

By capturing the audio sent to the sound device during every webdriver test execution we are able to assert that we don’t have any features that unintentionally trigger sound.

Security

Preexisting test suites can be run with a proxy attached to the browser, such as OWASP ZAP, and the recordings from the proxy can be used to check for common security vulnerabilities.

Download Size

Rules such as “no page may be over 1MB in total size” can be added as assertions across every test.

Implementation Independent

We have webdriver tests that have survived across multiple implementations & technology stacks.

Desired behaviours often remain the same even when the underlying technology changes.

Webdriver tests are agnostic to the technology used for implementation, and can live longer as a result.

They can also provide confidence that behaviour is unchanged during a migration to a new technology stack. They support incremental migration with the strangler pattern or similar techniques.

Production Monitoring

End to end tests usually check behaviour that should exist and work in production. We usually run these tests in an isolated environment for feedback pre-production.

However, it’s possible to run the same test suites against the production instances of applications and check that the behaviour works there. Often just by changing the URL your tests point to.
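For example, the base URL can come from a system property rather than being hard-coded in the tests (a sketch; the property name is made up):

// -Dtest.baseUrl=https://www.example.com points the same suite at production
String baseUrl = System.getProperty("test.baseUrl", "http://localhost:8080");
driver.get(baseUrl + "/login");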

This unlocks extra value—there’s so many reasons that features may not work as expected in production, regardless of whether your application is “up”.

It does require you to find a way to isolate your test data in production, to avoid your tests polluting your production environment.

Inventory Cost

Browser based tests can be made reasonably reliable and kept reasonably fast, but they do have a significant inventory cost. The more tests we have, the more time we need to invest in keeping them reliable and fast.

A 0.01% failure rate might be tolerable with 10 tests but probably isn’t with 1,000 tests.

Testcases that take 5 seconds each to run might be tolerable with 10 tests, but probably aren’t with 1,000 tests (unless they parallelise really well).

There’s also a maintenance cost to keeping the tests working as you change your application. It takes effort to write your tests such that they don’t break with minor UI changes.

The cost of tests can spiral out of control to the point that they’re no longer a net benefit. To stay on top of it requires prioritising test maintenance as seriously as keeping production monitoring checks working; it means deleting tests that aren’t worth fixing “right now” lest they undermine our confidence in the whole suite.

Reliability

End to end tests have a reputation for being unreliable, for good reason.

They’re difficult to get right due to asynchronicity, and have to be tolerant of failure due to the many moving parts and unreliable infrastructure they tend to depend upon.

Test or Implementation?

One of the most common causes of flakey tests is a non-deterministic implementation. It’s easy to blame the test for being unreliable when it fails one in a hundred times.

However, it’s just as likely, if not more likely, to be your implementation that is unreliable.

Could your flakey test be caused by a race condition in your code? Does your code still work when network operations are slow? Does your code behave correctly in the face of errors?

Good diagnostics are essential to answer this question; see below.

Wait for interactivity

A common cause of the tests themselves being unreliable seems to be failing to wait for elements to become interactive.

It’s not always possible to simply click on an element on the page, the element might not have been rendered yet, or it might not be visible yet. Instead, one should wait for an element to become visible and interactive, and then click on it.

These waits should be implicit, not explicit. If you instruct your test to sleep for a second before attempting to click a button, that might work most of the time, but will still fail when there’s a slow network connection. Moreover, your test will be unnecessarily slow most of the time when the button becomes clickable in milliseconds.

WebDriver provides wait APIs that allow you to wait for a condition to be true before proceeding. Under the hood they poll for the condition.

I prefer defining a wrapper around these waits that allows using a lambda to check a condition – it means we can say something like

waitUntil(confirmationMessage::isDisplayed);

Under the hood this polls a page object to check whether the message is displayed or not, and blocks until it is (or a timeout is reached)
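One possible shape for such a wrapper, sketched with Selenium’s WebDriverWait (it assumes a WebDriver field and a ten second timeout; newer Selenium versions take a Duration instead of seconds):

import java.util.function.Supplier;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;
 
public class Waits {
    private final WebDriver driver;
 
    public Waits(WebDriver driver) {
        this.driver = driver;
    }
 
    // Polls the condition until it returns true, or times out after 10 seconds.
    public void waitUntil(Supplier<Boolean> condition) {
        new WebDriverWait(driver, 10).until(ignored -> condition.get());
    }
}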

Wait, don’t Assert

We’re used to writing assertions in automated tests like

assertEquals("Hello World", confirmationMessage.text());

or

assertThat(confirmationMessage.text(), is("Hello World"));

This kind of assertion tends to suffer from the same problem as failing to wait for interactivity. It may take some amount of elapsed time before the condition you wish to assert becomes true.

It’s generally more reliable to wait until a condition becomes true in the future, and fail with an assertion error if a timeout is hit.

It can help make this the general pattern by combining the waiting and the assertion into a single step.

waitUntilEquals("Hello World", confirmationMessage::text);

Poll confirmationMessage.text() until it becomes equal to Hello World, or a timeout is reached.

This means your tests will continue to pass, even if it takes some time to reach the state you wish to assert.
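The combined wait-and-assert can live in the same sort of wrapper. A sketch, again assuming a WebDriver field named driver; on timeout it rethrows as an assertion failure with the last value seen:

public <T> void waitUntilEquals(T expected, Supplier<T> actual) {
    try {
        new WebDriverWait(driver, 10).until(ignored -> expected.equals(actual.get()));
    } catch (org.openqa.selenium.TimeoutException timeout) {
        throw new AssertionError(
            "Expected <" + expected + "> but last saw <" + actual.get() + ">", timeout);
    }
}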

Stub Dependencies

Browser-controlling tests can be unreliable because they rely on unreliable infrastructure and third parties.

We once discovered that the biggest contributor to test flakiness was our office DNS server, which was sometimes not resolving dns requests correctly.

If your tests load resources (images, javascript, html, etc) over the internet, you rely on infrastructure outside your control. What happens if there is packet loss? What happens if the server you’re loading assets from has a brief outage? Do your tests all fail?

The most reliable option seems to be to host the assets your browser tests load on the same machine that the tests are running on, so there is no network involved.

Sometimes you have requests to hardcoded URIs in your application, that can’t be easily changed to resolve to localhost for testing purposes. An HTTP proxy server like browsermob can be used to stub out HTTP requests to resolve to a local resource for test purposes. Think of it like mocking dependencies in unit tests.
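The wiring can be as simple as pointing the browser at a local proxy via Selenium’s Proxy capability (a sketch assuming Chrome and a stubbing proxy already listening on localhost:8090; the browsermob-specific setup is omitted):

// Route the browser's traffic through a local proxy that serves stubbed responses.
Proxy stubbingProxy = new Proxy();
stubbingProxy.setHttpProxy("localhost:8090");
stubbingProxy.setSslProxy("localhost:8090");
 
ChromeOptions options = new ChromeOptions();
options.setCapability(CapabilityType.PROXY, stubbingProxy);
 
WebDriver driver = new ChromeDriver(options);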

Quarantine and Delete

Tests that are unreliable are arguably worse than missing tests. They undermine your confidence in the test suite. It doesn’t take many flakey tests to change your default reaction on seeing a failing test from “Something must be broken” to “Oh the tests are being unreliable”

To avoid this erosion of confidence, it’s important to prioritise fixing problematic tests. This may mean deleting the test if it’s not possible to make it reliable within the amount of time it’s worth spending on it. It’s better to delete tests than live with non-determinism.

A downside to “just” deleting non-deterministic tests is that you lose the opportunity to learn what made them non-deterministic, which may apply to other tests that you have not yet observed being flakey.

An alternative is quarantining the failing tests, so they no longer fail your build when non-deterministic, but still run on a regular basis to help gather more diagnostics as to why they might be failing.

This can be done in JUnit with rules, where you annotate the test method as @NonDeterministic and the framework retries it.

It’s possible to have the tests fail the build if they fail deterministically (i.e. if the feature is genuinely broken), but collect diagnostics if they fail and subsequently pass (non-deterministically).

@Test
@NonDeterministic
public void my_unreliable_test() {
 
}
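A sketch of what such a rule might look like (the @NonDeterministic annotation and the diagnostics hook are our own, not part of JUnit):

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;
 
public class QuarantineRule implements TestRule {
    @Override
    public Statement apply(Statement base, Description description) {
        if (description.getAnnotation(NonDeterministic.class) == null) {
            return base; // tests without the annotation fail the build as usual
        }
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                try {
                    base.evaluate();
                } catch (Throwable firstFailure) {
                    recordDiagnostics(firstFailure); // e.g. console logs, HAR, screenshots
                    base.evaluate(); // failing twice is treated as a genuine, deterministic failure
                }
            }
        };
    }
 
    private void recordDiagnostics(Throwable failure) {
        // raise a ticket / attach artifacts; left out of this sketch
    }
}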

This approach needs to be combined with discipline. e.g. collecting the test failures in tickets that the team treats as seriously as a broken build. If these failures are ignored the non-determinism will just increase until the test suite doesn’t work at all.

Diagnosis is harder the longer you leave between introducing a problem and fixing it, and your buggy approach may end up getting proliferated into other tests if you leave it in place.

Diagnostics

It’s hard to work out why our tests are unreliable if all we get out as diagnostics is the occasional assertion error or timeout.

This is a particular problem when tests only fail one time in a thousand runs; we don’t get to see them fail, we have only the diagnostics we were prescient enough to collect.

This means it’s particularly important to gather as much diagnostic information as possible each time a test fails. In particular, I’ve found it useful to collect

  • Browser JS console output
  • HTTP requests made by the test (HAR file)
  • Screenshots captured between steps in the test

This information could simply be logged as part of your test run. I’ve used JUnit rules to tag this information onto test failure messages by wrapping the AssertionErrors thrown by junit.

public class AdditionalDiagnostics extends RuntimeException {
 
    public AdditionalDiagnostics(Browser browser, Throwable e) {
        super(
            e.getMessage() +
            consoleLog(browser) +
            httpRequests(browser) +
            collectedScreenshots(browser),
            e
        );
    }
 
}

This gives us a lot of information to diagnose what’s gone on. It’s not as good as having a browser open with devtools to investigate what’s going on, but it’s pretty good.

You could even record the entire test run as a video that can be reviewed later, there are services that can do this for you.

Stress testing new tests

Given it’s very easy to write unreliable webdriver tests, it’s a good idea to run a new test many times before pushing your changes.

I’ve found a junit rule handy for this too, to re-run the test many times and fail the test run if the test fails a single time.

@ReliabilityCheck(runs=1000)

Another approach is to use JUnit’s Parameterized test feature to generate many repetitions.
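For example (a sketch that abuses the Parameterized runner purely to repeat the new test; 100 runs is an arbitrary choice):

import java.util.Collection;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
 
@RunWith(Parameterized.class)
public class NewTestStressTest {
 
    @Parameters
    public static Collection<Object[]> repetitions() {
        return IntStream.range(0, 100)
                .mapToObj(i -> new Object[] { i })
                .collect(Collectors.toList());
    }
 
    public NewTestStressTest(int repetition) {
    }
 
    @Test
    public void my_new_webdriver_test() {
        // the new test under scrutiny; a single failure fails the whole run
    }
}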

Harder problems

Alas, not all causes of non-determinism in webdriver tests are straightforward to fix. Once you’ve resolved the most common issues you may still experience occasional failures that are outside your control.

Browser Bugs

Browser bugs sometimes cause the browsers to spontaneously crash during test runs.

This can sometimes be mitigated by building support into your tests for restarting browsers when they crash—if you can detect it.

Headless browsers seem less prone to crashing, but also may not yet support everything you might want to test. Headless Chrome still has issues with proxies, extensions, and video playback at the time of writing.

Treat like Monitoring

Everything from buggy graphics drivers, to lying DNS servers, to slow clocks, to congested networks can cause unexpected test failures.

A production system is never “up”. It is in a constant state of degradation. The same applies to end to end tests to some extent, as they also tend to rely on infrastructure and many moving parts.

When we build production monitoring we take this into account. It’s unrealistic to say things must be up. Instead we look for our system to be healthy. We tolerate a certain amount of failure.

A 0.01% failure rate may be tolerable to the business; what’s the cost? If it’s someone viewing a tweet the cost of failure is probably acceptable. If it’s a transfer of a million dollars it’s probably not. We determine the failure rate that’s acceptable given the context.

We can apply that to our tests as well. If a 1% failure rate is acceptable for a test and it happens to fail once, perhaps that’s acceptable as long as it passes the next 100 times in a row; a small infrastructure blip can cause an occasional one-off failure.

You can achieve this kind of measurement and control with junit rules as well: run tests multiple times, measure their failure rate, and check that it’s within a tolerable level.

A benefit of treating your tests like production monitoring checks, is that you can also re-use them as production monitoring checks. Don’t you want to know whether users can successfully log-in in production as well as in your test environment? (See above)

Speed

Writing a lot of automated tests brings a lot of nice-to-have problems. End to end tests are relatively slow as tests go. It doesn’t need many tests before running them starts to get tediously slow.

One of the main benefits of automated tests is that they enable agility, by letting you build, deploy, release, experiment—try things out quickly with some confidence that you’re not breaking important things.

If it takes you hours, or even just several minutes, to run your test suite then you’re not learning as fast as you could, and not getting the full benefits of test automation. You’ll probably need to do something else while you wait for production feedback rather than getting it straight away.

It is possible to keep test suites fast over time, but like with reliability, it requires discipline.

Synchronicity

A sometimes unpopular, but effective way to incentivise keeping test suites fast is to make them (and keep them) a synchronous part of your development process.

As developers we love making slow things asynchronous so that we can ignore the pain. We’ll push our changes to a build server to run the tests in the background while we do something else for an hour.

We check back in later to find that our change has broken the test suite, and now we’ve forgotten the context of our change.

When tests are always run asynchronously like this, there’s little incentive to keep them fast. There’s little difference between a 5 min and a 15min test run, even an hour.

On the other hand, if you’re sitting around waiting for the tests to run to inform the next change you want to make, then you feel the pain when they slow down and have a strong incentive to keep them fast—and fast tests enable agility.

If your tests are fast enough to run synchronously after each change then they can give you useful feedback that truly informs the next thing you do: Do you do that refactoring because they’re green, or fix the regression you just introduced?

Of course this only works if you actually listen to the pain and prioritise accordingly. If you’re quite happy sitting around bored and twiddling your thumbs then you’ll get no benefit.

Delete

Tests have an inventory cost. Keeping them around means we have to keep them up to date as things change, keep them reliable, and do performance work to keep our entire test suite fast.

Maybe the cost of breaking certain things just isn’t that high, or you’re unsure why the test exists in the first place. Deleting tests is an ok thing to do. If it’s not giving more value than its cost then delete it.

There’s no reason our test suites only have to get bigger over time; perhaps we can trim them. After all, your tests are only covering the cases you’ve thought about testing anyway; we’re always missing things. Which of the things we are testing are really important not to break?

Monitoring / Async Tests

I argued above that keeping tests fast enough that they can be part of a synchronous development feedback loop is valuable. However, maybe there’s some tests that are less important, and could be asynchronous—either as production monitoring or async test suites.

Is it essential that you avoid breaking anything? Is there anything that isn’t that bad to break? Perhaps some features are more important than others? It might be really crucial that you never release a change that calculates financial transactions incorrectly, but is it as crucial that people can upload photos?

How long could you live with any given feature being broken for? What’s the cost? If half of your features could be broken for an hour with minimal business impact, and you can deploy a change in a few minutes, then you could consider monitoring the health of those features in production instead of prior to production.

If you can be notified, respond, and fix a production problem and still maintain your service level objective, then sometimes you’re better off not checking certain things pre-production if it helps you move faster.

On the other hand if you find yourself regularly breaking certain things in production and having to roll back then you probably need to move checks the other way, into pre-production gates.

Stubbing Dependencies

Stubbing dependencies helps with test reliability—eliminating network round trips eliminates the network as a cause of failure.

Stubbing dependencies also helps with test performance. Network round trips are slow, eliminating them speeds up the tests. Services we depend on may be slow, if that service is not under test in this particular case then why not stub it out?

When we write unit tests we stub out slow dependencies to keep them fast, we can apply the same principles to end to end tests. Stub out the dependencies that are not relevant to the test.

Move test assets onto the same machine that’s executing the tests (or as close as possible) to reduce round trip times. Stub out calls to third party services that are not applicable to the behaviour under test with default responses to reduce execution time.

Split Deployables

A slow test suite for a system is a design smell. It may be telling us that the system has too many responsibilities and could be split up into separate independently deployable components.

The web is a great platform for integration. Even the humble hyperlink is a fantastic integration tool.

Does all of your webapp have to be a single deployable? Perhaps the login system could be deployed separately to the photo browser? Perhaps the financial reporting pages could be deployed separately to the user administration pages?

Defining smaller, independent components that can be independently tested and deployed, helps keep the test suites for each fast. It helps us keep iterating quickly as the overall system complexity grows.

It’s often valuable to invest in a few cross-system integration smoke tests when breaking systems apart like this.

Parallelise

The closest thing to a silver bullet for end to end test performance is parallelisation. If you have 1,000 tests that take 5 seconds each, but you can run all 1,000 in parallel, then your test suite still only takes a few seconds.

This can sometimes be quite straightforward, if you avoid adding state to your tests then what’s stopping you running all of them in parallel?

There are, however, some roadblocks that appear in practice.

Infrastructure

On a single machine there’s often a fairly low limit to how many tests you can execute in parallel, particularly if you need real browsers as opposed to headless. Running thousands of tests concurrently in a server farm also requires quite a bit of infrastructure setup.

All that test infrastructure also introduces more non-deterministic failure scenarios that we need to be able to deal with. It may of course be worth it if your tests are providing enough value.

AWS lambda is very promising for executing tests in parallel, though currently limited to headless browsers.

State

Application state is a challenge for test parallelisation. It’s relatively easy to parallelise end to end tests of stateless webapp features, where our tests have no side-effect on the running application. It’s more of a challenge when our tests have side effects such as purchasing a product, or signing-up as a new user.

The result of one test can easily affect another by changing the state in the application. There’s a few techniques that can help:

Multiple Instances

Perhaps the conceptually simplest solution is to run one instance of the application you’re testing for each test runner, and keep the state completely isolated.

This may of course be impractical. Spinning up multiple instances of the app and all its associated infrastructure might be easier said than done—perhaps you’re testing a legacy application that can’t easily be provisioned.

Side-Effect Toggles

This is a technique that can also be used for production monitoring. Have a URL parameter (or other way of passing a flag to your application under test) that instructs the application to avoid triggering certain side effects. e.g. ?record_analytics=false

This technique is only useful if the side effects are not necessary to the feature that you’re trying to test. It’s also only applicable if you have the ability to change the implementation to help testing.

Application Level Isolation

Another approach is to have some way of isolating the state for each test within the application. For example, each test could create itself a new user account, and all data created by that user might be isolated from access by other users.

This also enables cleanup after the test run by deleting all data associated with the temporary user.
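A sketch of how that isolation might look as a JUnit rule; the accounts API here is a placeholder for however your application creates and removes users:

import java.util.UUID;
import org.junit.rules.ExternalResource;
 
public class TemporaryUser extends ExternalResource {
    private final AccountsApi accounts; // placeholder for your application's account management
    private String username;
 
    public TemporaryUser(AccountsApi accounts) {
        this.accounts = accounts;
    }
 
    @Override
    protected void before() {
        username = "test-" + UUID.randomUUID();
        accounts.createUser(username); // each test run gets its own isolated user
    }
 
    @Override
    protected void after() {
        accounts.deleteUserAndAllData(username); // cleanup doubles as the isolation boundary
    }
 
    public String username() {
        return username;
    }
}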

This can also be used for production monitoring if you build in a “right to be forgotten” feature for production users. However, again it assumes you have the ability to change the implementation to make it easier to test.

Maintainability

Performance is one of the nice-to-have problems that comes from having a decently sized suite of end to end tests. Another is maintainability over the long term.

We write end to end tests to make it easier to change the system rapidly and with confidence. Without care, the opposite can be true. Tests that are coupled to implementations create resistance to change rather than enabling it.

If you re-organise your HTML and need to trawl through hundreds of tests fixing them all to match the new page structure, you’re not getting the touted benefits, you might even be better off without such tests.

If you change a key user journey such as logging into the system and as a result need to update every test then you’re not seeing the benefits.

There are two patterns that help avoid these problems: the Page Object Pattern and the Screenplay Pattern.

Really, both of these patterns describe what emerges if you were to ruthlessly refactor your tests—factoring out unnecessary repetition and creating abstractions that add clarity.

Page Objects

Page Objects abstract your testcases themselves away from the mechanics of locating and interacting with elements on the page. If you’ve got strings and selectors in your test cases, you may be coupling your tests to the current implementation.

If you’re using page objects well, then when you redesign your site, or re-organise your markup you shouldn’t have to update multiple testcases. You should just need to update your page objects to map to the new page structure.

// directly interacting with page
driver.findElement(By.id("username")).sendKeys(username);
 
// using a page object
page.loginAs(username);

I’ve seen this pay off: tests written for one ad format being entirely re-usable with a built-from-scratch ad format that shared behaviours. All that was needed was re-mapping the page objects.

Page objects can be a win for reliability. There are fewer places to update when you realise you’re not waiting for a component to become interactive. A small improvement to your page objects can improve many tests at once.
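A page object for the login example might look something like this sketch (selectors and timeout are illustrative), keeping both the locators and the waiting out of the testcases:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
 
public class LoginPage {
    private final WebDriver driver;
 
    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }
 
    public void loginAs(String username, String password) {
        // The page object owns the selectors and the waiting, so the tests don't have to.
        new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("username")))
                .sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}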

Screenplay Pattern

For a long time our end to end testing efforts were focused on Ads—with small, simple, user journeys. Standard page objects coped well with the complexity.

When we started end to end testing more complex applications we took what we’d learnt the hard way from our ad tests and introduced page objects early.

However, this time we started noticing code smells—the page objects themselves started getting big and unwieldy, and we were seeing repetition of interactions with the pageobjects in different tests.

You could understand what the tests were doing by comparing the tests to what you see on the screen—you’d log in, then browse to a section. However, they were mechanical, they were written in the domain of interacting with the page, not using the language the users would use to describe the tasks they were trying to accomplish.

That’s when we were introduced to the screenplay pattern by Antony Marcano (tests written in this style tend to read a little like a screenplay).

There are other articles that explain the screenplay pattern far more eloquently than I could. Suffice to say that it resolved many of the code smells we were noticing applying page objects to more complex applications.

Interactions & Tasks become small re-usable functions, and these functions can be composed into higher level conceptual tasks.

You might have a test where a user performs a login task, while another test might perform a “view report” task that composes the login and navigation tasks.

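Sketching the flavour with made-up names (not the API of any particular screenplay library), interactions and tasks become small functions over the driver that compose into higher level tasks:

interface Task {
    void performAs(WebDriver actor);
 
    static Task inSequence(Task... steps) {
        return actor -> {
            for (Task step : steps) {
                step.performAs(actor);
            }
        };
    }
}
 
// small, re-usable tasks
Task login = actor -> {
    actor.findElement(By.id("username")).sendKeys("test-user");
    actor.findElement(By.id("login")).click();
};
Task openMonthlyReport = actor -> actor.findElement(By.linkText("Monthly report")).click();
 
// a higher level task composed from the smaller ones
Task viewMonthlyReport = Task.inSequence(login, openMonthlyReport);
viewMonthlyReport.performAs(driver);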

Unruly has released a little library that emerged when we started writing tests in the screenplay pattern style, and there’s also the gold standard of Serenity BDD.

Summary

End to end tests with webdriver present lots of opportunities—reducing risks, checking across browsers & devices, testing invariants, and reuse for monitoring.

Like any automated tests, there are performance, maintainability, and reliability challenges that can be overcome.

Most of these principles are applicable to any automated tests, with end to end tests we tend to run into the pain earlier, and the costs of test inventory are higher.

Posted by & filed under Java.

Having benefited from “var” for many years when writing c#, I’m delighted that Java is at last getting support for local variable type inference in JDK 10.

From JDK 10 instead of saying

ArrayList<String> foo = new ArrayList<String>();

we can say

var foo = new ArrayList<String>();

and the type of “foo” is inferred as ArrayList<String>

While this is nice in that it removes repetition and reduces boilerplate slightly, the real benefits come from the ability to have variables with types that are impractical or impossible to represent.

Impractical Types

When transforming data it’s easy to be left with intermediary representations of the data that have deeply nested generic types.

Let’s steal an example from a c# linq query that groups a customer’s orders by year and then by month.

While Java doesn’t have LINQ, we can get fairly close thanks to lambdas.

from(customerList)
    .select(c -> tuple(
        c.companyName(),
        from(c.orders())
            .groupBy(o -> o.orderDate().year())
            .select(into((year, orders) -> tuple(
                year,
                from(orders)
                    .groupBy(o -> o.orderDate().month())
            )))
    ));

While not quite as clean as the c# version, it’s relatively similar. But what happens when we try to assign our customer order groupings to a local variable?

CollectionLinq<Tuple<String, CollectionLinq<Tuple<Integer, Group<Integer, Order>>>>> customerOrderGroups =
   from(customerList)
   .select(c -> tuple(
       c.companyName(),
       from(c.orders())
           .groupBy(o -> o.orderDate().year())
           .select(into((year, orders) -> tuple(
               year,
               from(orders)
                   .groupBy(o -> o.orderDate().month())
           )))
   ));

Oh dear, that type description is rather awkward. The Java solutions to this have tended to be one of

  • Define custom types for each intermediary stage—perhaps here we’d define a CustomerOrderGroup type.
  • Chaining many operations together—adding more transformations onto the end of this chain
  • Lose the type information

Now we don’t have to work around the problem, and can concisely represent our intermediary steps

var customerOrderGroups =
   from(customerList)
   .select(c -> tuple(
       c.companyName(),
       from(c.orders())
           .groupBy(o -> o.orderDate().year())
           .select(into((year, orders) -> tuple(
               year,
               from(orders)
                   .groupBy(o -> o.orderDate().month())
           )))
   ));

Impossible Types

The above example was impractical to represent due to being excessively long and obscure. Some types are just not possible to represent without type inference as they are anonymous.

The simplest example is an anonymous inner class

var person = new Object() {
   String name = "bob";
   int age = 5;
};
 
System.out.println(person.name + " aged " + person.age);

There’s no type that you could replace “var” with in this example that would enable this code to continue working.

Combining with the previous linq-style query example, this gives us the ability to have named tuple types, with meaningful property names.

var lengthOfNames =
    from(customerList)
        .select(c -> new Object() {
            String companyName = c.companyName();
            int length = c.companyName().length();
        });
 
lengthOfNames.forEach(
    o -> System.out.println(o.companyName + " length " + o.length)
);

This also means it becomes more practical to create and use intersection types by mixing together interfaces and assigning to local variables

Here’s an example mixing together a Quacks and Waddles interface to create an anonymous Duck type.

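Something along these lines (a sketch: Runnable is mixed in purely to give the lambda a single abstract method to target):

interface Quacks {
    default void quack() {
        System.out.println("Quack");
    }
}
 
interface Waddles {
    default void waddle() {
        System.out.println("Waddle");
    }
}
 
// Runnable contributes the single abstract method a lambda needs; var captures the intersection type.
var duck = (Runnable & Quacks & Waddles) () -> {};
duck.quack();
duck.waddle();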

This has more practical applications, such as adding behaviours onto existing types, a la extension methods.

Encouraging Intermediary Variables

It’s now possible to declare variables with types that were previously impractical or impossible to represent.

I hope that this leads to clearer code as it’s practical to add variables that explain the intermediate steps of transformations, as well as enabling previously impractical techniques such as the above.


A Russian translation of this post has been provided at Softdroid

Posted by & filed under XP.

How does your team prioritise work? Who gets to decide what is most important? What would happen if each team member just worked on what they felt like?

I’ve had the opportunity to observe an experiment: over the past 8 years at Unruly, developers have had 20% of their time to work on whatever they want.

This is not exactly like Google’s famed 20% time for whatever “will most benefit Google”, nor is it “120% time” on top of a full workload.

Instead, developers genuinely have 20% of their time (typically a day a week) to work on whatever they choose—whatever they deem most important to themselves. There are no rules, other than the company retains ownership of anything produced (which does not preclude open sourcing).

We call 20% time “Gold Cards” after the Connextra practice it’s based upon. Initially we represented the time using yellow coloured cards on our team board.

It’s important to us—if the team fails to take close to 20% of their time as gold cards, it will be raised in retrospectives and considered a problem to address.

While it may seem like an expensive practice, it’s an investment in individuals that I’ve seen really pay off, time after time.

Antidote to Prioritisation Systems

If you’re working in a team, you’ll probably have some mechanism for making prioritisation decisions about what is most important to work on next; whether that be a benevolent dictatorship, team consensus, voting, cost of delay, or something else.

However much you like and trust the decision making process in your team, does it always result in the best decisions? Are there times when you thought the team was making the wrong decision and you turned out to be right?

Gold cards allow each individual in the team time to work on things explicitly not prioritised by the team, guilt free.

This can go some way to mitigating flaws in the team’s prioritisation. If you feel strongly enough that a decision is wrong, then you can explore it further on your gold card time. You can build that feature that you think is more important, or you can create a proof-of-concept to demonstrate an approach is viable.

This can reduce the stakes in team prioritisation discussions, taking some of the stress away; you at least have your gold card time to allocate how you see fit.

Here are some of the ways it’s played out.

Saving Months of Work

I can recall multiple occasions when gold card activities have saved literally team-months of development work.

Avoiding Yak Shaving

One was a classic yak-shaving scenario. Our team discovered that a critical service could not be easily reprovisioned, and to make matters worse, was over capacity.

Fast forward a few weeks and we were no longer just reprovisioning a service, but creating a new base operating system image for all our infrastructure, a new build pipeline for creating it, and attempting to find/build alternatives for components that turned out to be incompatible with this new software stack.

We were a couple of months in, and estimated another couple of months’ work to complete the migration.

We’d retrospected a few times and thought we’d fully considered our other options; we were best off just ploughing on through the long, but now well-understood, path to completion.

Someone disagreed, and decided to use their gold card to go back and re-visit one of the early options the team thought they’d ruled out.

Within a day they’d demonstrated a solution to the original problem using our original tech stack, without needing most of the yak shaving activities.

Innovative Solutions

I’ve also seen people spotting opportunities in their gold cards that the team had not considered, saving months of work.

We had a need to store a large amount of additional data. We’d estimated it would take the team some months to build out a new database cluster for the anticipated storage needs.

A gold card spent looking for a way to compress the data ended up yielding a solution that let us store the data indefinitely using our existing infrastructure.

Spawning new Products

Gold cards give people space to innovate: time to try new things and to explore wild ideas that might otherwise be too early.

Our first mobile-web-compatible ad formats came out of a gold card. We had mobile-compatible ads considerably before we had enough mobile traffic to make them seem worthwhile.

Someone wanted to spend their gold card time working on mobile, which meant we had a product ready to go when mobile use increased; we weren’t playing catch-up.

On another occasion a feature we were exploring had a prohibitively large download size for the bandwidth available at the time. A gold card yielded a far more bandwidth-efficient mechanism, contributing to the success of the product.

“How hard can it be?”

It’s easy to underestimate the complexity involved in building new features. “How hard can it be?” is often a dangerous phrase, uttered before discovering just how hard it really is, or embroiling oneself in large amounts of work.

Gold cards make this safe. If it’s hard enough that you can’t achieve it in your gold card, then you’ve only spent a small amount of time, and only your own discretionary time.

Gold cards also make it easy to experiment—you don’t need to convince anyone else that it will work. Sometimes, just sometimes, things actually turn out to be as easy, or even easier, than our hopes.

For a long time we had woeful reporting capabilities on our financial data. The team believed that importing this data to our data warehouse would be a large endeavour, involving redesigning our entire data pipeline.

A couple of developers disagreed, and decided to spend their gold card time working together on making this data reportable. They came up with a simple solution that was compatible with the existing technology and has withstood the test of time. Huge value unlocked from just one day spent speculatively.

That thing that bothers you

Whether it’s a code smell you want to get rid of, some UX debt that irritates you every time you see it, or the lack of automation in a task you perform regularly; there are always things that irritate us.

We ought to be paying attention to these irritations and addressing them as we notice them, but sometimes the team has deemed something else is more important or urgent.

Gold cards give you an opportunity to fix the things that matter to you. Not only does this help avoid frustration, but sometimes individuals fixing things they find annoying actually produces better outcomes than the wisdom of the crowd.

On one occasion a couple of developers spent their gold card just deleting code; they ended up removing thousands of unneeded lines. Has this cleanup paid off yet? I honestly don’t know, but it may well have done; we carry less inventory cost as a result.

Exploring New Tech

When tasked with solving a problem, we have a bias towards tools & technology that we know and understand. This is generally a good thing: exploring every option is often costly, and if we pick something new the team has to learn it before we become productive.

Sometimes this means we miss out on tech that makes our lives much easier.

People often spend their gold card time playing around with speculative new technologies that they’re unfamiliar with.

Much of the tech our teams now rely upon was first investigated and evangelised by someone who tried it out in gold card time; from build systems to monitoring tools, from databases to test frameworks.

Learning

Tech changes fast; as developers we need to be constantly learning to stay competitive. Sometimes this presents a conflict of interest between the team’s need to achieve a goal (it’s safer to use known, reliable technology) and your desire to work with new, cutting-edge tech.

Gold cards allow you to prioritise your own learning for at least a day a week. It’s great for you, and it’s great for the team too as it brings in new ideas, techniques, and skills. It’s an investment that increases the skill level of the team over time.

Do you feel like you’d be able to be a better member of the team if you understood the business domain better? What if you knew the programming language you’re working in to a deeper level? If these feel important to you, then gold cards give you dedicated time that you can choose to spend in that way, without needing anyone else’s approval.

Sharing Knowledge

Some people use gold card time to prepare talks they want to give at conferences, or internally at our fortnightly tech-talks. Others write blog posts.

Sharing in this way not only helps others internally, but also gives back to the wider community. It raises people’s individual profiles as excellent developers, and raises the company’s profile as a potential employer.

Furthermore, many find that preparing content in this way improves their own understanding of a topic.

We’re so keen on this that we now give people extra days for writing blog posts.

Remote Working

Many of our XP practices work really well in co-located teams, but we’ve struggled to apply them to remote working. It’s definitely possible to do things like pair and mob-programming remotely, but it can be challenging for teams used to working together in the same space.

We’ve found that gold card time presented an easy opportunity to experiment with remote working—an opportunity to address some of the pain points as we look for ways to introduce more flexibility.

Remote working makes it easier to hire, and helps avoid excluding people who would be unable to join us without this flexibility.

Side Projects

Sometimes people choose to work on something completely unrelated to work, like a side project, a game, or a new app. This might not seem immediately valuable to the team, but it’s an opportunity for people to learn in a different context—gaining experience in greenfield development, starting a project from scratch, and choosing technologies.

The more diverse our team’s experience & knowledge, the more likely we are to make good decisions in the future. Change is a constant in the industry—we won’t be working with the tech we’re currently using indefinitely.

Side projects bring some of this learning forward and in-house; we get new perspectives without having to hire new people.

Gold cards allow people to grow without expecting them to spend all their evenings and weekends writing code, encouraging a healthy work/life balance.

Sometimes a change is just what one needs. We spend a lot of our time pair programming; pairing can be intense and tiring. Gold cards give us an opportunity to work on something completely different at least once a week.

Open Source

Most of what we’re working on day to day is not suitable for open sourcing, or would require considerable work to open up.

Gold cards mean we can choose to spend some of our time working on open source software—giving back to the community by working on existing open source code, or working on opening up internal tools.

Hiring & Retention

Having the freedom to spend a day a week working on whatever you want is a nice perk. Offering it helps us hire, and makes Unruly a hard place to leave. The flexibility introduced by gold cards to do the kinds of things outlined above also contribute towards happiness and retention.

Given the costs of recruitment, hiring, onboarding & training, gold cards are worth considering as a perk even if you got none of the other benefits described in these anecdotes.

Pitfalls

One trap to avoid is only doing the activities outlined above on gold card days. Many of the activities above should be things the team is doing anyway.

I’ve seen teams start to rely on gold cards—not cleaning things up as a matter of course during their day-to-day work, because they expect someone will want to do it on their gold card.

I’ve seen teams not set time aside for learning & exploring because they rely on people spending their gold cards on it.

I’ve seen teams ineffectually ploughing ahead with their planned work without stepping back to try to spike some alternative solutions.

These activities should not be restricted to gold cards. Gold cards just give each person the freedom to work on what is most important to them, rather than what’s most important to the team.

There’s also the opposite challenge: new team members may not realise the full range of possible uses for gold cards. Gold card use can drift over time to focus more and more on one particular activity, becoming seen as “Learning days” or “Spike days”.

Gold cards seem to be most beneficial when they are used for a wide variety of activities, helping the team notice the benefits of things they hadn’t seen as important.

Gold card time doesn’t always pay off, but it only has to pay off occasionally to be worthwhile.

Can we turn it up?

We learn from extreme programming to look for things that are good and turn them up to the max, to get the most value out of them.

If gold cards can bring all these benefits, what would happen if we made them more than 20% time?

Can we give individuals more autonomy without losing the benefits of other things we’ve seen to work well?

What’s the best balance between individual autonomy and the benefits of teams working collaboratively, pair programming, team goals, and stakeholder prioritisation?

We’ve turned things up a little: giving people extra days for conference speaking and blogging, carving out extra time for code dojos, talk preparation, and learning.

I’m sure there’s more we can do to balance the best of individuals working independently, with the benefits of teams.

What have you tried that works well?