Filed under Java.

Some time ago I wrote a post about creating an embedded dsl for Html in Java. Sadly, it was based on an abuse of lambda name reflection that was later removed from Java.

I thought I should do a follow-up because a lot of people still visit the old article. While it's no longer possible to use lambda parameter names in this way, we can still get fairly close.

The following approach is slightly less concise. That said, it does have some benefits over the original:

a) You no longer need to have parameter name reflection enabled at compile time.

b) The compiler can check your attribute names are valid, and you can autocomplete them.

What does it look like? 

       title("Hello Html World"),
       meta($ -> $.charset = "utf-8"),
       link($ -> { $.rel = stylesheet; $.type = css; $.href = "/my.css"; }),
       script($ -> { $.type = javascript; $.src = "/some.js"; }),
       div($ -> $.cssClass = "article",
           a($ -> $.href = "",
               span($ -> $.cssClass = "label", "Click Here"),
               img($ -> { $.src = "/htmldsl2.png"; $.width = px(25); $.height = px(25); })
           ),
           p(span("some text"), div("block"))
       )

This generates the following HTML:

   <title>Hello Html World</title>
   <meta charset="utf-8" />
   <link rel="stylesheet" type="css" href="/my.css" />
   <script type="text/javascript" src="/some.js" ></script>
   <div class="article">
     <a href="">
       <span class="label">Click Here</span>
       <img src="/htmldsl2.png" width="25" height="25" />
     </a>
     <p><span>some text</span><div>block</div></p>
   </div>

You get nice autocompletion, and feedback if you specify inappropriate values.

You'll also get a helping hand from the types to not put tags in inappropriate places.

Generating Code

As it’s Java you can easily mix other code to generate markup dynamically:

           <meta charset="utf-8" />
           <p>Paragraph one</p>
           <p>Paragraph two</p>
           <p>Paragraph three</p>

is generated by:

               meta($ -> $.charset = "utf-8"),
               Stream.of("one", "two", "three")
                   .map(number -> "Paragraph " + number)
                   .map(content -> p(content))

And the code can help you avoid injection attacks by escaping literal values: 

           <meta charset="utf-8" />
           <p>&lt;script src="attack.js"&gt;&lt;/script&gt;</p>

is generated by:

               meta($ -> $.charset = "utf-8"),
               p("<script src=\"attack.js\"></script>")

How does it work?

There's only one "trick" here that's particularly useful for DSLs: the Parameter Objects pattern from my lambda type references post.

The lambdas used for specifying the tag attributes are "aware" of their own types, and capable of instantiating the configuration they specify.

When we call 

meta($ -> $.charset="utf-8")

we make a call to

default Meta meta(Parameters<Meta> params, Tag... children) {}

The lambda specifying the attribute config is structurally equivalent to the Parameters<Meta> type. This provides a get() function that instantiates an instance of Meta, and then passes the new instance to the lambda function to apply the config.

public interface Parameters<T> extends NewableConsumer<T> {
   default T get() {
       T t = newInstance();
       accept(t);
       return t;
   }
}

Under the hood the newInstance() method uses reflection to examine the SerializedLambda contents and find the type parameter (in this case “Meta”) before instantiating it.
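Here's a rough, self-contained sketch of the idea. The names and details are illustrative rather than the library's actual code; the key assumption is that the lambda interface extends Serializable, so the compiler generates a writeReplace method we can interrogate:

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Consumer;

class Meta { public String charset; }

interface NewableConsumer<T> extends Consumer<T>, Serializable {
    @SuppressWarnings("unchecked")
    default T newInstance() {
        try {
            // Serializable lambdas get a compiler-generated writeReplace method
            // that returns a SerializedLambda describing the lambda.
            Method writeReplace = getClass().getDeclaredMethod("writeReplace");
            writeReplace.setAccessible(true);
            SerializedLambda lambda = (SerializedLambda) writeReplace.invoke(this);
            // The instantiated method type looks like "(Lcom/example/Meta;)V",
            // naming the concrete type the lambda was declared against.
            String signature = lambda.getInstantiatedMethodType();
            String typeName = signature.substring(2, signature.indexOf(';')).replace('/', '.');
            return (T) Class.forName(typeName).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}

public class NewInstanceSketch {
    public static void main(String[] args) {
        NewableConsumer<Meta> config = $ -> $.charset = "utf-8";
        Meta meta = config.newInstance(); // reflectively instantiates a Meta
        config.accept(meta);             // then the lambda applies the config
        System.out.println(meta.charset);
    }
}
```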

You can follow the code or see the previous post which explains it in a bit more detail.

Add Mixins

It’s helpful to use interfaces as mixins to avoid having to have one enormous class with all the builder definitions. 

public interface HtmlDsl extends
   Img.Dsl {}

Each tag definition then contains its own builder methods. We compose them together into a single HtmlDsl interface for convenience. This saves having to import hundreds of different methods. By implementing the Dsl interface a consumer gets access to all the builder methods.
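A stripped-down sketch of the mixin idea (simplified to plain strings here; the real library composes Tag types):

```java
// Each tag keeps its builder in a nested Dsl interface; HtmlDsl mixes them all in.
interface Div {
    interface Dsl {
        default String div(String... children) {
            return "<div>" + String.join("", children) + "</div>";
        }
    }
}
interface Span {
    interface Dsl {
        default String span(String... children) {
            return "<span>" + String.join("", children) + "</span>";
        }
    }
}
interface HtmlDsl extends Div.Dsl, Span.Dsl {}

// Implementing HtmlDsl gives access to every tag's builder methods at once.
public class MixinSketch implements HtmlDsl {
    String page() {
        return div(span("hello"));
    }

    public static void main(String[] args) {
        System.out.println(new MixinSketch().page());
    }
}
```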

Show me the code

It’s all on github. I’d start from the test examples. Bear in mind that it’s merely a port of the old proof of concept to a slightly different approach. I hope it helps illustrate the technique. It’s in no way attempting to be a complete implementation.

This approach can also be useful as an alternative to the builder pattern for passing a specification or configuration to a method. There’s another example on the type references article.

What else could you use this technique for?

Filed under XP.

“How was your day?” “Ugh, I spent all day in meetings, didn’t get any work done!” 

How often have you heard this exchange?

It makes me sad because someone’s day has not been joyful; work can be fun. 

I love a whinge as much as the next Brit; maybe if we said what we mean rather than using the catch-all “meetings” we could make work joyful.

Meetings are work

Meetings are work. It’s a rare job where you can get something done alone without collaborating with anyone else. There are some organisations that thrive with purely async communication. Regardless, if you’re having meetings let’s recognise that they are work. 

What was it about your meeting-full day that made you sad? It doesn’t have to be that way.

Working together can be fun

I’ve seen teams after a day of ensemble (mob) programming. Exhausted, yet elated at the amount they’ve been able to achieve together; at the breakthroughs they’ve made. Yet a group of people, working together, on the same thing, sounds an awful lot like a meeting. Aren’t those bad‽

Teams who make time together for a full day of planning, who embrace the opportunity to envision the future together, can sometimes come away filled with hope. Hope that better things are within their grasp than they previously believed possible.

Yet the more common experience of meetings seems synonymous with “waste of time” or “distraction from real work”. Why is this? Why weren’t they useful?

One team's standup can be an energising way to kick off the day. Hearing interesting things we collectively learned since yesterday. Deciding together what that means for today's plan: who will work with whom on what?

For another team it may be a depressing round of status updates that shames people who feel bad that they’ve not achieved as much as they’d hoped.

How do we make meetings better?

A first step is talking about what did or didn’t work, rather than accepting they have to be this way. Because there’s no “one weird trick” that will make your meetings magical. You’ll need to find what works for your team.

Why should you care? You probably prefer fun work. If you could make your meetings a little more fun you might enjoy your work a lot more.

Meetings beget meetings. Running out of time. Follow-ups. Clarifying things that were confusing from the first meeting… Ineffective meetings breed. Tolerating bad meetings leads to more misery.

Saying what we mean

Here are some things we could say that are more specific:

We didn’t need a meeting for that

Was it purely a broadcast of information with no interactivity? Could we have handled it asynchronously via email/irc/slack etc?

I didn’t need to be there

No new information for you? Nothing you could contribute? If you're not adding value, how about applying the law of mobility and leaving, or opting out in future? Or feed back to the organiser; maybe they're seeing value you're adding that you're oblivious to.

I don’t know what that meeting was for

How about we clarify the goal for next time, and make it a ground rule for future meetings? If it's worth everyone making time for, it's worth stating the purpose clearly so people can prepare and participate.

It wasn’t productive

Was the meeting to make a decision and we came out without either deciding anything or learning anything? 

Was the meeting to make progress towards a shared goal, and it feels like we talked for an hour and achieved nothing?

Perhaps we’d benefit from a facilitator next time.

It was boring

Can we try mixing up the format? Could you rotate the facilitator to get different styles? How can you engage everyone? Or does the boredom indicate that the topic is not worth a meeting?

If it is important but still boring how do we make it engaging? It’s telling that “workshop” “retrospective” “hackathon” and other more specific names don’t have the same connotation as the catch-all “meetings”. Just giving the activity a name shows that someone has thought about an appropriate activity that will engage the participants to achieve a goal.

I needed more time to think

Could we share proposals for consideration beforehand? Suggest background reading to enable people to come prepared? Allocate time for reading and thinking in the meeting?

It was too long

We could have achieved the same outcome in 5 minutes but ended up talking in circles for an hour. 

I didn’t hear from ____

Did we exclude certain people from the conversation, intentionally or unintentionally? What efforts can you make to create a space for everyone to participate?

Not enough focus time

Do you need to defragment your calendar? Cramming in activities that need deep focus in gaps between meetings is not a recipe for success. Do you need to ask your manager for help rescheduling meetings that you can’t control? Should you be going to them all or can you trust someone else to represent you?

Too many context switches

Even if you don’t need focus time, context switching from one meeting to another can be exhausting. Are you or your team actively involved in too many different things? Can you say no to more? Can you work with others and reschedule meetings to give each day a focus?

It wasn’t as important as other work

Maybe you’re wasting lots of time planning things you might never get to and would be better off focusing on what’s important right now? Is your whole team attending something that you could send a representative to? Perhaps reading the minutes will be enough. 

Highlight the value

We decided on a database for the next feature 

We learned how the production incident occurred

We heard the difficulty the customer is having with…

We made a plan for the day 

We shared how we halved our production lead time

We realised that our solution won’t work

We agreed some coding principles

Tackling your meetings

What’s your least valuable meeting? Which brings you the least joy? 

What’s your most valuable meeting? Which brings you the most joy? 

What's the difference between these? What made them good or bad?

Turn up the good; vote with your feet on the bad.

Meandering path towards value

Filed under ContinuousDelivery, XP.

We design systems around the size of delays that are expected. You may have seen the popular table “latency numbers every programmer should know” which lists some delays that are significant in technology systems we build.

Teams are systems too. Delays in operations that teams need to perform regularly are significant to their effectiveness. We should know what they are.

Ssh to a server on the other side of the world and you will feel frustration: the delay in the feedback loop from keypress to that character being displayed on the screen.

Here are some important feedback loops for a team, with feasible delays. I'd consider these delays tolerable by a team doing their best work (in contexts I've worked in). Some teams can do better, lots do worse.

Run unit tests for the code you're working on: < 100 milliseconds
Run all unit tests in the codebase: < 20 seconds
Run integration tests: < 2 minutes
From pushing a commit to live in production: < 5 minutes
Breakage to paging oncall: per SLO/error budget
Team feedback: < 2 hours
Customer feedback: < 1 week
Commercial bet feedback: < 1 quarter

What are the equivalent feedback mechanisms for your team? How long do they take? How do they influence your work?

Feedback Delays Matter

They represent how quickly we can learn. Keeping the delays as low as the table above means we can get feedback as fast as we have made any meaningful progress. Our tools/system do not hold us back. 

Feedback can be synchronous if you keep delays this fast. You can wait for feedback and immediately use it to inform your next steps. This helps avoid the costs of context switching.

With fast feedback loops we run tests, and fix broken behaviour. We integrate our changes and update our design to incorporate a colleague's refactoring.

Fast is deploying to production and immediately addressing the performance degradation we observe. It’s rolling out a feature to 1% of users and immediately addressing errors some of them see.

With slow feedback loops we run tests and respond to some emails while they run, investigate another bug, come back and view the test results later. At this point we struggle to build a mental model to understand the errors. Eventually we’ll fix them and then spend the rest of the afternoon trying to resolve conflicts with a branch containing a week’s changes that a teammate just merged.

With slow deploys you might have to schedule a change to production. Risking being surprised by errors reported later that week, when it has finally gone live, asynchronously. Meanwhile users have been experiencing problems for hours.

Losing Twice

As feedback delays increase, we lose twice:

a) We waste more time waiting for these operations (or worse—incur context switching costs as we fill the time waiting)

b) We are incentivised to seek feedback less often, since it is costly to do so. Thereby wasting more time & effort going in the wrong direction.

I picture this as a meandering path towards the most value. Value often isn’t where we thought it was at the start. Nor is the route to it often what we envisioned at the start.

We waste time waiting for feedback. We waste time by following our circuitous route. Feedback opportunities can bring us closer to the ideal line.

When feedback is slow it’s like setting piles of money on fire. Investment in reducing feedback delays often pays off surprisingly quickly—even if it means pausing forward progress while you attend to it.

This pattern of going in slightly the wrong direction then correcting repeats at various granularities of change. From TDD, to doing (not having) continuous integration. From continuous deployment to testing in production. From customers in the team, to team visibility of financial results.

Variable delays are even worse

In recent times you may have experienced the challenge of having conversations over video links with significant delays. This is even harder when the delay is variable. It’s hard to avoid talking over each other. 

Similarly, it's pretty bad if we know it's going to take all day to deploy a change to production. But it's far worse if we think we can do it in 10 minutes, when it actually ends up taking all day. Flaky deployment checks, environment problems, and change conflicts create unpredictable delays.

It’s hard to get anything done when we don’t know what to expect. Like trying to hold a video conversation with someone on a train that’s passing through the occasional tunnel. 

Measure what Matters 

The time it takes for key types of feedback can be a useful lead indicator on the impact a team can have over the longer term. If delays in your team are important to you why not measure them and see if they’re getting better or worse over time? This doesn’t have to be heavyweight.

How about adding a timer to your deploy process and graphing the time it takes from start to production over time? If you don’t have enough datapoints to plot deploy delay over time that probably tells you something ;)
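As a sketch of how lightweight this can be (hypothetical code, in Java for consistency with the rest of the blog; a shell one-liner around your deploy command would do just as well):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Duration;
import java.time.Instant;

public class DeployTimer {
    // Wrap the deploy step, then append "timestamp,seconds" to a CSV for graphing.
    static Duration timed(Runnable deployStep, Path log) {
        Instant start = Instant.now();
        deployStep.run();
        Duration elapsed = Duration.between(start, Instant.now());
        try {
            Files.writeString(log, start + "," + elapsed.toSeconds() + "\n",
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return elapsed;
    }

    public static void main(String[] args) {
        Duration d = timed(() -> { /* the real deploy happens here */ },
                Path.of("deploy-times.csv"));
        System.out.println("deploy took " + d.toMillis() + "ms");
    }
}
```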

Or what about a physical/virtual wall for waste. Add to a tally or add a card every time you have wasted 5 mins waiting. Make it visible. How big did the tally get each week?

What do the measurements tell you? If you stopped all feature work for a week and instead halved your lead time to production, how soon would it pay off?

Would you hit your quarterly goals more easily if you stopped sprinting and first removed the concrete blocks strapped to your feet?

What’s your experience?

Every team has a different context. Different sorts of feedback loops will be more or less important to different teams. What’s important enough for your team to measure? What’s more important than I’ve listed here?

What is difficult to keep fast? What gets in the way? What is so slow in your process that synchronous feedback seems like an unattainable dream? 

Filed under XP.

Extreme Programming describes five values: communication, feedback, simplicity, courage, and respect. I think that humility might be more important than all of these. 

Humility enables compassion. Compassion both provides motivation for and maximises the return on technical practices. Humility pairs well with courage, helps us keep things simple, and makes feedback valuable.

Humility enables Compassion 

Humility helps you respect the people you’re working with and see what they bring. We can’t genuinely respect them if we’re feeling superior; if we think we have all the answers. 

If we have compassion for our teammates (and ourselves) we will desire to minimise their suffering. 

We will want to avoid inflicting difficult merges on anyone. We will want to avoid wasting their time, or forcing them to re-work; having been surprised by our changes. The practice of Continuous Integration can come from the desire to minimise suffering in this way.

We will want those who come after us in the future to be able to understand our work—understand the important behaviour and decisions we made. We’ll want them to have the best safety net possible. Tests and living documentation such as ADRs can come from this desire. 

We’d desire the next person to have the easiest possible job to change or build upon what we’ve started, regardless of their skill and knowledge. Simplicity and YAGNI can come from this desire.

Humility and compassion can drive us to be curious: what are the coding and working styles and preferences of our team mates? What’s the best way to collaborate to maximise my colleagues’ effectiveness?

Without compassion we might write code that is easiest for ourselves to understand—using our preferred idioms and style without regard for how capable the rest of the team is to engage with it.

Without humility our code might show off our cleverness.

Humility to keep things simple 

To embrace simplicity we have to admit that we might be wrong about what we’ll need in the future.

Humility helps us acknowledge that we will find this harder and harder to maintain in the future. Even if we’re still part of the team. We all have limited capacity to deal with complexity.

We need humility to realise we will likely be wrong about what we’ll need in the future. We’ll have courage to try to predict our direction, but strive for the simplest possible code to support what we have now. This will make it easier for whomever must change it when we realise how we’re wrong. 

Humility to value feedback

To value feedback we have to admit that we might be wrong

Why pair program if you already know what is best and have nothing to learn from others? They’ll just slow you down!

Why talk with the customer regularly to understand their needs? We're the experts!

Why do user testing? Anybody could use this!

So many tech practices are about getting feedback fast so we can iterate on code, on product, and on our team ways of working. Humility helps us accept that we can be better.

Letting design emerge from the tests with TDD requires the humility to accept that we might not have the best design already in mind. We can’t have foreseen all the interactions with the rest of the code and necessary behaviours.

Humility maximises blamelessness and learning opportunities. We talk about blameless post incident reviews and retrospectives: focusing on understanding and learning from things that happen. Even if we don’t outwardly blame those involved it’s easy to feel slightly superior: that there’s no way we would have made the mistake that triggered the incident. A humble participant would have more compassion for those involved. A humble participant would see that they are themselves part of the system of people that has resulted in this outcome. There is always something to learn about the consequences of our own actions and inactions.

Humility pairs well with Courage

Courage is not overconfidence. Courage is not fearlessness. Courage is being able to do something even though it might be hard or scary.

With humility we know we are fallible and may be wrong. We courageously seek out feedback to learn as early as possible.

Deploying changes to production always carries a certain risk, even with safety nets like tests and canary deploys. (Delaying deploys creates even more risk.)

An overconfident person might avoid deploying to production until they’re finished with a large chunk of work. After all, they know what they’re doing! Figuring out how to break it down into separately deployable chunks will take more time and be inefficient.

A fearless person might fire and forget changes into production. This is a safe change after all. Click deploy; go to the pub!

A humble person, on the other hand, understands they're working with a complex system; bigger than they can fit in their head. They understand that they can't be certain of the results of their change, no matter the precautions they've taken. They have the courage to deploy anyway, acting to observe reality and find out whether their fallible prediction was correct.

Filed under Java.

A few years back I posted about how to implement state machines that only permit valid transitions at compile time in Java.

This used interfaces instead of enums, which had a big drawback—you couldn’t guarantee that you know all the states involved. Someone could add another state elsewhere in your codebase by implementing the interface.

Java 15 brings a preview feature of sealed classes. Sealed classes enable us to solve this downside. Now our interface based state machines can not only prevent invalid transitions but also be enumerable like enums.

If you’re using jdk 15 with preview features enabled you can try out the code. This is how it looks to define a state machine with interfaces.

sealed interface TrafficLight
       extends State<TrafficLight>
       permits Green, SolidAmber, FlashingAmber, Red {}
static final class Green implements TrafficLight, TransitionTo<SolidAmber> {}
static final class SolidAmber implements TrafficLight, TransitionTo<Red> {}
static final class Red implements TrafficLight, TransitionTo<FlashingAmber> {}
static final class FlashingAmber implements TrafficLight, TransitionTo<Green> {}

The new part is "sealed" and "permits". Now it becomes a compile-time failure to define a new implementation of TrafficLight, as well as the existing behaviour where it's a compile-time failure to perform a transition that traffic lights do not allow.

n.b. you can also skip the compile-time checked version and still use the type definitions to runtime-check the transitions.

Multiple transitions are possible from a state too:

static final class Pending 
  implements OrderStatus, BiTransitionTo<CheckingOut, Cancelled> {}
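The transition checking itself can be sketched roughly like this (a simplified stand-in for the original post's implementation; names assumed):

```java
import java.util.function.Supplier;

interface State<S> {}

// A state advertises its legal successor as a type parameter; asking for any
// other transition simply doesn't compile.
interface TransitionTo<Next> {
    default Next transition(Supplier<Next> next) {
        return next.get();
    }
}

sealed interface TrafficLight extends State<TrafficLight>
        permits Green, SolidAmber, Red, FlashingAmber {}
final class Green implements TrafficLight, TransitionTo<SolidAmber> {}
final class SolidAmber implements TrafficLight, TransitionTo<Red> {}
final class Red implements TrafficLight, TransitionTo<FlashingAmber> {}
final class FlashingAmber implements TrafficLight, TransitionTo<Green> {}

public class TransitionSketch {
    public static void main(String[] args) {
        SolidAmber amber = new Green().transition(SolidAmber::new); // compiles
        // new Green().transition(Red::new);  // compile error: Green only permits SolidAmber
        System.out.println(amber.getClass().getSimpleName());
    }
}
```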

Thanks to sealed classes we can also now do enum style enumeration and lookups on our interface based state machines.

sealed interface OrderStatus
       extends State<OrderStatus>
       permits Pending, CheckingOut, Purchased, Shipped, Cancelled, Failed, Refunded {}
@Test public void enumerable() {
  assertArrayEquals(
    array(Pending.class, CheckingOut.class, Purchased.class, Shipped.class, Cancelled.class, Failed.class, Refunded.class),
    State.values(OrderStatus.class));
  assertEquals(0, new Pending().ordinal());
  assertEquals(3, new Shipped().ordinal());
  assertEquals(Purchased.class, State.valueOf(OrderStatus.class, "Purchased"));
  assertEquals(Cancelled.class, State.valueOf(OrderStatus.class, "Cancelled"));
}

These are possible because JEP 360 provides a reflection API with which one can enumerate the permitted subclasses of an interface. (Side note: the JEP says getPermittedSubclasses() but the implementation seems to use permittedSubclasses().)

We can use this to add the above convenience methods to our State interface to allow the values(), ordinal(), and valueOf() lookups.

static <T extends State<T>> List<Class> valuesList(Class<T> stateMachineType) {
   return Stream.of(stateMachineType.permittedSubclasses()) // ClassDesc[] in the jdk 15 preview
       .map(State::classFromDesc) // small helper (not shown) resolving a ClassDesc to a Class
       .collect(toList());
}
static <T extends State<T>> Class<T> valueOf(Class<T> stateMachineType, String name) {
   return valuesList(stateMachineType).stream()
       .filter(c -> Objects.equals(c.getSimpleName(), name))
       .findFirst()
       .orElseThrow(IllegalArgumentException::new);
}
static <T extends State<T>, U extends T> int ordinal(Class<T> stateMachineType, Class<U> instanceType) {
   return valuesList(stateMachineType).indexOf(instanceType);
}

There are more details on how the transition checking works and more examples of where this might be useful in the original post. Code is on github.

Filed under XP.

It's become less common to hear people referred to as "resources" in recent times. There are more trendy "official vocab guidelines", but what's really changed? There are still phrases in common use that sound good but betray the same mindset.

I often hear people striving to hire and retain the best talent as if that is a strategy for success, or as if talent is a limited resource we must fight over. 

Another common one is to describe employees as your “greatest asset”.

I'd like to believe both phrases come from the good intentions of valuing people; valuing individuals as per the agile manifesto. I think these phrases betray a lack of consideration of the "…and interactions".

The implication is organisations are in a battle to win and then protect as big a chunk as they can of a finite resource called “talent”. It’s positioned as a zero-sum game. There’s an implication that the impact of an organisation is a pure function of the “talent” it has accumulated. 

People are not Talent. An organisation can amplify or stifle the brilliance of people. It can grow skills or curtail talent.

Talent is not skill. Talent gets you so far but skills can be grown. Does the team take the output that the people in it have the skill to produce? Or does the team provide an environment in which everyone can increase their skills and get more done than they could alone? 

We might hire the people with the most pre-existing talent, and achieve nothing if we stifle them with a bureaucracy that prevents them from getting anything done. Organisational scar tissue that gets in the way; policies that demotivate.

Even without the weight of bureaucracy many teams are really just collections of individuals with a common manager. The outcomes of such groups are limited by the talent and preexisting skill of the people in them. 

Contrast this with a team into which you can hire brilliant people who’ve yet to have the opportunity of being part of a team that grows them into highly skilled individuals. A team that gives everyone space to learn, provides challenges to stretch everyone, provides an environment where it’s safe to fail. Teams that have practices and habits that enable them to achieve great things despite the fallibility and limitations of the talent of each of the people in the team. 

“when you are a Bear of Very Little Brain, and you Think of Things, you find sometimes that a Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it.”—AA Milne 

While I’m a bear of very little brain, I’ve had the opportunity to be part of great teams that have taught me habits that help me achieve more than I can alone.

Habits like giving and receiving feedback. Like working together to balance each other's weaknesses and learn from each other faster. Like making predictions and observing the results. Like investing in keeping things simple so they can fit into my brain. Like working in small steps. Like scheduled reflection points to consider how to improve how we're working. Like occasionally throwing away the rules and seeing what happens.

Habits like thinking less and sensing more. 

Filed under Java.

A while back I promised to follow up from this tweet to elaborate on the fun I was having with Java’s new Records (currently preview) feature.

Records, like lambdas and default methods on interfaces, are tremendously useful language features because they enable many different patterns and uses beyond the obvious.

Java 8 brought lambdas, with lots of compelling uses for streams. What I found exciting at the time was that lots of things we'd previously have had to wait for as new language features could now become library features. While waiting for lambdas we had a Java 7 release with try-with-resources. If we'd had lambdas we could have implemented something similar in a library without needing a language change.
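For instance, a library-level stand-in for try-with-resources might have looked something like this (a hypothetical sketch, not an actual historical API):

```java
// A lambda-based "using" block: acquire the resource, use it, always close it.
interface ThrowingFunction<T, R> {
    R apply(T t) throws Exception;
}

public class Using {
    static <T extends AutoCloseable, R> R using(T resource, ThrowingFunction<T, R> use) {
        try {
            return use.apply(resource);
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            // the library, not every caller, guarantees cleanup
            try { resource.close(); } catch (Exception ignored) { }
        }
    }

    public static void main(String[] args) {
        String result = using(new java.io.StringReader("hi"),
                r -> "" + (char) r.read());
        System.out.println(result);
    }
}
```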

There’s often lots one can do with a bit of creativity. Even if Brian Goetz does sometimes spoil one’s fun ¬_¬

Records are another such exciting addition to Java. They provide a missing feature that's hard to correct for in libraries, due to sensible limitations on other features (e.g. default methods on interfaces not being able to override equals/hashCode).

Here are a few things that records help us do that would otherwise wait indefinitely to appear in the core language.

Implicitly Implement (Forwarding) Interfaces

Java 8 gave us default methods on interfaces. These allowed us to mix together behaviour defined in multiple interfaces. One use of this is to avoid having to re-implement all of a large interface if you just want to add a new method to an existing type. For example, adding a .map(f) method to List. I called this the Forwarding Interface pattern.

Using a forwarding interface still left us with a fair amount of boilerplate just to delegate to a concrete implementation. Here's a MappableList definition using a ForwardingList.

class MappableList<T> implements List<T>, ForwardingList<T>, Mappable<T> {
   private List<T> impl;
   public MappableList(List<T> impl) {
       this.impl = impl;
   }
   public List<T> impl() {
       return impl;
   }
}

The map(f) implementation is defined in Mappable<T> and the List<T> implementation is defined in ForwardingList<T>. All the body of MappableList<T> is boilerplate to delegate to a given List<T> implementation.

We can improve on this a bit using anonymous types, thanks to JDK 10's var. We don't have to define MappableList<T> at all: we can define it inline with intersection casts and structural equivalence, using a lambda that returns the delegate type.

var y = (IsA<List<String>> & Mappable<String> & FlatMappable<String> & Joinable<String>)
    () -> List.of("Anonymous", "Types");

Full implementation

This is probably a bit obscure for most people. Intersection casts aren’t commonly used. You’d also have to define your desired “mix” of behaviours at each usage site.

Records give us a better option. A record definition can implicitly implement the boilerplate in the above MappableList definition:

public record EnhancedList<T>(List<T> inner) implements
       Filterable<T, EnhancedList<T>>,
       Groupable<T> {}
interface ForwardingList<T> extends List<T>, Forwarding<List<T>> {
   List<T> inner();
}

Here we have defined a record with a single field named “inner“. This automatically defines a getter called inner() which implicitly implements the inner() method on ForwardingList. None of the boilerplate on the above MappableList is needed. Here’s the full code. Here’s an example using it to map over a list.
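A minimal self-contained version of the pattern (simplified names, assuming JDK 16+ for records and Stream.toList()):

```java
import java.util.List;
import java.util.function.Function;

// The interface needs a delegate; the record's generated accessor supplies it.
interface Forwarding<T> {
    T inner();
}

interface Mappable<T> extends Forwarding<List<T>> {
    default <R> List<R> map(Function<T, R> f) {
        return inner().stream().map(f).toList();
    }
}

// The record component "inner" generates an inner() accessor, which implicitly
// implements Forwarding.inner() with no boilerplate.
record EnhancedList<T>(List<T> inner) implements Mappable<T> {}

public class ForwardingRecordSketch {
    public static void main(String[] args) {
        System.out.println(new EnhancedList<>(List.of("a", "bb")).map(String::length));
    }
}
```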

Decomposing Records

Let’s define a Colour record

public record Colour(int red, int green, int blue) {}

This is nice and concise. However, what if we want to get the constituent parts back out again?

Colour colour = new Colour(1,2,3);
var r = colour.red();
var g = colour.green();
var b = colour.blue();
assertEquals(1, r);
assertEquals(2, g);
assertEquals(3, b);

Can we do better? How close can we get to object destructuring?

How about this.

Colour colour = new Colour(1,2,3);
colour.decompose((r,g,b) -> {
   assertEquals(1, r.intValue());
   assertEquals(2, g.intValue());
   assertEquals(3, b.intValue());
});

How can we implement this in a way that requires minimal boilerplate? Default methods on interfaces come to the rescue again. What if we could get all of this additional sugary goodness on any record, simply by implementing an interface?

public record Colour(int red, int green, int blue) 
   implements TriTuple<Colour,Integer,Integer,Integer> {}

Here we’re making our Colour record implement an interface so it can inherit behaviour from that interface.

Let’s make it work…

We’re passing the decompose method a lambda function that accepts three values. We want the implementation to invoke the lambda and pass our constituent values in the record (red, green, blue) as arguments when invoked.

Firstly let’s declare a default method in our TriTuple interface that accepts a lambda with the right signature.

interface TriTuple<TRecord extends Record & TriTuple<TRecord, T, U, V>, T, U, V> {
    default void decompose(TriConsumer<T,U,V> withComponents) {
        ...
    }
}

Next we need a way of extracting the component parts of the record. Fortunately Java allows for this. There’s a new method Class::getRecordComponents that gives us an array of the constituent parts.

This lets us extract each of the three parts of the record and pass to the lambda.

var components = this.getClass().getRecordComponents();
withComponents.accept(
    (T) components[0].getAccessor().invoke(this),
    (U) components[1].getAccessor().invoke(this),
    (V) components[2].getAccessor().invoke(this)
);

There’s some tidying we can do, but the above works. A very similar implementation would allow us to return a result built with the component parts of the record as well.

Colour colour = new Colour(1,2,3);
var sum = colour.decomposeTo((r,g,b) -> r+g+b);
assertEquals(6, sum.intValue());
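Putting the pieces together, here's a self-contained sketch of the whole idea. The TriConsumer and TriFunction helper interfaces and the exception handling are assumptions filled in for illustration; the post's full implementation may differ:

```java
// Hypothetical three-argument functional interfaces (not in the JDK)
interface TriFunction<T, U, V, R> { R apply(T t, U u, V v); }
interface TriConsumer<T, U, V> { void accept(T t, U u, V v); }

interface TriTuple<TRecord extends Record & TriTuple<TRecord, T, U, V>, T, U, V> {
    @SuppressWarnings("unchecked")
    default <R> R decomposeTo(TriFunction<T, U, V, R> f) {
        try {
            // Reflect over the record's components and invoke each accessor
            var cs = getClass().getRecordComponents();
            return f.apply(
                    (T) cs[0].getAccessor().invoke(this),
                    (U) cs[1].getAccessor().invoke(this),
                    (V) cs[2].getAccessor().invoke(this));
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    // The void variant can be defined in terms of decomposeTo
    default void decompose(TriConsumer<T, U, V> f) {
        decomposeTo((t, u, v) -> { f.accept(t, u, v); return null; });
    }
}

record Colour(int red, int green, int blue)
        implements TriTuple<Colour, Integer, Integer, Integer> {}

public class DecomposeDemo {
    public static void main(String[] args) {
        new Colour(1, 2, 3).decompose((r, g, b) ->
                System.out.println(r + ", " + g + ", " + b)); // 1, 2, 3
        System.out.println(new Colour(1, 2, 3).decomposeTo((r, g, b) -> r + g + b)); // 6
    }
}
```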

Structural Conversion

Sometimes the types get in the way of people doing what they want to do with the data. However wrong it may be ¬_¬

Let’s see if we can allow people to convert between Colours and Towns

public record Person(String name, int age, double height) 
    implements TriTuple<Person, String, Integer, Double> {}
public record Town(int population, int altitude, int established)
    implements TriTuple<Town, Integer, Integer, Integer> { }
Colour colour = new Colour(1, 2, 3);
Town town =;
assertEquals(1, town.population());
assertEquals(2, town.altitude());
assertEquals(3, town.established());

How do we implement the "to(..)" method? We've already done it! It accepts a method reference to Town's constructor, which has the same signature as the lambda our decomposeTo method above accepts. So we can just alias it.

default <R extends Record & TriTuple<R, T, U, V>> R to(TriFunction<T, U, V, R> ctor) {
   return decomposeTo(ctor);
}

Replace Property

We’ve now got a nice TriTuple utility interface allowing us to extend the capabilities that tri-records have.

Another nice feature would be to create a new record with just one property changed. Imagine we’re mixing paint and we want a variant on an existing shade. We could just add more of one colour, not start from scratch.

Colour colour = new Colour(1,2,3);
Colour changed = colour.with(Colour::red, 5);
assertEquals(new Colour(5,2,3), changed);

We’re passing the .with(..) method a method reference to the property we want to change, as well as the new value. How can we implement .with(..) ? How can it know that the passed method reference refers to the first component value?

We can in fact match by name.

The RecordComponent type from the standard library that we used above can give us the name of each component of the record.

We can get the name of the passed method reference by using a functional interface that extends Serializable. This lets us access the name of the method the lambda invokes; in this case it gives us back the name "red".

default <R> TRecord with(MethodAwareFunction<TRecord, R> prop, R newValue) { ... }

MethodAwareFunction extends another utility interface MethodFinder which provides us access to the Method invoked and from there, the name.
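The technique can be sketched as follows. This MethodAwareFunction is a simplified stand-in for the MethodAwareFunction/MethodFinder pair described above; its methodName() helper is an illustrative name, not the post's actual API:

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Function;

// A Function that is also Serializable, so the JVM generates a writeReplace
// method on the lambda class returning a SerializedLambda we can inspect.
interface MethodAwareFunction<T, R> extends Function<T, R>, Serializable {
    default String methodName() {
        try {
            Method writeReplace = getClass().getDeclaredMethod("writeReplace");
            writeReplace.setAccessible(true);
            SerializedLambda lambda = (SerializedLambda) writeReplace.invoke(this);
            // For a method reference like Colour::red this is "red"
            return lambda.getImplMethodName();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}

record Colour(int red, int green, int blue) {}

public class NameDemo {
    public static void main(String[] args) {
        MethodAwareFunction<Colour, Integer> red = Colour::red;
        System.out.println(red.methodName()); // red
    }
}
```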

The last challenge is reflectively accessing the constructor of the type we're trying to create. Fortunately we're passing the type information to our utility interface at declaration time:

public record Colour(int red, int green, int blue)
    implements TriTuple<Colour,Integer,Integer,Integer> {}

We want the Colour constructor, which we can get from Colour.class. We can obtain that class object by reflectively accessing the first type parameter of the TriTuple interface: Class::getGenericInterfaces() gives us the parameterized TriTuple type, and ParameterizedType::getActualTypeArguments() lets us take the first argument to get a Class<Colour>.
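A minimal sketch of that type-parameter lookup. This simplified TriTuple drops the other machinery, and it assumes TriTuple is the first interface the record lists:

```java
import java.lang.reflect.ParameterizedType;

interface TriTuple<TRecord, T, U, V> {
    @SuppressWarnings("unchecked")
    default Class<TRecord> recordType() {
        // getGenericInterfaces()[0] is the parameterized TriTuple<Colour, ...> type;
        // its first actual type argument is the record's own class.
        var generic = (ParameterizedType) getClass().getGenericInterfaces()[0];
        return (Class<TRecord>) generic.getActualTypeArguments()[0];
    }
}

record Colour(int red, int green, int blue)
        implements TriTuple<Colour, Integer, Integer, Integer> {}

public class TypeDemo {
    public static void main(String[] args) {
        System.out.println(new Colour(1, 2, 3).recordType().getSimpleName()); // Colour
    }
}
```

From that Class<Colour> it's a short step to the canonical constructor via reflection.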

Here’s a full implementation.

Automatic Builders

We can extend the above to have some similarities with the builder pattern, without having to create a builder manually each time.

We’ve already got our .with(namedProperty, value) method to build a record step by step. All we need is a way of creating a record with default values that we can replace with our desired values one at a time.

Person sam = builder(Person::new)
   .with(Person::name, "Sam")
   .with(Person::age, 34)
   .with(Person::height, 83.2);
assertEquals(new Person("Sam", 34, 83.2), sam);

This static builder method invokes the passed constructor reference passing it appropriate default values. We’ll use the same SerializedLambda technique from above to access the appropriate argument types.

static <T, U, V, TBuild extends Record & TriTuple<TBuild, T, U, V>> TBuild builder(MethodAwareTriFunction<T,U,V,TBuild> ctor) {
   var reflectedConstructor = ctor.getContainingClass().getConstructors()[0];
   var defaultConstructorValues = Stream.of(reflectedConstructor.getParameterTypes())
       .map(TriTuple::defaultValueFor) // hypothetical helper mapping each type to a default (0, 0.0, null, …)
       .collect(Collectors.toList());
   return ctor.apply(
       (T) defaultConstructorValues.get(0),
       (U) defaultConstructorValues.get(1),
       (V) defaultConstructorValues.get(2));
}

Once we’ve invoked the constructor with default values we can re-use the .with(prop,value) method we created above to build a record up one value at a time.

Example Usage

public record Colour(int red, int green, int blue) 
    implements TriTuple<Colour,Integer,Integer,Integer> {}
public record Person(String name, int age, double height) 
    implements TriTuple<Person, String, Integer, Double> {}
public record Town(int population, int altitude, int established) 
    implements TriTuple<Town, Integer, Integer, Integer> {}
public record EnhancedList<T>(List<T> inner) implements
    Mappable<T> {}
public void map() {
    var mappable = new EnhancedList<>(List.of("one", "two"));
    assertEquals(List.of("oneone", "twotwo"), -> s + s));
}

public void decomposable_record() {
   Colour colour = new Colour(1,2,3);
   colour.decompose((r,g,b) -> {
       assertEquals(1, r.intValue());
       assertEquals(2, g.intValue());
       assertEquals(3, b.intValue());
   });
   var sum = colour.decomposeTo((r,g,b) -> r+g+b);
   assertEquals(6, sum.intValue());
}

public void structural_convert() {
   Colour colour = new Colour(1, 2, 3);
   Town town =;
   assertEquals(1, town.population());
   assertEquals(2, town.altitude());
   assertEquals(3, town.established());
}

public void replace_property() {
   Colour colour = new Colour(1,2,3);
   Colour changed = colour.with(Colour::red, 5);
   assertEquals(new Colour(5,2,3), changed);
   Person p1 = new Person("Leslie", 12, 48.3);
   Person p2 = p1.with(Person::name, "Beverly");
   assertEquals(new Person("Beverly", 12, 48.3), p2);
}

public void auto_builders() {
   Person sam = builder(Person::new)
           .with(Person::name, "Sam")
           .with(Person::age, 34)
           .with(Person::height, 83.2);
   assertEquals(new Person("Sam", 34, 83.2), sam);
}

Code is all in this test and this other test. Supporting records with arities other than 3 is left as an exercise to the reader ¬_¬

Posted by & filed under XP.

A recent Twitter discussion reminded me of an interesting XTC discussion last year. The discussion topic was refactoring code to make it worse. We discussed why this happens, and what we can do about it.

I found the most interesting discussion arose from the question “when might this be a good thing?”—when is it beneficial to make code worse?

Refactorings are small, safe, behaviour-preserving transformations to code. Refactoring is a technique to improve the design of existing code without changing the behaviour. The refactoring transformations are merely a tool. The result may be either better or worse. 

Make it worse for you; make it better for someone else

Refactoring ruthlessly can keep code habitable, in line with our best understanding of the domain, even aesthetically pleasing.

Refactorings can also make the code worse. Whether the result is better or worse is in the eye of the beholder. What's better to one person may be worse to another. What's better for one team may be worse for another team.

For example, some teams may be more comfortable with abstraction than others. Some teams prefer code that more explicitly states how it is working at a glance. Some people may be comfortable with OO design patterns and find functional programming idioms unfamiliar, and vice versa.

You may refactor the code to a state you’re less happy with but the team as a whole prefers. 

Refactoring the code through different forms also allows for conversations to align on a preferred style in a team. After a while you can often start to predict what others on the team are going to think of a given refactoring even without asking them. 

Making refactoring a habit, e.g. as part of the TDD cycle accelerates this, as do mechanisms for fast feedback between each person in the team—such as pairing with rotation or collective group code review.

Learning through Exploration

Changing the structure of code without changing its behaviour can help to understand what the code’s doing, why it’s written in that way, how it fits into the rest of the system. 

In his book "Working Effectively with Legacy Code", Michael Feathers calls this "Scratch Refactoring": refactor the code without worrying about whether your changes are safe, or even whether they make the code better.

Then throw those refactorings away. 

Exploratory refactoring can be done even when there’s no tests, even when you don’t have enough understanding of the system to know if your change is better or worse, even when you don’t know the acceptance criteria for the system.

Moulding the code into different forms that have the same behaviour can increase your understanding of what that core behaviour is.

A sign it’s safe to take risks

If every refactoring we perform makes the code better, it seems likely that we could be more courageous in our refactoring attempts.

If we only tackle the changes where we know what better looks like and leave scary code alone the system won’t stay simple.

If we’re attempting to improve code we don’t fully understand and don’t intuitively know the right design for we’ll get it wrong some of the time. 

It’s easy to try so hard to avoid the risk of bad things happening that we also get in the way of good things happening.

Many teams use gating code review before code may make its way to production. A gate established to stop bad code getting into production also slows down good code getting to production.

Refactorings are often small steps towards a deeper insight into the domain of the code we're working on. Sometimes those steps will be in a useful direction, sometimes wrong. All of them will build up understanding in the team. Not all of them will be unquestionably better at each integration point, and they could easily be filtered out by a risk-averse code review gate. Avoiding the risk that a refactoring might be taking us down the wrong path may rob us of the chance of a breakthrough in the next refactoring, or the one after.

A team that’s not afraid to make improvements to the system will also get it wrong some of the time. That has to be ok. We learn as much or more from the failures.

Making it safe to make code worse

Extreme programming practices really help create an environment where it's safe to experiment with code in this manner.

Pair programming means you’ve got a second person to catch some of the riskiest things that could happen and give immediate feedback in the moment. It gives two perspectives on the shape the code should be in. Tom Johnson calls this optician-style “Do you prefer this… or this”. Refactorings are small changes so it’s feasible to switch back and forth between each structure to compare and consider together.

Group code review (reviewing code together as a team, after it's already in production) can build a shared understanding of what the team considers good code. It can help you foresee the preferences of the rest of your team. Between you, you can build a better understanding of the code than you could even in a pair, spot the refactoring paths you've embarked on that have made code worse rather than better, and highlight changes to make the next time you're in the area.

Continuous integration means we’re only making small steps before getting feedback from integrating the code. The size of our mistakes is limited.

Test Driven Development gives us a safety net that tells us when our refactoring may have not just changed the structure of the code but also inadvertently the behaviour. i.e. it wasn’t a refactoring. Test suites going red during a refactoring is a “surprise” we can learn from. We predict the suite will stay green. If it goes red then there’s something we didn’t fully understand about the code. Surprises are where learning happens.

Test Driven Development also makes refactoring habitual. Every micro-iteration of behaviour we perform to the system includes refactoring. Tidying the implementation, trying out another approach, simplifying the test, improving its diagnostic power (maybe not strictly a refactoring). If you never move onto writing the next test without doing at least some refactoring you’ll build up the habit and skill at refactoring fast. If you do lots of refactorings some of them will make things worse, and that’s ok. 

Posted by & filed under XP.

There are many reasons to consider hiring inexperienced software engineers into your team, beyond the commonly discussed factors of cost and social responsibility.

Hire to maximise team effectiveness; not to maximise team size. Adding more people increases the communication and synchronisation overhead in the team. Growing a team has rapidly diminishing returns.

However, adding the right people, perspectives, skills, and knowledge into a team can transform that team’s impact. Instantly unblocking problems that would have taken days of research. Resolving debates that would have paralysed. The right balance between planning and action.

It’s easy to undervalue inexperienced software engineers as part of a healthy team mix. While teams made up of entirely senior software engineers can be highly effective. There are many benefits beyond cost and social responsibility for hiring entry level and junior software engineers onto your team.

Fresh Perspectives

Experienced engineers have learned lots of so-called "best practices" or dogma. Mostly these are good habits that are safer ways of working, save time, and aid learning. On the other hand, sometimes the context has changed and these practices are no longer useful, but we carry on doing them anyway out of habit. Sometimes there's a better way now that tech has moved on, and we haven't even stopped to consider it.

There’s a lot of value in having people on the team who’ve yet to develop the same biases. People who’ll force you to think through and articulate why you do the things you’ve come to take for granted. The reflection may help you spot a better way.

To take advantage you need sufficient psychological safety that anyone can ask a question without fear of ridicule. This also benefits everyone.

Incentive for Simplicity and Safety

A team of experienced engineers may be able to tolerate a certain amount of accidental code complexity. Their expertise may enable them to work relatively safely without good test safety nets and with gaps in their monitoring. I'm sure you know better ;)

Needing to make our code simple enough for a new software engineer to understand and change exerts positive pressure on our code quality.

Having to make it safe to fail, protecting everyone on the team from being able to make a change that takes down production or corrupts data, helps us all. We're all human.

Don’t have any junior engineers? What would you do differently if you knew someone new to programming was joining your team next week? Which of those things should you be doing anyway? How many would pay back their investment even with experienced engineers? How much risk and complexity are you tolerating? What’s its cost?

Growth opportunity for others

Teaching, advising, mentoring, coaching less experienced people on the team can be a good development opportunity for others. Teaching helps deepen your own understanding of a topic. Practising your ability to lift others up will serve you well.

Level up fast

It can be humbling how swiftly new developers can get up to speed and become highly productive. Particularly in an environment that really values learning. Pair programming can be a tremendous accelerator for learning through doing. True pairing, i.e. solving problems together, rather than spoonfeeding or observing.


Amount of software engineering experience is one indicator for the impact an individual can have. Amount of experience within your organisation is also relevant. If you only hire senior people and your org is not growing fast enough to provide them with further career development opportunities they are more likely to leave to find growth opportunities. It can be easier to find growth opportunities for people earlier in their career.

A mix of seniorities can help increase the average tenure of developers in your organisation—assuming you will indeed support them with their career development.

Action over Analysis

Junior engineers often bring a healthy bias towards getting on with doing things over excessive analysis. Senior engineers sometimes get stuck evaluating foreseen possibilities, finding “the best tool for the job”, or debating minutiae ad nauseam. Balancing the desires to do the right things right, with the desire to do something, anything quickly on a team can be transformational.

Hire Faster

There’s more inexperienced people. It’s quicker to find people if we relax our experience and skill requirements. Some underrepresented minorities may be less underrepresented at more junior levels.

The inexperienced engineer you hire today could be the senior engineer you need in years to come.

To ponder

What other reasons have I missed? In what contexts is the opposite true? When would you only hire senior engineers?

Posted by & filed under ContinuousDelivery, XP.

When I ask people about their approach to continuous integration, I often hear a response like

“yes of course, we have CI, we use…”.

When I ask people about doing continuous integration I often hear “that wouldn’t work for us…”

It seems the practice of continuous integration is still quite extreme. It’s hard, takes time, requires skill, discipline and humility.

What is CI?

Continuous integration is often confused with build tooling & automation. CI is not something you have, it’s something you do.

Continuous integration is about continually integrating. Regularly (several times a day) integrating your changes (in small & safe chunks) with the changes being made by everyone else working on the same system.

Teams often think they are doing continuous integration, but are using feature branches that live for hours, days, or even weeks.

Code branches that live for much more than an hour are an indication you’re not continually integrating. You’re using branches to maintain some degree of isolation from the work done by the rest of the team.

I like the current Wikipedia definition: “continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day.”

It's worth calling out a few bits.

CI is a practice. Something you do, not something you have. You might have “CI Tooling”. Automated build/test running tooling that helps check all changes.

Such tooling is good and helpful, but having it doesn’t mean you’re continually integrating.

Often the same tooling is even used to make it easier to develop code in isolation from others. The opposite of continuous integration.

I don’t mean to imply that developing in isolation and using the tooling this way is bad. It may be the best option in context. Long lived branches and asynchronous tooling has enabled collaboration amongst large groups of people across distributed geographies and timezones.

CI is a different way of working. Automated build and test tooling may be a near universal good. (even a hygiene factor). The practice of Continuous Integration is very helpful in some contexts, even if less universally beneficial.

…all developer working copies…

All developers on the team integrating their code. Not just small changes. If bigger features are worked on in isolation for days or until they’re complete you’re not integrating continuously.

…to a shared mainline…

Code is integrated into the same branch. Often “master” or “main” in git parlance. It’s not just about everyone pushing their code to be checked by a central service. It’s about knowing it works when combined with everyone else’s work in progress, and visible to the rest of the team.

…several times a day

This is perhaps the most extreme part. The part that highlights just how unusual a practice continuous integration really is. Despite everyone talking about it.

Imagine you’re in a team of five developers, working independently, practising CI. Aiming to integrate your changes roughly once an hour. You might see 40 commits to main in a single day. Each commit representing a functional, working, potentially releasable state of the system.

(Teams I’ve worked on haven’t seen quite such a high commit rate. It’s reduced by pairing and non-coding work; nonetheless CI means high rate of commits to the mainline branch)

Working in this way is hard, requires a lot of discipline and skill. It might seem impossible to make large scale changes this way at first glance. It’s not surprising it’s uncommon.

To visualise the difference

Why CI?

Get Feedback

Why would we work in such a way? Integrating our changes will incur some overhead. It likely means taking time out every single hour to review changes so far, tidy, merge, and deal with any conflicts arising.

Continuously integrating helps us get feedback as fast as possible, like most Extreme Programming practices. It's worth practising CI if that feedback is more valuable to you than the overhead.

Team mates

We may get feedback from other team members—who will see our code early when they pull it. Maybe they have ideas for doing things better. Maybe they’ll spot a conflict or an opportunity from their knowledge and perspective. Maybe you’ve both thought to refactor something in subtly different ways and the difference helps you gain a deeper insight into your domain.


CI amplifies feedback from the code itself. Listening to this feedback can help us write more modular, supple code that’s easier to change.

If our very-small change conflicts with another working on a different feature it’s worth considering whether the code being changed has too many responsibilities. Why did it need to change to support both features? Modularity is promoted by CI creating micro-pain from multiple people changing the same thing at the same time.

Making a large-scale change to our system via small sub-hour changes forces us to take a tidy-first approach. Often the next change we want to make is hard, not possible in less than an hour. Instead of taking our preconceived path towards our preconceived design, we are pressured to first make the change we want to make easier. Improve the design of the existing code so that the change we want to make becomes simple.

Even with this approach we’re unlikely to be able to make large scale changes in a single step. CI encourages mechanisms for integrating the code for incomplete changes. Such as branch by abstraction which further encourages modularity.
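As a sketch of branch by abstraction (all names here are illustrative): introduce an abstraction over the behaviour being replaced, so old and new implementations can coexist on the mainline while the migration proceeds in small steps:

```java
// An abstraction introduced over the behaviour being replaced
interface PriceSource {
    double price(String sku);
}

// The existing implementation keeps serving traffic while work proceeds
class LegacyPriceSource implements PriceSource {
    public double price(String sku) { return 9.99; }
}

// The replacement is built up incrementally behind the same abstraction,
// integrated continuously even while incomplete
class NewPriceSource implements PriceSource {
    public double price(String sku) { return 9.99; }
}

public class Checkout {
    // Flip when the new implementation is ready; could equally be a runtime toggle
    static final boolean USE_NEW_PRICING = false;
    static PriceSource prices() {
        return USE_NEW_PRICING ? new NewPriceSource() : new LegacyPriceSource();
    }
    public static void main(String[] args) {
        System.out.println(prices().price("sku-1"));
    }
}
```

Every commit along the way compiles, passes tests, and is releasable; the switchover is a one-line change rather than a long-lived branch merge.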

CI also exerts pressure to do more and better automated testing. If we don’t have automated checks for the behaviour of our code it may break when changed rapidly.

If our tests are brittle (coupled to the current structure of the code rather than the important behaviour) then they will fail frequently when the code is changed. If our tests are slow then we'll waste lots of time running them regularly, hopefully incentivising us to invest in speeding them up.

Continuous integration of small changes exposes us to this feedback regularly.

If we’re integrating hourly then this feedback is also timely. We can get feedback on our code structure and designs before it becomes expensive to change direction.


CI is a useful foundation for continuous delivery, and continuous deployment. Having the code always in an integrated state that’s safe to release.

Continuously deploying (not the same as releasing) our changes to production enables feedback from customers, users, its impact on production health.

Combat Risk

Arguably the most significant benefit of CI is that it forces us to make our changes in small, safe, low-risk steps. Constant practice ensures it’s possible when it really matters.

It’s easy to approach a radical change to our system from the comforting isolation of a feature branch. We can start pulling things apart across the codebase and shaping them into our desired structure. Freed from the constraints of keeping tests passing or even our code compiling. Coming back to getting it working, the code compiling, and the tests compiling afterwards.

The problem with this approach is that it’s high risk. There’s a high risk that our change takes a lot longer than expected and we’ll have nothing to integrate for quite some time. There’s a high risk that we get to the end and discover unforeseen problems only at integration time. There’s a high risk that we introduce bugs that we don’t detect until after our entire change is complete. There’s a high risk that our product increment and commercial goals are missed because they are blocked by our big radical change. There’s a risk we feel pressured into rushing and sacrificing code quality when problems are only discovered late during an integration phase.

CI liberates us from these risks. Rather than embarking on a grand plan all at once, we break it down into small steps that we can complete and integrate swiftly. Steps that take only a few minutes to complete.

Eventually the accumulation of these small changes unlocks product capabilities and enables releasing value. Working in small steps becomes predictable. No longer is there a big delay from "we've got this working" to "this is ready for release".

This does not require us to be certain of our eventual goal and design. Quite the opposite. We start with a small step towards our expected goal. When we find something hard to change, we stop and change tack: first making a small refactoring to make our originally intended change easy, then going back and making the actual change.

What if we realise we’re going in the wrong direction? Well we’ve refactored our code to make it easier to change. What if we’ve made our codebase better for no reason? We’ve still won.

Collaborate Effectively

Meetings are not always popular. Especially ceremonies such as standups. Nevertheless it’s important for a team of people working towards a common goal to understand where each other have got to. To be able to react to new information, change direction if necessary, help each other out.

The more we work separately in isolation, the more costly and painful synchronisation points like standups can become. Catching each other up on big changes in order to know whether to adjust the plan.

Contrast this with everyone working in small, easy-to-digest steps, making their progress visible to everyone else on the team frequently. It's more likely that everyone already has a good idea of where the rest of the team is at, and less time must be spent catching up. When everyone on the team is aware of where everyone else has got to, the team can actually work as a team, helping each other out to reach a goal sooner.

No-one likes endless discussions that get in the way of making progress. No-one likes costly re-work when they discover their approach conflicts with other work in the team. No-one likes wasting time duplicating work. CI enables constant progress of the whole team, at a rate the whole team can keep up with.

Arguably the most extreme continuous integration is mob programming. The whole team working on the same thing, at the same time, all the time.


“but we’re making a large scale change”

We touched on this above. It’s usually possible to make a large scale change via small, safe, steps. First making the change easier, then making the change. Developing new functionality side by side in the same codebase until we’re satisfied it can replace older functionality.

Indeed the discipline required to make changes this way can be a positive influence on code quality.

“but code review”

Many teams have a process of blocking code review prior to integrating changes into a mainline branch. If this code review requires interrupting someone else every few minutes this may be impractical.

Continuous integration likely requires being comfortable with changes being integrated without such a blocking, pull-request-review-style gate.

It’s worth asking yourself why you do such review and whether a blocking approach is the only way. There are alternatives that may even achieve better results.

Pair programming means all code is reviewed at the point in time it was written. It also gives the most timely feedback, from someone else who fully understands the context. Pairing tends to generate feedback that improves the code quality. Asynchronous reviews all too often focus on whether the code meets some arbitrary bar, on minutiae such as coding style and the contents of the diff, rather than on the implications of the change for our understanding of the whole system.

Pair programming doesn’t necessarily give all the benefits of a code review. It may be beneficial for more people to be aware of each change, and to gain the perspective of people who are fresher or more detached. This can be achieved to a large extent by rotating people through pairs, but review may still be useful.

Another mechanism is non-blocking code review. Treating code review more like a retrospective. Rather than “is this code good enough to be merged” ask “what can we learn from this change, and what can we do better?”.

Consider starting each day reviewing as a team the changes made the previous day and what you can learn from them. Or stopping and reviewing recent changes when rotating who you are pair-programming with. Or having a team retrospective session where you read code together and share ideas for different approaches.

“but main will be imperfect”

Continuous integration implies the main branch is always in an imperfect state. There will be incomplete features. There may be code that would have been blocked by a code review. This may seem uncomfortable if you strive to maintain a clean mainline that the whole team is happy with and is “complete”.

Imperfection in the main branch is scary if you're used to the main branch representing the final state of code: once code is there, it's unlikely to change any time soon. In such a context being protective of it is a sensible response. We want to avoid mistakes we might need to live with for a long time.

However, an imperfect mainline is less of a problem in a CI context. What is the cost of a coding style violation that only lives for a few hours? What is the cost of temporary scaffolding (such as a branch by abstraction) living in the codebase for a few days?
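A branch by abstraction is exactly this kind of temporary scaffolding: a seam that lets a replacement grow on main in small, integrated steps instead of on a long-lived branch. A minimal sketch, with hypothetical names not taken from any particular codebase:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical branch-by-abstraction scaffolding. The interface is the
// temporary seam: the replacement grows on main behind it, and the seam
// itself is deleted once migration completes.
interface MessageStore {
    void save(String message);
    List<String> all();
}

// Existing behaviour, kept working throughout the migration.
class LegacyStore implements MessageStore {
    private final List<String> messages = new ArrayList<>();
    public void save(String message) { messages.add(message); }
    public List<String> all() { return messages; }
}

// The replacement, built up in small integrated steps. Callers switch over
// one at a time; LegacyStore (and eventually the interface) is removed last.
class ReplacementStore implements MessageStore {
    private final List<String> messages = new ArrayList<>();
    public void save(String message) { messages.add(message.trim()); }
    public List<String> all() { return messages; }
}
```

While both implementations coexist on main, the scaffolding is visible to everyone, which is rather the point: it lives in the codebase for days, not a violation that lives for months on a branch.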

CI suggests instead a habitable mainline branch: a workspace that’s being actively worked in. It’s not clinically clean, but it is a safe and useful environment to get work done in. An environment you’re comfortable spending lots of time in. How clean a workspace needs to be depends on the context. Compare a gardener’s or plumber’s work environment to a medical work environment.

“but how will we test it?”

Some teams separate the activities of software development from software testing. One pattern is testing features when each feature is complete, during an integration and stabilisation phase.

This allows teams to maintain a main branch that they think works, with uncertain work in progress in isolation.

However thorough our automated, manual, and exploratory testing, we’re never going to achieve perfect software quality. Testing at integration time might be a pattern to ensure integrated code meets some arbitrary quality bar, but it won’t be perfect either.

CI implies a different approach. Continuous exploratory testing of the main version. Continually improving our understanding of the current state of the system. Continuously improving it as our understanding improves. Combine this with TDD and high levels of automated checks and we can have some confidence that each micro change we integrate works as intended.

Again, this sort of approach requires being comfortable with main being imperfect. Or perhaps a recognition that it is always going to be imperfect, whatever we do.

“but we need to be able to do bugfixes”

Many teams work in batches. Deploying and releasing one set of features, working on more features in feature branches, then integrating, deploying, and releasing the next batch.

Under this model they can keep a branch that represents the current deployed version of the software. When an urgent bug is discovered in production they can fix it on this branch and deploy just that change.

From such a position the prospect of making a bugfix on top of a bunch of other already-integrated changes might seem alarming. What if one of our other changes causes a regression?

CI is a fundamentally different way of working, where the current state of main always captures the team’s current understanding of the most progressed, safest, least buggy system. Always deployable. Zero bugs (bugs fixed when they’re discovered). Constantly evolving through small, safe steps.

A good way to make it safe to deploy bugfixes in a CI context is to also practise continuous deployment. Every micro-change deployed to production (not necessarily released). Doing this we’ll always have confidence we can deploy fixes rapidly. We’re forced to ensure that main is always safe for bugfixes.


There are also plenty of circumstances in which CI is not feasible or not the right approach for you. Maybe you’re the only developer! Occasional integration works well for sporadic collaboration, such as spare-time open source contributions. For teams distributed across wide timezones there are fewer benefits to CI; you’re not going to get fast feedback while your colleague is asleep! You can still work in, and benefit from, small steps regardless of whether anyone is watching.

Sometimes feedback is less important than hammering out code. If you’re working on something that you could do in your sleep, and all that holds you back is how fast you can hammer out lines of code, the value of CI is much lower.

Perhaps your team is very used to working with long-lived branches, used to having the code and tests broken for extended periods while working on a problem. It’s not feasible to “just” switch to a continuous integration style; you first need to get used to working in small, safe steps.


Try it

Make “could we integrate what we’ve done?” a question you ask yourself habitually. It fits naturally into the TDD cycle: when the tests are green, consider integrating. It should be safe.

Listen to the feedback

Ok, so you tried integrating more frequently and something broke, or things were slower. Why was that, really? How could you avoid similar problems recurring while still being able to integrate regularly?

Tips when it’s hard

Combine with other Extreme Programming practices.

CI is easier in combination with the other Extreme Programming practices, not just TDD (which makes CI safer and lends a useful cadence to development).

It’s easier when pair programming: someone else helping to remember the wider context, someone to suggest stepping back and integrating a smaller set of changes before going down a rabbit hole. Pairing also improves the chances that each change is safe to make. And it’s more likely that others on the team will be happy with our change if our pair is on board.

CI is a lot easier with collective ownership. Where you are free to change any part of the codebase to make your desired change easy.

When your change is hard to do in small steps, first tackle one thing that makes it hard: “first make the change easy”.

Separate expanding and contracting. Start building your new functionality alongside the old, then migrate existing usages, and finally remove the old. Each of these can be done in several small, individually integrated steps.
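A sketch of what expand-and-contract might look like, using a hypothetical pricing example (none of these names come from the article):

```java
// Hypothetical expand-and-contract example. Each numbered step is small
// enough to integrate on its own, so main keeps working throughout.
class PriceCalculator {
    // Step 1 (expand): the new, discount-aware method is added alongside
    // the old one. Existing callers are untouched and main still works.
    int priceInPence(int quantity, int discountPercent) {
        return quantity * 100 * (100 - discountPercent) / 100;
    }

    // Step 2 (migrate): callers move to the new method one integration at
    // a time. The old method delegates, so behaviour stays consistent.
    int priceInPence(int quantity) {
        return priceInPence(quantity, 0);
    }

    // Step 3 (contract): once no callers remain, delete priceInPence(int).
}
```

Because the old and new APIs coexist, no single commit has to update every caller at once, which is what makes each step safe to integrate.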

Separate integrating and releasing. Integrating your code should not mean that the code necessarily affects your users. Make releasing a product/business decision with feature toggles.
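A feature toggle can be as simple as a boolean check at the point where the new behaviour diverges from the old. A minimal sketch with hypothetical names:

```java
import java.util.Set;

// Hypothetical feature-toggle sketch. The new checkout flow is integrated
// into main (and can be deployed), but is only released to users when the
// toggle is flipped: a product decision, decoupled from integration.
class FeatureToggles {
    private final Set<String> enabled;
    FeatureToggles(Set<String> enabled) { this.enabled = enabled; }
    boolean isEnabled(String feature) { return enabled.contains(feature); }
}

class Checkout {
    private final FeatureToggles toggles;
    Checkout(FeatureToggles toggles) { this.toggles = toggles; }

    String flow() {
        // Both code paths live on main; the toggle chooses which users see.
        return toggles.isEnabled("new-checkout") ? "new-flow" : "old-flow";
    }
}
```

In practice the enabled set would come from configuration or a toggle service rather than being hard-coded, so flipping a toggle doesn’t require a deploy.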

Invest in fast tooling. If your build and test suite takes more than five minutes, you’re going to struggle to do continuous integration. A five-minute build and test run is feasible even with tens of thousands of tests, but it does require constant investment in keeping the tooling fast. This is a cost of CI, but it’s also a benefit: CI requires you to keep the tooling you need to safely integrate and release a change fast and reliable. Something you’ll be thankful for when you need to make a change fast.

That’s a lot of work…

Unlike having CI tooling, doing CI is not for all teams. It seems uncommonly practised. In some contexts it’s impractical; in others it’s not worth the overhead. It may still be worth considering whether the feedback and risk reduction would help your team.

If you’re not doing CI and you try it out, things will likely be hard at first. You may break things. Try to reflect more deeply than “we tried it and it didn’t work”. What made it hard to work in and integrate small changes? Should you address those things regardless?