Filed under sql.

I love SQL, despite its many flaws.

Much is argued about functional programming versus object-oriented programming: different ways of instructing computers.

SQL is different. SQL is a language where I can ask the computer a question and it will figure out how to answer it for me.

Fluency in SQL is a very practical skill; it will make your life easier day to day. It’s not perfect and it has many flaws (like null), but it is in widespread use (unlike, say, Prolog or D).

Useful in lots of contexts

As an engineer, I find SQL databases often save me writing lots of code to transform data. They save me worrying about the best way to manage finite resources like memory. I write the question and the database (usually) figures out the most efficient algorithm to use, given the shape of the data right now and the resources available to process it. Like magic.

SQL helps me think about data in different ways, letting me focus on the questions I want to ask of the data, independent of the best way to store and structure it.

As a manager, I often want to measure things, to know the answer to questions. SQL lets me ask lots of questions of computers directly without having to bother people. I can explore my ideas with a shorter feedback loop than if I could only pose questions to my team.

SQL is a language for expressing our questions in a way that machines can help answer them; useful in so many contexts.

It would be grand if even more things spoke SQL. Imagine if you could ask questions in a shell instead of having to teach it how to transform data.
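Some tools already get close. For example (a sketch, assuming the sqlite3 CLI is installed and a people.csv file with a header row is to hand), you can ask a question of a CSV file straight from the shell:

```shell
# Ask a question of a CSV file without writing any transformation code.
# sqlite3 imports the CSV (the first row becomes the column names)
# into an in-memory table, then answers the query.
sqlite3 :memory: \
  -cmd '.mode csv' \
  -cmd '.import people.csv people' \
  'SELECT forename, count(*) FROM people GROUP BY forename;'
```

The file name, table name, and column name here are all assumptions for the sake of the example.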

Why do we avoid it?

SQL is terrific. So why is there so much effort expended in avoiding it? We learn ORM abstractions on top of it. We treat SQL databases as glorified buckets of data: chuck data in, pull data out.

Transforming data in application code gives a comforting amount of control over the process, but is often harder and slower than asking the right question of the database in the first place.
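As an illustration (against a hypothetical orders table, not one from this post), a single aggregate query can replace a whole fetch-loop-accumulate routine in application code:

```sql
-- Instead of fetching every order and summing totals in application code,
-- ask the database for the answer directly:
SELECT customer, sum(total) AS spend
FROM orders
GROUP BY customer
ORDER BY spend DESC;
```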

Do you see SQL as a language for storing and retrieving bits of data, or as a language for expressing questions?

Let go of control 

The database can often work out a better way of answering the question than you can.

Let’s take an identical query with three different states of data.

Here are two simple relations, a and b, each with one attribute and a single tuple.

CREATE TABLE a(id INT);
CREATE TABLE b(id INT);
INSERT INTO a VALUES(1);
INSERT INTO b VALUES(1);
EXPLAIN ANALYZE SELECT * FROM a NATURAL JOIN b;

EXPLAIN ANALYZE tells us how Postgres is going to answer our question: the operations it will take, and how expensive they are. We haven’t told it to use quicksort; it has elected to do so.

Looking at how the database is doing things is interesting, but let’s make it more interesting by changing the data. Let’s add in a boatload more values and re-run the same query.

INSERT INTO a SELECT * FROM generate_series(1,10000000);
EXPLAIN ANALYZE SELECT * FROM a NATURAL JOIN b;

We’ve used generate_series to generate ten million tuples in relation ‘a’. Note the “Sort Method” has changed to use disk, because the data set is now large relative to the resources the database has available. I haven’t had to tell it to do this. I just asked the same question, and it has figured out that it needs a different method to answer it now that the data has changed.

But actually we’ve done the database a disservice here by running the query immediately after inserting our data; it hasn’t had a chance to catch up yet. Let’s give it one by running ANALYZE on our relations to force an update to its knowledge of the shape of our data.

ANALYZE a;
ANALYZE b;
EXPLAIN ANALYZE SELECT * FROM a NATURAL JOIN b;

Now re-running the same query is a lot faster, and the approach has changed significantly: it’s now using a Hash Join instead of a Merge Join, and it has introduced parallelism to the query execution plan. It’s an order of magnitude faster. Again, I haven’t had to tell the database to do this; it has figured out an easier way of answering the question now that it knows more about the data.

Asking Questions

Let’s look at some of the building blocks SQL gives us for expressing questions. The simplest building block we have is asking for literal values.

SELECT 'Eddard';
SELECT 'Catelyn';

A value without a name is not very useful. Let’s rename them.

SELECT 'Eddard' AS forename;
SELECT 'Catelyn' AS forename;

What if we wanted to ask a question of multiple Starks: Eddard OR Catelyn OR Bran? That’s where UNION comes in. 

SELECT 'Eddard' AS forename 
UNION SELECT 'Catelyn' AS forename 
UNION SELECT 'Bran' AS forename;

We can also express things like someone leaving the family, with EXCEPT.

SELECT 'Eddard' AS forename 
UNION SELECT 'Catelyn' AS forename 
UNION SELECT 'Bran' AS forename 
EXCEPT SELECT 'Eddard' AS forename;

What about people joining the family? How can we see who’s in both families? That’s where INTERSECT comes in.

(
  SELECT 'Jamie' AS forename 
  UNION SELECT 'Cersei' AS forename 
  UNION SELECT 'Sansa' AS forename
) 
INTERSECT 
(
  SELECT 'Sansa' AS forename
);

It’s getting quite tedious having to type out every value in every query already. 

SQL uses the metaphor “table”. We have tables of data. To me that gives connotations of spreadsheets. Postgres uses the term “relation” which I think is more helpful. Each “relation” is a collection of data which have some relation to each other. Data for which a predicate is true. 

Let’s store the Starks together; they are related to each other.

CREATE TABLE stark AS 
SELECT 'Sansa' AS forename  
UNION SELECT 'Eddard' AS forename  
UNION SELECT 'Catelyn' AS forename  
UNION SELECT 'Bran' AS forename;
 
CREATE TABLE lannister AS 
SELECT 'Jamie' AS forename 
UNION SELECT 'Cersei' AS forename 
UNION SELECT 'Sansa' AS forename;

Now we have stored relations of related data that we can ask questions of. We’ve stored the facts for which “is a member of house stark” and “is a member of house lannister” are true. What if we want people who are in both houses? A relational AND. That’s where NATURAL JOIN comes in.

NATURAL JOIN is not quite the same as the set-based AND (INTERSECT, above). NATURAL JOIN will work even if the two relations we are comparing have different arities.

Let’s illustrate this by creating a relation pet with two attributes.

CREATE TABLE pet AS 
SELECT 'Sansa' AS forename, 'Lady' AS pet
UNION SELECT 'Bran' AS forename, 'Summer' AS pet;
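Now we can ask the NATURAL JOIN questions of the relations we created above:

```sql
-- Relational AND on the shared attribute (forename):
-- who is a member of both house Stark and house Lannister?
SELECT * FROM stark NATURAL JOIN lannister;
-- returns Sansa

-- Unlike INTERSECT, this works across relations of different arity:
-- which Starks have pets, and what are they called?
SELECT * FROM stark NATURAL JOIN pet;
-- returns (Sansa, Lady) and (Bran, Summer), in some order
```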

Now we have an AND, what about OR? We had a set-based OR above (UNION). I think the closest thing to a relational OR is a FULL OUTER JOIN.

CREATE TABLE animal AS 
SELECT 'Lady' AS forename, 'Wolf' AS species 
UNION SELECT 'Summer' AS forename, 'Wolf' AS species;
SELECT * FROM stark FULL OUTER JOIN animal USING(forename);

OK, so we can ask simple questions with ANDs and ORs. There are also equivalents of most of the relational algebra operations.
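For example, using the relations we already have, projection, restriction, renaming and difference all have direct equivalents:

```sql
-- Projection: keep only some attributes
SELECT forename FROM pet;
-- Restriction (selection): keep only tuples for which a predicate holds
SELECT * FROM stark WHERE forename LIKE 'S%';
-- Rename
SELECT forename AS stark_member FROM stark;
-- Difference: Starks who are not also Lannisters
SELECT forename FROM stark EXCEPT SELECT forename FROM lannister;
```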

What if I want to invade King’s Landing?

What about more interesting questions? We can do those too. Let’s jump ahead a bit.

What if we want to plan an attack on King’s Landing and need to consider the routes we could take to get there? Starting from just some facts about the travel options between locations, let’s ask the database to figure out routes for us.

First the data. 

CREATE TABLE move (place text, method text, newplace text);
INSERT INTO move(place,method,newplace) VALUES
('Winterfell','Horse','Castle Black'),
('Winterfell','Horse','White Harbour'),
('Winterfell','Horse','Moat Cailin'),
('White Harbour','Ship','King''s Landing'),
('Moat Cailin','Horse','Crossroads Inn'),
('Crossroads Inn','Horse','King''s Landing');

Now let’s figure out a query that will let us plan routes between an origin and a destination.

We don’t need to store any intermediate data; we can ask the question all in one go. Here “route_planner” is a view (a saved question).

CREATE VIEW route_planner AS
WITH RECURSIVE route(place, newplace, method, length, path) AS (
	SELECT place, newplace, method, 1 AS length, place AS path FROM move -- starting point
		UNION -- or
	SELECT -- next step on journey
		route.place,
		move.newplace,
		move.method,
		route.length + 1, -- extra step on the found route
		path || '-[' || route.method || ']->' || move.place AS path -- describe the route
	FROM move
	JOIN route ON route.newplace = move.place -- restrict to destinations reachable from the existing route
)
SELECT
	place AS origin,
	newplace AS destination,
	length,
	path || '-[' || method || ']->' || newplace AS instructions
FROM route;

I know this is a bit “rest of the owl” compared to what we were doing above. I hope it at least illustrates the extent of what is possible. (It’s based on the Prolog tutorial.) We have started from some facts about adjacent places and asked the database to figure out routes for us.

Let’s talk it through…

CREATE VIEW route_planner AS

This saves the relation that’s the result of the given query under a name. We did this above with:

CREATE TABLE lannister AS 
SELECT 'Jamie' AS forename 
UNION SELECT 'Cersei' AS forename 
UNION SELECT 'Sansa' AS forename;

While CREATE TABLE stores a static dataset, a view will re-execute the query each time we interrogate it. It’s always fresh, even if the underlying facts change.
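A quick sketch of the difference (the table and view names here are illustrative):

```sql
CREATE TABLE stark_snapshot AS SELECT * FROM stark;
CREATE VIEW stark_live AS SELECT * FROM stark;

INSERT INTO stark VALUES ('Arya');

SELECT count(*) FROM stark_snapshot; -- still the old count; the table is a static copy
SELECT count(*) FROM stark_live;     -- includes Arya; the view re-runs the query
```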

WITH RECURSIVE route(place, newplace, method, length, path) AS (...);

This creates a named portion of the query, called a “common table expression”. You could think of it like an extract-method refactoring: we’re giving part of the query a name to make it easier to understand. This also lets us make it recursive, so we can build answers on top of partial answers in order to build up our route.

SELECT place, newplace, method, 1 AS length, place AS path FROM move

This gives us all the possible starting points on our journeys. Every place we know we can make a move from. 

We can think of two steps of a journey as the first step OR the second step, so we represent this OR with a UNION.

JOIN route ON route.newplace = move.place

Once we’ve found our first and second steps, the third step is just the same—treating the second step as the starting point. “route” here is the partial journey so far, and we look for feasible connected steps. 

path || '-[' || route.method || ']->' || move.place AS path

Here we concatenate the instructions for the journey so far: take the path travelled, and append the next mode of transport and the next destination.

Finally, we select the completed journey from our complete route:

SELECT
	place AS origin,
	newplace AS destination,
	length,
	path || '-[' || method || ']->' || newplace AS instructions
FROM route;

Then we can ask the question

SELECT instructions FROM route_planner 
WHERE origin = 'Winterfell' 
AND destination = 'King''s Landing';

and get the answer

                                 instructions                                   
-------------------------------------------------------------------------------
Winterfell-[Horse]->White Harbour-[Ship]->King's Landing
Winterfell-[Horse]->Moat Cailin-[Horse]->Crossroads Inn-[Horse]->King's Landing
(2 rows)

Thinking in Questions

Learning SQL well can be a worthwhile investment of time. It’s a language in widespread use, across many underlying technologies. 

Get the most out of it by shifting your thinking from “how can I get at my data so I can answer questions?” to “how can I express my question in this language?”.

Let the database figure out how to best answer the question. It knows most about the data available and the resources at hand.

Filed under Leadership, Star Trek, XP.

Growing up, an influential television character for me was Jean-Luc Picard from Star Trek: The Next Generation.

Picard was a portrayal of a different sort of leader to most. Picard didn’t order people about. He didn’t assume he knew best.  He wasn’t seeking glory. In his words: “we work to better ourselves, and the rest of humanity”. The Enterprise was a vehicle for his crew to better themselves and society. What a brilliant metaphor for organisations to aspire to.

My current job title is “Director of engineering”. I kind of hate it. I don’t want to direct anybody. I represent them as part of my first team. People don’t report to me; I support people. My mission is to clarify so they can decide for themselves, and to help them build skills so they can succeed. 

Director is just a word, but words matter. Language matters.

Picard was an effective leader in part due to the language he used. Here’s a few lessons we can learn from the way he talked.

“Make it So!”

Picard is probably best known for saying “make it so!”

This catchphrase says so much about his leadership style. He doesn’t bark orders. He gives the crew the problems to solve. He listens to his crew and supports their ideas. His crew state their intent and he affirms (or not) their decisions. 

I think “Make it so” is even more powerful than the more common “very well” or “do it”, which are merely agreeing with the action being proposed.

“Make it so” is instead an agreement with the outcome being proposed. The crew are still free to adjust their course of action to achieve that outcome. They won’t necessarily have to come back for approval if they have to change their plan to achieve the same outcome. They understand their commander’s intent.  

And of course it’s asking for action: “wishing for a thing does not make it so”.

“Oh yes?” 

Picard’s most common phrase was not affirming decisions, but “oh yes?”. Because he was curious, he would actively listen to his crew. He sought first to understand.

It’s fitting that he says this more than “make it so”. Not everything learned requires action. It’s easy to come out of retrospectives with a long list of actions; I’d rather we learned something and took no action than took action without learning anything.

“Suggestions?” 

In “Cause and Effect” (the one with Captain Frasier Crane) there’s an imminent crisis: the Enterprise is on a collision course with another ship and is unable to maneuver. What does Picard do? Resort to command and control? No; despite the urgency he asks for suggestions from the crew, followed by a “make it so” agreement to act.

Asking for suggestions during a crisis requires enough humility to realise your crew or team is collectively smarter than you are. Picard trusts his team to come up with the best options. 

He is also willing to show vulnerability, even during a crisis. His ego doesn’t get in the way. 

In this episode, the crew did not automatically follow the suggestion of the most senior person in the room. The solution to the crisis is eventually found in the second suggestion, after they tried the first. They succeeded because they had diverged and discovered options first, before converging on a solution.

The crew were aware of the other options open to them, and when the first failed they acted to try another (successful) option. Crucially, they did not wait for their captain to approve it. There wasn’t time to let the captain make the decision, but they were free to try the other options because they’d been told to “make it so” not “do it”.

To achieve the best outcomes as a team, intentionally seek divergent ideas first before converging on a decision. Avoid converging too soon on an idea that sounds promising, when others in the group may have better ideas. If your first choice doesn’t work out you will be able to try the other ideas that came up. 

“Nicely done!”

Picard made sure his crew knew when he thought they had done well. Even when they violated orders! He was not one to blindly follow orders himself and he praised his crew when they violated orders for good reasons.

Data’s violation of orders is met not with a reprimand but with a “nicely done!”. When Data questions it, Picard responds: “The claim ‘I was only following orders’ has been used to justify too many tragedies in our history.”

How different might the tech industry be if more people carefully considered whether doing what they’ve been told is the right or wrong thing to do? 

Filed under Java.

Java 16 brings Pattern Matching for instanceof. It’s a feature with exciting possibilities, though quite limited in its initial incarnation. 

Basic Pattern Matching

We can now do things like:

Object o = "hello";
if (o instanceof String s) {
    System.out.println(s.toUpperCase());
}

Note the variable “s”, which is then used without any casting needed.
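The pattern variable is also flow scoped: it’s usable anywhere the compiler can prove the match succeeded, including after an early return from a negated test. A small sketch (the class and method names are mine for illustration):

```java
public class FlowScoping {
    static String shout(Object o) {
        if (!(o instanceof String s)) {
            return "not a string";
        }
        // s is in scope here: if o were not a String, we'd have returned already
        return s.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(shout("hello")); // HELLO
        System.out.println(shout(42));      // not a string
    }
}
```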

Deconstructing Optionals

It would be even nicer if we could deconstruct structured types at the same time. I think it’s coming in a future version, but I’m impatient. Let’s see how close we can get with the current tools.

One thing we can do is use a function on the left-hand side of this expression.

Let’s consider the example of dealing with unknown values in Java. There’s a fair amount of legacy to deal with. Sometimes you’ll get a value, sometimes you’ll get a null. Sometimes you’ll get an Optional<T>, sometimes you’ll get an Optional<Optional<T>>, and so on.

How can we make unknowns nicer to deal with?

We could create a little utility function that lets us convert all of these forms of unknown into either a value or not. Then match with an instanceof test.

Object unknown = Optional.of("Hello World");
assertEquals(
       "hello world",
       unwrap(unknown) instanceof String s
               ? s.toLowerCase()
               : "absent"
);

Thanks to instanceof pattern matching we can use the string directly, without having to resort to passing method references, e.g. optional.map(String::toLowerCase).

The unwrap utility itself uses pattern matching against Optional to recursively unwrap values from nested optionals. It also converts nulls and Optional.empty() to a non-instantiable type, ensuring they can never match the above pattern.

static Object unwrap(Object o) {
   if (o instanceof Optional<?> opt) {
       return opt.isPresent() ? unwrap(opt.get()) : None.None;
   } else if (o != null) {
       return o;
   } else {
       return None.None;
   }
}

static class None {
   private None() {}
   public static final None None = new None();
}

Here are several more examples, if you’d like to explore further.

Deconstructing Records

What about more complex structures? Now that we have record types, wouldn’t it be great if we could deconstruct them to work with individual components more easily? I think until more powerful type patterns exist in the language we’ll have to diverge from the instanceof approach.

I previously showed how we could do this for records we control, by having them implement an interface. What about records we do not control? How can we deconstruct those? 

This is about the closest I can get to what I’d hope would be possible as a first class citizen in the language in future.

record Name(String first, String last) {}
Object name = new Name("Benji", "Weber");
If.instance(name, (String first, String last) -> {
   System.out.println(first.toLowerCase() + last.toLowerCase()); // prints benjiweber
});

It takes a record (Name) and a lambda where the method parameters are of the same types as the component types in the record. It deconstructs the record component parts and passes them to the lambda to use (assuming the record really matches).

We could also use it as an expression to return a value, as long as we provide a fallback for the case where the pattern does not match.

Object zoo = new Zoo(new Duck("Quack"), new Dog("Woof"));
 
String result = withFallback("Fail").
    If.instance(zoo, (Duck duck, Dog dog) ->
       duck.quack() + dog.woof()
    ); // result is QuackWoof

So how does this work? 

If.instance is a static method which takes an Object of unknown type (we hope it will be a Record), and a lambda function that we want to pattern match against the provided object.

How can we use a lambda as a type pattern? We can use the technique from my lambda type references article—have the lambda type be a SerializableLambda which will allow us to use reflection to read the types of each parameter. 

static <T,U,V> void instance(Object o, MethodAwareTriConsumer<T,U,V> action) {
}

So we start with something like the above, a method taking an object and a reflectable lambda function.

Next we can make use of pattern matching again to check if it’s a record.

static <T,U,V> void instance(Object o, MethodAwareTriConsumer<T,U,V> action) {
   if (o instanceof Record r) {
	// now we know it's a record
   }
}

Records allow reflection on their component parts. Let’s check whether we have enough component parts to match the pattern.

static <T,U,V> void instance(Object o, MethodAwareTriConsumer<T,U,V> action) {
   if (o instanceof Record r) {
       if (r.getClass().getRecordComponents().length < 3) {
           return;
       }
 
	 // at this point we have a record with enough components and can use them.
   }
}

Now we can invoke the passed action itself:

action.tryAccept((T) nthComponent(0, r), (U) nthComponent(1, r), (V) nthComponent(2, r));

Where nthComponent uses reflection to access the relevant component property of the record.

private static Object nthComponent(int n, Record r)  {
   try {
       return r.getClass().getRecordComponents()[n].getAccessor().invoke(r);
   } catch (Exception e) {
       throw new RuntimeException(e);
   }
}

tryAccept is a helper default method I’ve added in MethodAwareTriConsumer. It checks whether the types of the provided values match the method signature before trying to pass them, avoiding a ClassCastException.

interface MethodAwareTriConsumer<T,U,V> extends TriConsumer<T,U,V>, ParamTypeAware {
   default void tryAccept(T one, U two, V three) {
       if (acceptsTypes(one, two, three)) {
           accept(one, two, three);
       }
   }
   default boolean acceptsTypes(Object one, Object two, Object three) {
       return paramType(0).isAssignableFrom(one.getClass())
               && paramType(1).isAssignableFrom(two.getClass())
               && paramType(2).isAssignableFrom(three.getClass());
   }
 
   default Class<?> paramType(int n) {
       int actualParameters = method().getParameters().length; // captured final variables may be prepended
       int expectedParameters = 3;
       return method().getParameters()[(actualParameters - expectedParameters) + n].getType();
   }
}

Putting all this together, we can pattern match against Objects of unknown type, and deconstruct them if they’re records matching the provided lambda type-pattern.

record Colour(Integer r, Integer g, Integer b) {}
 
Object unknown = new Colour(5,6,7); // note the Object type
 
int result = withFallback(-1).
    If.instance(unknown, (Integer r, Integer g, Integer b) ->
       r + g + b
    );
 
assertEquals(18, result);

Degrading safely if the pattern does not match

Object unknown = new Name("benji", "weber");
 
int result = withFallback(-1).
    If.instance(unknown, (Integer r, Integer g, Integer b) ->
       r + g + b
    );
 
assertEquals(-1, result);

Code for the record deconstruction, and several more examples, is all in this test on GitHub. Hopefully all this will be made redundant by future enhancements to Java’s type patterns :)

Filed under ContinuousDelivery, XP.

“We got lucky”—it’s one of those phrases I listen out for during post incident or near-miss reviews. It’s an invitation to dig deeper; to understand what led to our luck. Was it pure happenstance? …or have we been doing things that increased or decreased our luck?   

There’s a saying of apparently disputed origin: “Luck is when preparation meets opportunity”. There will always be opportunity for things to go wrong in production. What does the observation “we got lucky” tell us about our preparation? 

How have we been decreasing our luck?

What unsafe behaviour have we been normalising? It can be the absence of things that increase safety. What could we start doing to increase our chances of repeating our luck in a similar incident? What will we make time for? 

“We were lucky that Amanda was online, she’s the only person who knows this system. It would have taken hours to diagnose without her” 

How can we improve collective understanding and ownership? 

Post incident reviews are a good opportunity for more of the team to understand, but we don’t need to wait for something to go wrong. Maybe we should dedicate a few hours a week to understanding one of our systems together? What about trying pair programming? Chaos engineering?

How can we make our systems easier to diagnose without relying on those who already have a good mental model of how they work? Without even relying on collaboration? How will we make time to make our systems observable? What would be the cost of “bad luck” here? Maybe we should invest some of it in tooling.

If “we got lucky” implies that we’d be unhappy with the unlucky outcome, then what do we need to stop doing to make more time for things that can improve safety? 

How have we been increasing our luck? 

I love the extreme programming idea of looking for what’s working, and then turning up the dials

Let’s seek to understand what preparation led to the lucky escape, and think how we can turn up the dials.

“Sam spotted the problem on our SLIs dashboard”

Are we measuring what matters on all of our services? Or was part of “we got lucky” that it happened to be one of the few services where we happen to be measuring the things that matter to our users? 

“Liz did a developer exchange with the SRE team last month and learned how this worked”

Should we make more time for such exchanges and other personal learning opportunities?

“Emily remembered she was pairing with David last week and made a change in this area”

Do we often pair? What if we did more of it?

How frequently do we try our luck?

If you’re having enough production incidents to be able to evaluate your preparation, you’re probably either unlucky or unprepared ;)

If you have infrequent incidents you may be well prepared but it’s hard to tell. Chaos engineering experiments are a great way to test your preparation, and practice incident response in a less stressful context. It may seem like a huge leap from your current level of preparation to running automated chaos monkeys in production, but you don’t need to go straight there. 

Why not start with practice drills? You could have a game host who comes up with a failure scenario. You can work up to chaos in production. 

Dig deeper: what are the incentives behind your luck?

Is learning incentivised in your team, or is there pressure to get stuff shipped? 

What gets celebrated in your team? Shipping things? Heroics when production falls over? Or time spent thinking, learning, working together?

Service Level Objectives (SLOs) are often used to incentivise (enough) reliability work versus feature work: if the SLO is under threat, we need to prioritise reliability.

I like SLOs, but by the time the SLO is at risk it’s rather late. Adding incentives to counter incentives risks escalation and stress. 

What if instead we removed (or reduced) the existing incentives to rush and sacrifice safety, rather than trying to counter them with extra incentives for safety? 🤔

Filed under Java.

Some time ago I wrote a post about creating an embedded DSL for HTML in Java. Sadly, it was based on an abuse of lambda name reflection that was later removed from Java.

I thought I should do a follow-up, because a lot of people still visit the old article. While it’s no longer possible to use lambda parameter names in this way, we can still get fairly close.

The following approach is slightly less concise. That said, it does have some benefits over the original:

a) You no longer need to have parameter name reflection enabled at compile time.

b) The compiler can check your attribute names are valid, and you can autocomplete them.

What does it look like? 

html(
   head(
       title("Hello Html World"),
       meta($ -> $.charset = "utf-8"),
       link($->{ $.rel=stylesheet; $.type=css; $.href="/my.css"; }),
       script($->{ $.type= javascript; $.src="/some.js"; })
   ),
   body(
       div($-> $.cssClass = "article",
           a($-> $.href="https://benjiweber.com/",
               span($->$.cssClass="label", "Click Here"),
               img($->{$.src="/htmldsl2.png"; $.width=px(25); $.height=px(25); })
           ),
           p(span("some text"), div("block"))
       )
   )
)

This generates the following HTML:

<html>
 <head>
   <title>Hello Html World</title>
   <meta charset="utf-8" />
   <link rel="stylesheet" type="css" href="/my.css" />
   <script type="text/javascript" src="/some.js" ></script>
 </head>
 <body>
   <div class="article">
     <a href="https://benjiweber.com/">
       <span class="label">Click Here</span>
       <img src="/htmldsl2.png" width="25" height="25" />
     </a>
     <p>
       <span>some text</span>
       <div>block</div>
     </p>
   </div>
 </body>
</html>

You get nice autocompletion, and feedback if you specify inappropriate values.

You’ll also get a helping hand from the types to avoid putting tags in inappropriate places.

Generating Code

As it’s Java you can easily mix other code to generate markup dynamically:

assertEquals(
       """
       <html>
         <head>
           <meta charset="utf-8" />
         </head>
         <body>
           <p>Paragraph one</p>
           <p>Paragraph two</p>
           <p>Paragraph three</p>
         </body>
       </html>
       """.trim(),
       html(
           head(
               meta($ -> $.charset = "utf-8")
           ),
           body(
               Stream.of("one","two","three")
                   .map(number -> "Paragraph " + number)
                   .map(content -> p(content))
           )
       ).formatted()
);

And the code can help you avoid injection attacks by escaping literal values: 

assertEquals(
       """
       <html>
         <head>
           <meta charset="utf-8" />
         </head>
         <body>
           <p>&lt;script src="attack.js"&gt;&lt;/script&gt;</p>
         </body>
       </html>
       """.trim(),
       html(
           head(
               meta($-> $.charset = "utf-8")
           ),
           body(
               p("<script src=\"attack.js\"></script>")
           )
       ).formatted()
);

How does it work?

There’s only one “trick” here that’s particularly useful for DSLs: using the Parameter Objects pattern from my lambda type references post.

The lambdas used for specifying the tag attributes are “aware” of their own types, and capable of instantiating the configuration they specify.

When we call 

meta($ -> $.charset="utf-8")

We make a call to 

default Meta meta(Parameters<Meta> params, Tag... children) {}

The lambda specifying the attribute config is structurally equivalent to the Parameters<Meta> type. This type provides a get() function that instantiates an instance of Meta, then passes the new instance to the lambda function to apply the config.

public interface Parameters<T> extends NewableConsumer<T> {
   default T get() {
       T t = newInstance();
       accept(t);
       return t;
   }
}

Under the hood the newInstance() method uses reflection to examine the SerializedLambda contents and find the type parameter (in this case “Meta”) before instantiating it.
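If you’re curious, a minimal sketch of the idea looks something like the following. The names here (NewInstanceSketch, and this cut-down NewableConsumer) are mine for illustration, not the library’s actual API, and the real code handles more cases:

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Consumer;

public class NewInstanceSketch {

    // A serializable lambda has a compiler-generated writeReplace method that
    // returns a SerializedLambda, which records the instantiated method type,
    // e.g. "(LNewInstanceSketch$Meta;)V" for a Consumer<Meta>.
    interface NewableConsumer<T> extends Consumer<T>, Serializable {
        @SuppressWarnings("unchecked")
        default T newInstance() {
            try {
                Method writeReplace = getClass().getDeclaredMethod("writeReplace");
                writeReplace.setAccessible(true);
                SerializedLambda lambda = (SerializedLambda) writeReplace.invoke(this);
                String signature = lambda.getInstantiatedMethodType();
                // Strip "(L" and ";...)V" to recover the parameter's class name
                String className = signature
                        .substring(2, signature.indexOf(';'))
                        .replace('/', '.');
                return (T) Class.forName(className).getDeclaredConstructor().newInstance();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        default T get() {
            T t = newInstance();
            accept(t); // apply the configuration the lambda describes
            return t;
        }
    }

    public static class Meta {
        public String charset;
    }

    public static void main(String[] args) {
        NewableConsumer<Meta> params = $ -> $.charset = "utf-8";
        Meta meta = params.get();
        System.out.println(meta.charset); // utf-8
    }
}
```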

You can follow the code or see the previous post which explains it in a bit more detail.

Add Mixins

It’s helpful to use interfaces as mixins to avoid having to have one enormous class with all the builder definitions. 

public interface HtmlDsl extends
   Html.Dsl,
   Head.Dsl,
   Title.Dsl,
   Meta.Dsl,
   Link.Dsl,
   Script.Dsl,
   Body.Dsl,
   Div.Dsl,
   Span.Dsl,
   A.Dsl,
   P.Dsl,
   Img.Dsl {}

Each tag definition then contains its own builder methods. We compose them together into a single HtmlDsl interface for convenience. This saves having to import hundreds of different methods. By implementing the Dsl interface a consumer gets access to all the builder methods.

Show me the code

It’s all on github. I’d start from the test examples. Bear in mind that it’s merely a port of the old proof of concept to a slightly different approach. I hope it helps illustrate the technique. It’s in no way attempting to be a complete implementation.

This approach can also be useful as an alternative to the builder pattern for passing a specification or configuration to a method. There’s another example on the type references article.

What else could you use this technique for?

Posted by & filed under XP.

“How was your day?” “Ugh, I spent all day in meetings, didn’t get any work done!” 

How often have you heard this exchange?

It makes me sad because someone’s day has not been joyful; work can be fun. 

I love a whinge as much as the next Brit; maybe if we said what we mean rather than using the catch-all “meetings” we could make work joyful.

Meetings are work

Meetings are work. It’s a rare job where you can get something done alone without collaborating with anyone else. There are some organisations that thrive with purely async communication. Regardless, if you’re having meetings let’s recognise that they are work. 

What was it about your meeting-full day that made you sad? It doesn’t have to be that way.

Working together can be fun

I’ve seen teams after a day of ensemble (mob) programming. Exhausted, yet elated at the amount they’ve been able to achieve together; at the breakthroughs they’ve made. Yet a group of people, working together, on the same thing, sounds an awful lot like a meeting. Aren’t those bad‽

Teams who make time together for a full day of planning, who embrace the opportunity to envision the future together, can sometimes come away filled with hope. Hope that better things are within their grasp than they previously believed possible.

Yet the more common experience of meetings seems synonymous with “waste of time” or “distraction from real work”. Why is this? Why weren’t they useful?

One team’s standup can be an energising way to kick off the day. Hearing interesting things we collectively learned since yesterday. Deciding together what that means for today’s plan: who will work with whom on what?

For another team it may be a depressing round of status updates that shames people for not achieving as much as they’d hoped.

How do we make meetings better?

A first step is talking about what did or didn’t work, rather than accepting they have to be this way. Because there’s no “one weird trick” that will make your meetings magical. You’ll need to find what works for your team.

Why should you care? You probably prefer fun work. If you could make your meetings a little more fun you might enjoy your work a lot more.

Meetings beget meetings. Running out of time. Follow-ups. Clarifying things that were confusing from the first meeting… Ineffective meetings breed. Tolerating bad meetings leads to more misery.

Saying what we mean

Here are some things we could say that are more specific:

We didn’t need a meeting for that

Was it purely a broadcast of information with no interactivity? Could we have handled it asynchronously via email/irc/slack etc?

I didn’t need to be there

No new information for you? Nothing you could contribute? If you’re not adding value, how about applying the law of mobility and leaving / opting out in future? Or feed back to the organiser. Maybe they’re seeing value you’re adding that you’re oblivious to.

I don’t know what that meeting was for

How about we clarify the goal for next time? Make it a ground rule for future meetings. If it’s worth everyone making time for, it’s worth stating the purpose clearly so people can prepare & participate.

It wasn’t productive

Was the meeting to make a decision, and we came out without either deciding anything or learning anything?

Was the meeting to make progress towards a shared goal, and it feels like we talked for an hour and achieved nothing?

Perhaps we’d benefit from a facilitator next time.

It was boring

Can we try mixing up the format? Could you rotate the facilitator to get different styles? How can you engage everyone? Or does the boredom indicate that the topic is not worth a meeting?

If it is important but still boring, how do we make it engaging? It’s telling that “workshop”, “retrospective”, “hackathon”, and other more specific names don’t have the same connotation as the catch-all “meetings”. Just giving the activity a name shows that someone has thought about an appropriate activity that will engage the participants to achieve a goal.

I needed more time to think

Could we share proposals for consideration beforehand? Suggest background reading to enable people to come prepared? Allocate time for reading and thinking in the meeting?

It was too long

We could have achieved the same outcome in 5 minutes but ended up talking in circles for an hour. 

I didn’t hear from ____

Did we exclude certain people from the conversation, intentionally or unintentionally? What efforts can you make to create a space for everyone to participate?

Not enough focus time

Do you need to defragment your calendar? Cramming in activities that need deep focus in gaps between meetings is not a recipe for success. Do you need to ask your manager for help rescheduling meetings that you can’t control? Should you be going to them all or can you trust someone else to represent you?

Too many context switches

Even if you don’t need focus time, context switching from one meeting to another can be exhausting. Are you or your team actively involved in too many different things? Can you say no to more? Can you work with others and reschedule meetings to give each day a focus?

It wasn’t as important as other work

Maybe you’re wasting lots of time planning things you might never get to and would be better off focusing on what’s important right now? Is your whole team attending something that you could send a representative to? Perhaps reading the minutes will be enough. 

Highlight the value

We decided on a database for the next feature 

We learned how the production incident occurred

We heard the difficulty the customer is having with…

We made a plan for the day 

We shared how we halved our production lead time

We realised that our solution won’t work

We agreed some coding principles

Tackling your meetings

What’s your least valuable meeting? Which brings you the least joy? 

What’s your most valuable meeting? Which brings you the most joy? 

What’s the difference between these? What made them good or bad?

Turn up the good; vote with your feet on the bad.

Meandering path towards value

Posted by & filed under ContinuousDelivery, XP.

We design systems around the size of delays that are expected. You may have seen the popular table “latency numbers every programmer should know” which lists some delays that are significant in technology systems we build.

Teams are systems too. Delays in operations that teams need to perform regularly are significant to their effectiveness. We should know what they are.

Ssh to a server on the other side of the world and you will feel frustration; delay in the feedback loop from keypress to that character displayed on the screen. 

Here’s some important feedback loops for a team, with feasible delays. I’d consider these delays tolerable by a team doing their best work (in contexts I’ve worked in). Some teams can do better, lots do worse.

Operation                                     | Delay
----------------------------------------------|---------------------
Run unit tests for the code you’re working on | < 100 milliseconds
Run all unit tests in the codebase            | < 20 seconds
Run integration tests                         | < 2 minutes
From pushing a commit to live in production   | < 5 minutes
Breakage to paging oncall                     | per SLO/error budget
Team feedback                                 | < 2 hours
Customer feedback                             | < 1 week
Commercial bet feedback                       | < 1 quarter

What are the equivalent feedback mechanisms for your team? How long do they take? How do they influence your work?

Feedback Delays Matter

They represent how quickly we can learn. Keeping the delays as low as in the table above means we can get feedback as fast as we make any meaningful progress. Our tools and systems do not hold us back.

Feedback can be synchronous if you keep the loops this fast. You can wait for feedback and immediately use it to inform your next steps. This helps avoid the costs of context switching.

With fast feedback loops we run tests, and fix broken behaviour. We integrate our changes and update our design to incorporate a colleague’s refactoring.

Fast is deploying to production and immediately addressing the performance degradation we observe. It’s rolling out a feature to 1% of users and immediately addressing errors some of them see.

With slow feedback loops we run tests and respond to some emails while they run, investigate another bug, come back and view the test results later. At this point we struggle to build a mental model to understand the errors. Eventually we’ll fix them and then spend the rest of the afternoon trying to resolve conflicts with a branch containing a week’s changes that a teammate just merged.

With slow deploys you might have to schedule a change to production, risking being surprised by errors reported asynchronously later that week, when it has finally gone live. Meanwhile users have been experiencing problems for hours.

Losing Twice

As feedback delays increase, we lose twice:

a) We waste more time waiting for these operations (or worse—incur context switching costs as we fill the time waiting)

b) We are incentivised to seek feedback less often, since it is costly to do so. Thereby wasting more time & effort going in the wrong direction.

I picture this as a meandering path towards the most value. Value often isn’t where we thought it was at the start. Nor is the route to it often what we envisioned at the start.

We waste time waiting for feedback. We waste time by following our circuitous route. Feedback opportunities can bring us closer to the ideal line.

When feedback is slow it’s like setting piles of money on fire. Investment in reducing feedback delays often pays off surprisingly quickly—even if it means pausing forward progress while you attend to it.

This pattern of going in slightly the wrong direction then correcting repeats at various granularities of change. From TDD, to doing (not having) continuous integration. From continuous deployment to testing in production. From customers in the team, to team visibility of financial results.

Variable delays are even worse

In recent times you may have experienced the challenge of having conversations over video links with significant delays. This is even harder when the delay is variable. It’s hard to avoid talking over each other. 

Similarly, it’s pretty bad if we know it’s going to take all day to deploy a change to production. But it’s far worse if we think we can do it in 10 minutes, when it actually ends up taking all day. Flaky deployment checks, environment problems, and change conflicts create unpredictable delays.

It’s hard to get anything done when we don’t know what to expect. Like trying to hold a video conversation with someone on a train that’s passing through the occasional tunnel. 

Measure what Matters 

The time it takes for key types of feedback can be a useful lead indicator on the impact a team can have over the longer term. If delays in your team are important to you why not measure them and see if they’re getting better or worse over time? This doesn’t have to be heavyweight.

How about adding a timer to your deploy process and graphing the time it takes from start to production over time? If you don’t have enough datapoints to plot deploy delay over time that probably tells you something ;)
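A minimal timer wrapper could be as simple as the hypothetical sketch below (the command, CSV file name, and class name are all illustrative; adapt them to your pipeline):

```java
// Hypothetical sketch, not part of any real deploy tooling: run the given
// command, time it, and append "timestamp,seconds,exitCode" to a CSV that
// can be graphed later.
import java.io.FileWriter;
import java.io.IOException;
import java.time.Instant;

public class DeployTimer {

    public static long timeDeploy(String... command) {
        long start = System.nanoTime();
        try {
            Process deploy = new ProcessBuilder(command).inheritIO().start();
            int exitCode = deploy.waitFor();
            long seconds = (System.nanoTime() - start) / 1_000_000_000L;
            try (FileWriter log = new FileWriter("deploy-times.csv", true)) {
                log.write(Instant.now() + "," + seconds + "," + exitCode + "\n");
            }
            return seconds;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("deploy took " + timeDeploy(args) + "s");
    }
}
```

Wrap your actual deploy command (e.g. `java DeployTimer ./deploy.sh`) and the CSV accumulates one row per deploy, ready to graph.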

Or what about a physical/virtual wall for waste? Add to a tally or add a card every time you’ve wasted 5 minutes waiting. Make it visible. How big did the tally get each week?

What do the measurements tell you? If you stopped all feature work for a week and instead halved your lead time to production, how soon would it pay off?

Would you hit your quarterly goals more easily if you stopped sprinting and first removed the concrete blocks strapped to your feet?

What’s your experience?

Every team has a different context. Different sorts of feedback loops will be more or less important to different teams. What’s important enough for your team to measure? What’s more important than I’ve listed here?

What is difficult to keep fast? What gets in the way? What is so slow in your process that synchronous feedback seems like an unattainable dream? 

Posted by & filed under XP.

Extreme Programming describes five values: communication, feedback, simplicity, courage, and respect. I think that humility might be more important than all of these. 

Humility enables compassion. Compassion both provides motivation for and maximises the return on technical practices. Humility pairs well with courage, helps us keep things simple, and makes feedback valuable.

Humility enables Compassion 

Humility helps you respect the people you’re working with and see what they bring. We can’t genuinely respect them if we’re feeling superior; if we think we have all the answers. 

If we have compassion for our teammates (and ourselves) we will desire to minimise their suffering. 

We will want to avoid inflicting difficult merges on anyone. We will want to avoid wasting their time, or forcing them to redo work after being surprised by our changes. The practice of Continuous Integration can come from the desire to minimise suffering in this way.

We will want those who come after us in the future to be able to understand our work—understand the important behaviour and decisions we made. We’ll want them to have the best safety net possible. Tests and living documentation such as ADRs can come from this desire. 

We’d desire the next person to have the easiest possible job to change or build upon what we’ve started, regardless of their skill and knowledge. Simplicity and YAGNI can come from this desire.

Humility and compassion can drive us to be curious: what are the coding and working styles and preferences of our team mates? What’s the best way to collaborate to maximise my colleagues’ effectiveness?

Without compassion we might write code that is easiest for ourselves to understand—using our preferred idioms and style without regard for how capable the rest of the team is to engage with it.

Without humility our code might show off our cleverness.

Humility to keep things simple 

To embrace simplicity we have to admit that we might be wrong about what we’ll need in the future.

Humility helps us acknowledge that we will find this harder and harder to maintain in the future. Even if we’re still part of the team. We all have limited capacity to deal with complexity.

We need humility to realise we will likely be wrong about what we’ll need in the future. We’ll have courage to try to predict our direction, but strive for the simplest possible code to support what we have now. This will make it easier for whoever must change it when we realise where we were wrong.

Humility to value feedback

To value feedback we have to admit that we might be wrong.

Why pair program if you already know what is best and have nothing to learn from others? They’ll just slow you down!

Why talk with the customer regularly to understand their needs? We’re the experts!

Why do user testing? Anybody could use this!

So many tech practices are about getting feedback fast so we can iterate on code, on product, and on our team ways of working. Humility helps us accept that we can be better.

Letting design emerge from the tests with TDD requires the humility to accept that we might not have the best design already in mind. We can’t have foreseen all the interactions with the rest of the code and necessary behaviours.

Humility maximises blamelessness and learning opportunities. We talk about blameless post incident reviews and retrospectives: focusing on understanding and learning from things that happen. Even if we don’t outwardly blame those involved it’s easy to feel slightly superior: that there’s no way we would have made the mistake that triggered the incident. A humble participant would have more compassion for those involved. A humble participant would see that they are themselves part of the system of people that has resulted in this outcome. There is always something to learn about the consequences of our own actions and inactions.

Humility pairs well with Courage

Courage is not overconfidence. Courage is not fearlessness. Courage is being able to do something even though it might be hard or scary.

With humility we know we are fallible and may be wrong. We courageously seek out feedback to learn as early as possible.

Deploying changes to production always carries certain risk; even with safety nets like tests and canary deploys. (Delaying deploys creates even more risk).

An overconfident person might avoid deploying to production until they’re finished with a large chunk of work. After all, they know what they’re doing! Figuring out how to break it down into separately deployable chunks will take more time and be inefficient.

A fearless person might fire and forget changes into production. This is a safe change after all. Click deploy; go to the pub!

A humble person, on the other hand, understands they’re working with a complex system; bigger than they can fit in their head. They understand that they can’t be certain of the results of their change, no matter the precautions they’ve taken. They have the courage to deploy anyway, acting to observe reality and find out whether their fallible prediction was correct.

Posted by & filed under Java.

A few years back I posted about how to implement state machines that only permit valid transitions at compile time in Java.

This used interfaces instead of enums, which had a big drawback—you couldn’t guarantee that you know all the states involved. Someone could add another state elsewhere in your codebase by implementing the interface.

Java 15 brings a preview feature of sealed classes. Sealed classes enable us to solve this downside. Now our interface based state machines can not only prevent invalid transitions but also be enumerable like enums.

If you’re using JDK 15 with preview features enabled you can try out the code. This is how it looks to define a state machine with interfaces:

sealed interface TrafficLight
       extends State<TrafficLight>
       permits Green, SolidAmber, FlashingAmber, Red {}
static final class Green implements TrafficLight, TransitionTo<SolidAmber> {}
static final class SolidAmber implements TrafficLight, TransitionTo<Red> {}
static final class Red implements TrafficLight, TransitionTo<FlashingAmber> {}
static final class FlashingAmber implements TrafficLight, TransitionTo<Green> {}

The new parts are “sealed” and “permits”. Now it is a compile-time failure to define a new implementation of TrafficLight, in addition to the existing behaviour where it’s a compile-time failure to perform a transition that the traffic light states do not allow.

N.b. you can also skip the compile-time checked version and still use the type definitions to check the transitions at runtime.

Multiple transitions are possible from a state too:

static final class Pending 
  implements OrderStatus, BiTransitionTo<CheckingOut, Cancelled> {}

Thanks to sealed classes we can also now do enum style enumeration and lookups on our interface based state machines.

sealed interface OrderStatus
       extends State<OrderStatus>
       permits Pending, CheckingOut, Purchased, Shipped, Cancelled, Failed, Refunded {}
 
 
@Test public void enumerable() {
  assertArrayEquals(
    array(Pending.class, CheckingOut.class, Purchased.class, Shipped.class, Cancelled.class, Failed.class, Refunded.class),
    State.values(OrderStatus.class)
  );
 
  assertEquals(0, new Pending().ordinal());
  assertEquals(3, new Shipped().ordinal());
 
  assertEquals(Purchased.class, State.valueOf(OrderStatus.class, "Purchased"));
  assertEquals(Cancelled.class, State.valueOf(OrderStatus.class, "Cancelled"));
}

These are possible because JEP 360 provides a reflection API with which one can enumerate the permitted subclasses of an interface. (Side note: the JEP says getPermittedSubclasses() but the implementation seems to use permittedSubclasses().)
We can use this to add the above convenience methods to our State interface, allowing the values(), ordinal(), and valueOf() lookups.

static <T extends State<T>> List<Class> valuesList(Class<T> stateMachineType) {
   assertSealed(stateMachineType);
 
   return Stream.of(stateMachineType.permittedSubclasses())
       .map(State::classFromDesc)
       .collect(toList());
}
 
static <T extends State<T>> Class<T> valueOf(Class<T> stateMachineType, String name) {
   assertSealed(stateMachineType);
 
   return valuesList(stateMachineType)
       .stream()
       .filter(c -> Objects.equals(c.getSimpleName(), name))
       .findFirst()
       .orElseThrow(IllegalArgumentException::new);
}
static <T extends State<T>, U extends T> int ordinal(Class<T> stateMachineType, Class<U> instanceType) {
   return valuesList(stateMachineType).indexOf(instanceType);
}
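In the finalised feature (JDK 17 and later) the reflection method did end up being named getPermittedSubclasses(). Here’s a minimal standalone sketch of the enumeration on a current JDK, with the State/TransitionTo machinery omitted so it only shows the reflective part:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Standalone sketch (JDK 17+): enumerate the permitted subclasses of a
// sealed interface via reflection. This is the mechanism behind the
// values()/ordinal()/valueOf() style lookups described above.
public class Enumerate {

    sealed interface TrafficLight permits Green, SolidAmber, Red, FlashingAmber {}
    static final class Green implements TrafficLight {}
    static final class SolidAmber implements TrafficLight {}
    static final class Red implements TrafficLight {}
    static final class FlashingAmber implements TrafficLight {}

    public static List<String> names() {
        return Arrays.stream(TrafficLight.class.getPermittedSubclasses())
                .map(Class::getSimpleName)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(names());
    }
}
```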

There are more details on how the transition checking works and more examples of where this might be useful in the original post. Code is on github.

Posted by & filed under XP.

It’s become less common to hear people referred to as “resources” in recent times. There are more trendy “official vocab guidelines”, but what’s really changed? There are still phrases in common use that sound good but betray the same mindset.

I often hear people striving to hire and retain the best talent as if that is a strategy for success, or as if talent is a limited resource we must fight over. 

Another common one is to describe employees as your “greatest asset”.

I’d like to believe both phrases come from the good intention of valuing people; valuing individuals as per the agile manifesto. I think these phrases betray a lack of consideration of the “…and interactions”.

The implication is organisations are in a battle to win and then protect as big a chunk as they can of a finite resource called “talent”. It’s positioned as a zero-sum game. There’s an implication that the impact of an organisation is a pure function of the “talent” it has accumulated. 

People are not Talent. An organisation can amplify or stifle the brilliance of people. It can grow skills or curtail talent.

Talent is not skill. Talent gets you so far but skills can be grown. Does the team take the output that the people in it have the skill to produce? Or does the team provide an environment in which everyone can increase their skills and get more done than they could alone? 

We might hire the people with the most pre-existing talent, and achieve nothing if we stifle them with a bureaucracy that prevents them from getting anything done. Organisational scar tissue that gets in the way; policies that demotivate.

Even without the weight of bureaucracy many teams are really just collections of individuals with a common manager. The outcomes of such groups are limited by the talent and preexisting skill of the people in them. 

Contrast this with a team into which you can hire brilliant people who’ve yet to have the opportunity of being part of a team that grows them into highly skilled individuals. A team that gives everyone space to learn, provides challenges to stretch everyone, provides an environment where it’s safe to fail. Teams that have practices and habits that enable them to achieve great things despite the fallibility and limitations of the talent of each of the people in the team. 

“when you are a Bear of Very Little Brain, and you Think of Things, you find sometimes that a Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it.”—AA Milne 

While I’m a bear of very little brain, I’ve had the opportunity to be part of great teams that have taught me habits that help me achieve more than I can alone.

Habits like giving and receiving feedback. Like working together to balance each other’s weaknesses and learn from each other faster. Like making predictions and observing the results. Like investing in keeping things simple so they can fit into my brain. Like working in small steps. Like scheduled reflection points to consider how to improve how we’re working. Like occasionally throwing away the rules and seeing what happens.

Habits like thinking less and sensing more.