Posted by & filed under XP.

How many better ways of working have you uncovered lately?

This past year a lot of people have been forced into an unplanned and unwanted experiment. Many teams have had to figure out how to work in a remote-first way for the first time. 

I was privileged to be working in a team that was already distributed, which made the transition easier for us than for many. But adapting to working without office equipment was still new. 

Going without the offices that we were used to was uncomfortable in many ways. It was particularly difficult for those not privileged enough to have good working environments at home. 

While nobody would wish the challenges of the past year upon anyone, disruption to the ways we were used to working helped us uncover some better ones. Nothing groundbreaking, but some things were driven home.

Like how much easier video discussions are when everyone is fully visible, at the same size on the screen, and close-mic’ed. That beats the hybrid experience, where part of the team is using a conference room with a shared microphone and camera.

We also learned how bad our open-plan office environment was for collaborative working, compared with the experience of two people pairing over a video chat, each in their own quiet space.

We had some resilience because we were already distributed. How much more resilient might we have been, had we been more used to safely experimenting with and adapting the ways we worked?

The opening sentence of the Agile Manifesto doesn’t seem to be discussed as much as the values and principles. I think it’s arguably the most interesting bit.

“We are uncovering better ways of developing software by doing it and helping others do it.”

The Agile Manifesto

This brings three questions to mind that might be useful reflection prompts:

  1. How many better ways of developing software have you uncovered recently? Have we already found the best ways of developing software? I hope not!
  2. Are the people who are developing software in your organisation the same people who are uncovering better ways of doing it? Or does management tell people how to work?
  3. When deciding how to work, are you thinking foremost of what’s easiest for you, or what’s most helpful for others?

We didn’t need a pandemic to try the experiment of no office, but for many folks it was the first time trying it. What if we experimented more, with self-selected, smaller, lower-risk experiments?

How much has your team changed the way it works lately? Could you be stuck in a local maximum? Are your ways of working best for your team as it is now, or tradition inherited from those who came before, who may have been in a different context?

Every person is different. Every team is different. Transplanting ways of working from one team to another may not yield the best working environment. 

Ways of working often come from “things I’ve seen work before” or your preferences. Why would they be best for this team now? What are you doing to “uncover” what works best for you now?

Uncovering is a great metaphor. Move things around, remove some things, see what you discover! 

Teams often limit themselves by only trying experiments where they believe the outcome will be an improvement. Improvement is then limited by the experience and imagination of those present.

How about trying experiments where you expect the result to be no different? Or where you have no idea what will happen? Or even where you expect the result to be worse?

Here are some practical experiments you could try in your team to see what you learn:

Try removing something

What activities, ceremonies, processes, and practices do you have? What would happen if you stopped doing one of them? They may seem like purely beneficial things. Even if they are, by trying to remove them you might learn more about their value (or lack thereof) to you, as well as their influence on your behaviour.

  • Imagine there’s No Offices (We’ve had this experiment forced upon us lately)
  • Imagine there’s No Meetings. Maybe we’d expect more miscommunications and wasted effort. Perhaps worth testing—maybe we’ll understand the value of meetings more clearly by abstaining for a time. Maybe we’d also learn to communicate asynchronously through writing more clearly. 
  • Imagine there’s No Estimates. Do you estimate all your upcoming tasks to measure progress? What would happen if you stopped doing it? If you’re required to give estimates what if you estimated everything as one day?
  • Imagine there’s No Staging. Do you test your changes in a staging environment for a time before promoting to prod? What if you didn’t? How would you be forced to adapt to mitigate the risks you had been mitigating in staging?
  • Imagine there’s No Branches. Do you work independently from others on your team? What if you had to all work on the same branch? How would you adapt?
  • Imagine there’s No Backlogs. What if you discarded everything you’re not doing now or next? What would happen? Maybe worth trying.
  • Imagine there’s No Managers. Do you know what your manager is doing? What would happen if they were unavailable for a while? How would your team need to adapt?

Removing things might seem risky and likely to make things worse. But what can we know about our resilience if we only do experiments where we think we know the outcomes?

It can be interesting to try removing something and see if we achieve similar outcomes without it.

It can help you understand the real value of the practice or process to your team, understand your team better, and see how things normally go right.

Try constraining something

Add an artificial limit on something you do often and see what you learn. Here are some popular examples.

  • Keep tests green at all times
  • Implementation code may only be factored out of tests, not written directly. 
  • No more than 1 test in a commit
  • Deploy every time tests go green
  • Only one level of code indentation, or one dot per line
  • Only mock types you own
  • No fewer than 3 people on a task if you usually work alone; no more than 1 person on a task if you usually work paired.
  • No more than n tasks in progress
  • Don’t start a new task until the previous one is in production.
  • No changing code owned by another team. 

Try the opposite

What ways of working do you tend to choose? What if you tried the opposite for a time to see what you learn? Try to prove it’s no worse and see what happens.

  • Do you always take on the team’s frontend tasks? What if anyone but you took them?
  • Work together in pairs? Try working apart individually.
  • Work individually? Try working together in pairs.
  • Feel we need to plan better? What if we just started without planning, together?
  • Hiring to go faster? What if you tried having fewer people involved instead?

Try the extreme 

An idea from extreme programming is to look for what’s working and then try to turn it up to the maximum; turn the dials up to 11. 

A few ideas:

  • Pair programming on complicated tasks? Try pairing on boring tasks too.
  • Pairing? Try working together as a whole team in an ensemble/mob.
  • Deploying daily? Try to deploy hourly.
  • Do you declare a production incident when there’s a problem with customer impact? What if every “operational surprise” were treated as an incident?

Reflecting

Only run experiments that your whole team consents to. Set a time limit and a reflection point to review. 

Ask yourselves: What was surprising? Are we having more or less fun? What were the upsides? What were the downsides? Are we achieving more or less? 

Make it the default to revert to the previous ways of working. Normalise trying things you don’t expect to work, and reverting. Celebrate learning that something is not right for you. If every experiment becomes a permanent change for your team, then you’re not really experimenting, you’re just changing. 

Be bold enough to try things that probably won’t work. You might uncover new insights into your team, your colleagues, and what brings you joy and success at work.

Posted by & filed under Leadership, XP.

Instead of starting from “how do we hire top talent?”, start from “what are our weaknesses?”

Why are you hiring? Are you hiring to do more, or are you hiring to achieve more?

Design your hiring process to find the right people to shore up your teams’ weaknesses, rather than trying to find the best people. 

Instead of “how can we find the smartest people?” think about “how can we find people who will make our team stronger?”

People are not Talent. An organisation can amplify or stifle the brilliance of people. It can grow skills or curtail talent.

Often the language used by those hiring betrays their view of people as resources. Or, to use its current popular disguise: “talent”. 

“We hire top talent”, “the candidate didn’t meet our bar”, “our hiring funnel selects for the best”, “we hire smart people and get out of their way”. We design “fair tests” that are an “objective assessment” of how closely people match our preconceptions of good.

The starting point for the talent mindset is that we want to hire the smartest people in the “talent pool”. If only we can hire all the smartest people, that will give us a competitive advantage!

If you focus on hiring brilliant people, and manage to find people who are slightly smarter on average than your competitors, will you win? Achievements in tech seldom stem solely from the brilliance of any one individual. Successes don’t have a single root cause any more than failures do. 

We’re dealing with such complex systems; teams that can capture and combine the brilliance of several people will win. 

With the right conditions a team can be smarter than any individual. With the wrong conditions a team can be disastrously wrong, suffering from overconfidence and groupthink, or infighting and undermining each other. Hiring the right people to help create the right conditions in the team gives us a better chance of winning than hiring the smartest people.

Talent Mindset → Weaknesses Mindset

  • Find top Talent → Find skills that unlock our potential
  • Grow Team → Transform Team
  • Help teams do more things → Help teams achieve greater things
  • People who can do what we’re doing → People who can do things we can’t do
  • Hire the best person → Hire the person who’s best for the team
  • People who match our biased idea of good → People who have what we’re missing
  • Fair tests & objective assessment → Equal opportunity to demonstrate what they’d add to our team
  • Consistent process → Choose your own adventure
  • Hire the most experienced/skilled candidate → Hire the person who’ll have the greatest impact on the team’s ability
  • Number of candidates & conversion rate → How fast can we move for the right candidate?
  • Centralised process & bureaucracy → Teams choose their own people
  • Grading candidates → Collaborating with candidates
  • Prejudging what a good candidate looks like → Reflecting on our team’s weaknesses
  • Fungibility → Suitability
  • Smart people → People who make the team smarter
  • Intelligence → Amplifying others
  • Culture fit → Missing perspectives
  • People who’ve accomplished great things → People who’ll unblock our greatness
  • Exceptional individuals → People we can grow and who will grow us
  • What didn’t the candidate demonstrate against a checklist → What would the candidate add to our team

Talent-oriented hiring

If we start out with the intent to find the “best people” it will shape our entire process.

We write down our prejudices about what good looks like in job descriptions. We design a series of obstacles^W interviews to filter out candidates who don’t match these preconceptions. Those who successfully run the gauntlet are rewarded with an offer, and then we figure out how best to put them to use. 

Hiring processes are centralised to maximise efficiency and throughput. We look at KPIs around the number of candidates at each stage in the funnel, and conversion rates. 

Interviewing gets shared out around teams like a tax that everyone pays into. Everyone is expected to interview for the wider organisation.

Hiring committees are formed and processes are standardised. We try to ensure that every candidate is assessed as fairly as possible, against consistent criteria. 

We pat ourselves on the back about how we only hire the top 1% of applicants to our funnel. We’re hiring “top talent” and working here is a privilege. We’re the best of the best. 

Weaknesses-oriented hiring

There’s another approach. Instead of starting from our idea of talent and the strengths we’re looking for, start from our weaknesses. What’s missing in our team that could help us achieve more? 

Maybe we have a team of all seniors, frequently stuck in analysis. We need some fresh perspectives and bias for action.  

Maybe we are suffering from groupthink and need someone with a different background and new perspectives. Someone who can help generate some healthy debate?

Maybe we have all the technical skills we need but nobody is skilled at facilitating discussions. We struggle to get the best out of the team.

Perhaps we’re all fearful of a scaling challenge to come. We could use someone experienced who can lend the team some courage to meet the challenge.

Maybe those in the existing team specialise in frontend tech, and we’ve realised we need to build and run a data service. We could learn to do it, but bringing someone in with existing knowledge will help us learn faster.

Maybe we are all software engineers, but are spending most of our time diagnosing and operating production systems. Let’s add an SRE in the team.

Maybe we don’t even know what our weaknesses are—until we experience collaboration with the right person who shows us how to unlock our potential.

Identify your weaknesses. Use them to draft role descriptions and design your interview processes. 

Design your interviews to assess what the candidate brings to your team. How do they reduce your weaknesses and what strengths would they add? 

Leave space in the process to discover things people could bring to your team that you weren’t aware you needed. 

A talent-oriented process would assess how the candidate stacks up against our definition of good. A weaknesses-oriented process involves collaborating with the candidate, to see whether (and how) they might make your team stronger. 

If possible, have candidates join in with real work with the team. When this is not feasible for either side, design exercises that allow people on your team to collaborate with them. 

Pair together on a problem. Not what many companies call paired interviews, where it’s really an assessor watching you code under pressure. Instead, really work together to solve something. Help the candidate. Bounce ideas off each other. Share the typing, as you would if you were pairing with a colleague. Don’t judge whether they solved the problem the way you would have; see what you can achieve together.

A weaknesses-oriented process might mean saying no to someone eminently qualified and highly skilled; because you have their skills well covered in the team already. 

A weaknesses-oriented process might mean saying yes to someone inexperienced who’s good at asking the right questions. Someone whose feedback will help the experienced engineers in the team keep things simple and operable.

Why not both? 

It’s often worth thinking about when something good is not appropriate. There are rarely “best practices”, only practices that have proven useful in a given context.

At scale I can see the need for the talent-oriented hiring approach in addition to weaknesses-oriented.

Exposing all of your teams’ variety of hiring needs to candidates may create too much confusion.

You may well want a mechanism to ensure some similarity: to select for people who share the organisational values, and to find those with enough common core skills that they can move from team to team. Indeed, long-lived and continuously funded teams are not a given in all organisations.

If you’re getting thousands of applications a day, you’ll probably want a mechanism to improve the signal-to-noise ratio for teams wishing to hire. Especially if you don’t want to use education results as a filter. 

I suspect a lot of hiring dysfunction comes from people copying the (very visible) talent-oriented top-of-funnel hiring practices that big companies use, copy-pasting them as the entire hiring mechanism for a smallish team.

Start from reflecting on your weaknesses. Whose help could you use? Some of the practices from the talent-oriented approach may be useful, but don’t make them your starting point. Start from your weaknesses if you want strong teams.

Posted by & filed under Leadership, XP.

If you’re struggling with how to get to tidy code, fast feedback loops, and joyful work, or how to get permission or buy-in, try team habits.

Create working agreements of the form “When we see <observable trigger>, instead of <what we currently do> we will <virtuous action>”. This is a mechanism to pre-approve the desired action.

In last week’s “Level Up” newsletter, Pat Kua likened developers asking product managers whether to do activities such as refactoring and testing to a chef offering washing and cleaning to customers; i.e. it shouldn’t be a discussion.

I strongly agree, but I can see that some people might still be stuck with the how. Especially if you’re the only cook in the kitchen who seems to care that there is mess everywhere.  

I recently suggested every team should be paying attention to the delays in their feedback loops: keeping time to production under 5 minutes, and running all programmer tests in a matter of seconds. 

Measuring what matters is a good step towards being able to improve it. But it’s not enough. Especially if your manager, product manager, or fellow developers seem unperturbed by the build time creeping up, the flaky tests in your build, or the accumulating complexity.

How we get stuck

Maybe you point to graphs and metrics and are met with “yes but we have a deadline next week” or “just do something else while you’re waiting for the slow build”. We easily get stuck at how to break out of seeking permission.

How can we become a team with code we’re proud of? How can we become a team that has fast effective feedback loops?

I’m a big fan of radiating intent rather than asking for forgiveness, but what if you’re the only person on your team who seems to care? Acting autonomously takes courage. It sends a positive signal to your coworkers, but it could easily be shut down by an unsupportive or unaware manager.

You might give up when your small solo efforts make little headway against a mammoth task. It’s easy to become despondent when a new teammate rants about the state of the huge codebase you’ve been refactoring bit by bit for weeks. “Ah, but you should have seen it before” you sigh. 

People get stuck asking for permission. Asking permission, or taking a risk to do the right thing every time is wearisome. “Just do it” is easy to say but is risky. Especially if you have leaders who are reluctant to let go of control. 

Is quality code only something that teams with sympathetic managers can achieve? Are fast feedback loops something only very talented engineers can achieve? By no means!

A trap that lots of teams fall into is making the safe, sustainable, faster improvement path the special case. We create systems whereby people have to ask for permission to clean and to tidy. We need to change the expectations in the team: tolerating a messy workspace should be the deviance that needs permission or risk-taking, not vice versa.

How to change this? Propose a new beneficial habit to your team. 

“When we see <observable trigger>,
instead of <what we currently do>,
we will <virtuous action>”

How can you convince your team to try a new way of working? Make a prediction of the benefit and propose running an experiment for 2-4 weeks, after which the team can review. 

Good habits give teams superpowers. Habits can pre-approve the right actions. Habits move us from talking about doing things to taking the necessary actions, when they’re needed, continuously. 

What do I mean by good habits? Refactoring constantly, making the build a bit faster every day, talking with customers. Feedback loops that increase our chances of success. 

Make them team habits and the onus will be on managers and team members to convince each other to not do the good things rather than the other way round.

Examples of good habits

TDD

TDD is great for habit forming—it creates triggers to do the right thing. When we see a red failing test we trigger the habit of implementing new behaviour. We never have untested code because the habit we’ve formed for writing code is in response to a failing test. When we see a green test suite we trigger the habit of refactoring. Every few minutes we see a green test suite and trigger our habit: spending some time thinking about how the code could be tidier, easier to understand, and then making it so.

Sure we could convince our pair to skip the refactoring step, but the onus is on us to make the case for deviating from the sensible default, habitual path.

“When we write code, instead of starting to implement the feature, we will write a failing test first”

“When we see a red test suite, instead of writing lots of code, we will write the simplest possible code to make it pass”

“When we see a green test suite, instead of moving on to the next capability, we will stop and do at least one refactoring”

Zero Tolerance of Alerts

Are you suffering from over-alerting? Do you have flickery production alerts that go off frequently and then resolve themselves without action? Use them as a trigger for a habit. Agree as a team that every time an alert fires, the whole team will down tools, investigate, and work together. Every time an alert happens, come up with your best idea to stop it going off again for the same reason.

If that’s too extreme you could try with a nominated person investigating. The point is to make a habit. Pre-agree expected, good, behaviour in response to a common trigger. Then there’s no need to seek permission to do the right thing, plus there’s some social obligation to do the right thing.

“When we see an alert fire, instead of waiting to see if it recovers by itself, we will stop and investigate how to stop it firing again”

Max Build Time

Is your build getting slower and slower? You’ve made it measurable but it’s still not helping? Create a habit that will improve it over time. e.g. every time you see the build has got a minute slower, the next task the team picks up will be making the build at least two minutes faster.

Now the onus is on someone to argue that something else is more important at the time. The default, habitual, behaviour is to respond to a trigger of a slowed build with investing time in speeding it up.

“When we see the median build time cross the next whole minute, instead of ignoring it, we will prepend a task to our ‘next’ queue to reduce the build time by at least 2 minutes”

Time Waste Limit

Do you feel like your team is drowning in technical debt? Maybe you’re dealing with such a mountain of it that it feels like nothing you do will make a difference? Build a habit to make things better in the most impactful areas first. e.g. make wasting an hour understanding an area of the codebase trigger improvement: when you realise you’ve wasted an hour, update a shared team tally of hours wasted by area of code. When you go to add a tally mark and there are already a couple of marks there, drop what you were going to do and spend the rest of the day tidying that area up.

If you pre-agree to do this then the default behaviour is to make things a bit better over time. Permission seeking is needed to skip the cleanup.

“When we notice we’ve spent an hour trying to understand some complicated code, instead of pressing ahead, we will record the time wasted”

“When we notice our team has wasted 5 hours in one area of the code, instead of finishing our feature, we will spend the rest of the day tidying up”

Spend Limits

Does your team’s work always take far longer than expected? Are you struggling to deliver a slice of value in a week? Maybe even when you think a story can be completed in a day, it ends up taking two weeks? Make a habit triggered by the length of time spent on a given story: keep a tally of elapsed days, and when it hits double what you expected, stop and hold a retrospective to understand what you can learn from the surprise. 

“When we have spent 5 days on one story, instead of carrying on, we will hold a team retrospective to understand what we can learn to help us work smaller next time”

How to help your team form a good habit

Asking your team to try to form a new habit is something anyone can do. You don’t need to be a manager. 

  1. Look for a suitable trigger mechanism in day-to-day working.
  2. Think of a virtuous action that could be taken upon that trigger.
  3. Propose an experiment of a habit to try, combining steps 1 and 2.
  4. Write it down, as a working agreement for the team.

    “When we see a green test suite, instead of moving on to the next capability, we will stop and do at least one refactoring”
  5. Review your working agreements at your next retrospective.

There are lots of good habits that you can steal from other effective teams (such as TDD).

A team committing to try to form a habit will make a more effective experiment than one person trying to do the right thing against the flow. 

Let me know how it goes. What resistance do you run into? What are your team’s most powerful habits?

Posted by & filed under sql.

I love SQL, despite its many flaws.

Much is argued about functional vs object-oriented programming: different ways of instructing computers.

SQL is different. SQL is a language where I can ask the computer a question and it will figure out how to answer it for me.

Fluency in SQL is a very practical skill. It will make your life easier day to day. It’s not perfect (it has many flaws, like null), but it is in widespread use (unlike, say, Prolog or D).

Useful in lots of contexts

As an engineer, I find SQL databases often save me from writing lots of code to transform data. They save me from worrying about the best way to manage finite resources like memory. I write the question and the database (usually) figures out the most efficient algorithm to use, given the shape of the data right now and the resources available to process it. Like magic.

SQL helps me think about data in different ways, and lets me focus on the questions I want to ask of the data, independent of the best way to store and structure it.

As a manager, I often want to measure things, to know the answer to questions. SQL lets me ask lots of questions of computers directly without having to bother people. I can explore my ideas with a shorter feedback loop than if I could only pose questions to my team.
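For instance, here is the kind of question a manager might put to a database directly (a sketch only: the deploys table and its deployed_at column are hypothetical names for illustration, not from the original post):

```sql
-- Hypothetical schema: a deploys table with a deployed_at timestamp.
-- Question: how many deploys did we do each week?
SELECT date_trunc('week', deployed_at) AS week,
       count(*) AS deploy_count
FROM deploys
GROUP BY week
ORDER BY week;
```

The question is stated declaratively; the database decides how to group and count.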

SQL is a language for expressing our questions in a way that machines can help answer them; useful in so many contexts.

It would be grand if even more things spoke SQL. Imagine if you could ask questions in a shell instead of having to teach it how to transform data. 

Why do we avoid it?

SQL is terrific. So why is there so much effort expended in avoiding it? We learn ORM abstractions on top of it. We treat SQL databases as glorified buckets of data: chuck data in, pull data out.

Transforming data in application code gives a comforting amount of control over the process, but is often harder and slower than asking the right question of the database in the first place.

Do you see SQL as a language for storing and retrieving bits of data, or as a language for expressing questions?

Let go of control 

The database can often figure out the best way of answering the question better than you.

Let’s take an identical query with three different states of data.

Here are two simple relations, a and b, each with a single attribute and a single tuple. 

CREATE TABLE a(id INT);
CREATE TABLE b(id INT);
INSERT INTO a VALUES(1);
INSERT INTO b VALUES(1);
EXPLAIN analyze SELECT * FROM a NATURAL JOIN b;

“explain analyze” tells us how Postgres is going to answer our question: the operations it will take, and how expensive they are. We haven’t told it to use quicksort; it has elected to do so.

Looking at how the database is doing things is interesting, but let’s make it more interesting by changing the data. Let’s add in a boatload more values and re-run the same query.

INSERT INTO a SELECT * FROM generate_series(1,10000000);
EXPLAIN analyze SELECT * FROM a NATURAL JOIN b;

We’ve used generate_series to generate ten million tuples in relation ‘a’. Note the “Sort method” has changed to use disk, because the data set is now large compared to the resources the database has available. I haven’t had to tell it to do this; I just asked the same question, and it figured out that it needs a different method to answer it now that the data has changed.

But actually we’ve done the database a disservice here by running the query immediately after inserting our data; it hasn’t had a chance to catch up yet. Let’s give it one by running analyze on our relations to force an update to its knowledge of the shape of our data. 

analyze a;
analyze b;
EXPLAIN analyze SELECT * FROM a NATURAL JOIN b;

Now re-running the same query is a lot faster, and the approach has changed significantly: it’s now using a Hash Join, not a Merge Join. It has also introduced parallelism into the query execution plan. It’s an order of magnitude faster. Again, I haven’t had to tell the database to do this; it has figured out an easier way of answering the question now that it knows more about the data.

Asking Questions

Let’s look at some of the building blocks SQL gives us for expressing questions. The simplest building block we have is asking for literal values.

SELECT 'Eddard';
SELECT 'Catelyn';

A value without a name is not very useful, so let’s name them.

SELECT 'Eddard' AS forename;
SELECT 'Catelyn' AS forename;

What if we wanted to ask a question of multiple Starks: Eddard OR Catelyn OR Bran? That’s where UNION comes in. 

SELECT 'Eddard' AS forename 
UNION SELECT 'Catelyn' AS forename 
UNION SELECT 'Bran' AS forename;

We can also express things like someone leaving the family. With EXCEPT.

SELECT 'Eddard' AS forename 
UNION SELECT 'Catelyn' AS forename 
UNION SELECT 'Bran' AS forename 
EXCEPT SELECT 'Eddard' AS forename;

What about people joining the family? How can we see who’s in both families? That’s where INTERSECT comes in.

(
  SELECT 'Jamie' AS forename 
  UNION SELECT 'Cersei' AS forename 
  UNION SELECT 'Sansa' AS forename
) 
INTERSECT 
(
  SELECT 'Sansa' AS forename
);

It’s getting quite tedious having to type out every value in every query already. 

SQL uses the metaphor “table”: we have tables of data, which to me gives connotations of spreadsheets. Postgres uses the term “relation”, which I think is more helpful. Each “relation” is a collection of data related to each other: data for which a predicate is true. 

Let’s store the Starks together; they are related to each other. 

CREATE TABLE stark AS 
SELECT 'Sansa' AS forename  
UNION SELECT 'Eddard' AS forename  
UNION SELECT 'Catelyn' AS forename  
UNION SELECT 'Bran' AS forename ;
 
CREATE TABLE lannister AS 
SELECT 'Jamie' AS forename 
UNION SELECT 'Cersei' AS forename 
UNION SELECT 'Sansa' AS forename;

Now we have stored relations of related data that we can ask questions of. We’ve stored the facts for which “is a member of house Stark” and “is a member of house Lannister” are true. What if we want people who are in both houses? A relational AND. That’s where NATURAL JOIN comes in.

NATURAL JOIN is not quite the same as the set-based AND (INTERSECT, above). NATURAL JOIN will work even if the tuples in the two relations we are comparing have different arity.

Let’s illustrate this by creating a relation pet with two attributes.

CREATE TABLE pet AS 
SELECT 'Sansa' AS forename, 'Lady' AS pet
UNION SELECT 'Bran' AS forename, 'Summer' AS pet;
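With pet in place we can see that the join works across the differing arities; it matches on the shared forename attribute and carries the extra pet attribute along (a sketch — row order may vary):

```sql
SELECT * FROM stark NATURAL JOIN pet;
-- forename | pet
-- Sansa    | Lady
-- Bran     | Summer
```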

Now we have an AND, what about OR? We have a set-based OR above (UNION). I think the closest thing to a relational OR is a FULL OUTER JOIN. 

CREATE TABLE animal AS 
SELECT 'Lady' AS forename, 'Wolf' AS species 
UNION SELECT 'Summer' AS forename, 'Wolf' AS species;
SELECT * FROM stark FULL OUTER JOIN animal USING(forename);

Ok, so we can ask simple questions with ANDs and ORs. There are also equivalents of most of the relational algebra operations.

What if I want to invade King’s Landing?

What about more interesting questions? We can do those too. Let’s jump ahead a bit.

What if we’re wanting to plan an attack on King’s Landing and need to consider the routes we could take to get there. Starting from just some facts about the travel options between locations, let’s ask the database to figure out routes for us.

First the data. 

CREATE TABLE move (place text, method text, newplace text);
INSERT INTO move(place,method,newplace) VALUES
('Winterfell','Horse','Castle Black'),
('Winterfell','Horse','White Harbour'),
('Winterfell','Horse','Moat Cailin'),
('White Harbour','Ship','King''s Landing'),
('Moat Cailin','Horse','Crossroads Inn'),
('Crossroads Inn','Horse','King''s Landing');

Now let’s figure out a query that will let us plan routes between an origin and a destination.

We don’t need to store any intermediate data; we can ask the question all in one go. Here “route_planner” is a view (a saved question):

CREATE VIEW route_planner AS
WITH RECURSIVE route(place, newplace, method, length, path) AS (
	SELECT place, newplace, method, 1 AS length, place AS path FROM move --starting point
		UNION -- or 
	SELECT -- next step on journey
		route.place, 
		move.newplace, 
		move.method, 
		route.length + 1, -- extra step on the found route 
		path || '-[' || route.method || ']->' || move.place AS path -- describe the route
	FROM move 
	JOIN route ON route.newplace = move.place -- restrict to only reachable destinations from existing route
) 
SELECT 
	place AS origin, 
	newplace AS destination, 
	length, 
	path || '-[' || method || ']->' || newplace AS instructions 
FROM route;

I know this is a bit “rest of the owl” compared to what we were doing above. I hope it at least illustrates the extent of what is possible. (It’s based on the Prolog tutorial.) We have started from some facts about adjacent places and asked the database to figure out routes for us.

Let’s talk it through…

CREATE VIEW route_planner AS

This saves the relation that’s the result of the given query, under a name. We did this above with

CREATE TABLE lannister AS 
SELECT 'Jamie' AS forename 
UNION SELECT 'Cersei' AS forename 
UNION SELECT 'Sansa' AS forename;

While CREATE TABLE stores a static dataset, a view re-executes the query each time we interrogate it. It’s always fresh, even if the underlying facts change.
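A small sketch of the difference; the view name northerners is invented here for illustration:

```sql
CREATE VIEW northerners AS SELECT forename FROM stark;
INSERT INTO stark VALUES ('Arya');
SELECT * FROM northerners; -- the view re-runs its query, so Arya appears
```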

WITH RECURSIVE route(place, newplace, method, length, path) AS (...);

This creates a named portion of the query, called a “common table expression”. You could think of it like an extract-method refactoring. We’re giving part of the query a name to make it easier to understand. This also allows us to make it recursive, so we can build answers on top of partial answers, in order to build up our route.

SELECT place, newplace, method, 1 AS length, place AS path FROM move

This gives us all the possible starting points on our journeys. Every place we know we can make a move from. 

We can think of two steps of a journey as the first step OR the second step. So we represent this OR with a UNION.

JOIN route ON route.newplace = move.place

Once we’ve found our first and second steps, the third step is just the same—treating the second step as the starting point. “route” here is the partial journey so far, and we look for feasible connected steps. 

path || '-[' || route.method || ']->' || move.place AS path

Here we concatenate the instructions for the journey so far: take the path travelled, and append the next mode of transport and the next destination.

Finally, we select the completed journey from our complete route:

SELECT 
	place AS origin, 
	newplace AS destination, 
	length, 
	path || '-[' || method || ']->' || newplace AS instructions 
FROM route;

Then we can ask the question

SELECT instructions FROM route_planner 
WHERE origin = 'Winterfell' 
AND destination = 'King''s Landing';

and get the answer

                                 instructions                                   
-------------------------------------------------------------------------------
Winterfell-[Horse]->White Harbour-[Ship]->King's Landing
Winterfell-[Horse]->Moat Cailin-[Horse]->Crossroads Inn-[Horse]->King's Landing
(2 rows)

Thinking in Questions

Learning SQL well can be a worthwhile investment of time. It’s a language in widespread use, across many underlying technologies. 

Get the most out of it by shifting your thinking from “how can I get at my data so I can answer questions?” to “how can I express my question in this language?”. 

Let the database figure out how to best answer the question. It knows most about the data available and the resources at hand.
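In Postgres you can even ask to see the plan it chose, with EXPLAIN:

```sql
EXPLAIN SELECT instructions FROM route_planner 
WHERE origin = 'Winterfell' 
AND destination = 'King''s Landing';
```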

Posted by & filed under Leadership, Star Trek, XP.

Growing up, an influential television character for me was Jean-Luc Picard from Star Trek: The Next Generation.

Picard was a portrayal of a different sort of leader to most. Picard didn’t order people about. He didn’t assume he knew best.  He wasn’t seeking glory. In his words: “we work to better ourselves, and the rest of humanity”. The Enterprise was a vehicle for his crew to better themselves and society. What a brilliant metaphor for organisations to aspire to.

My current job title is “Director of engineering”. I kind of hate it. I don’t want to direct anybody. I represent them as part of my first team. People don’t report to me; I support people. My mission is to clarify so they can decide for themselves, and to help them build skills so they can succeed. 

Director is just a word, but words matter. Language matters.

Picard was an effective leader in part due to the language he used. Here’s a few lessons we can learn from the way he talked.

“Make it So!”

Picard is probably best known for saying “make it so!”

This catchphrase says so much about his leadership style. He doesn’t bark orders. He gives the crew the problems to solve. He listens to his crew and supports their ideas. His crew state their intent and he affirms (or not) their decisions. 

I think “Make it so” is even more powerful than the more common “very well” or “do it”, which are merely agreeing with the action being proposed.

“Make it so” is instead an agreement with the outcome being proposed. The crew are still free to adjust their course of action to achieve that outcome. They won’t necessarily have to come back for approval if they have to change their plan to achieve the same outcome. They understand their commander’s intent.  

And of course it’s asking for action: “wishing for a thing does not make it so”.

“Oh yes?” 

Picard’s most common phrase was not affirming decisions, but “oh yes?”. Because he was curious, he would actively listen to his crew. He sought first to understand.  

It’s fitting that he says this more than “make it so”. Not everything learned requires action. It’s easy to come out of retrospectives with a long list of actions. I’d rather we learned something and took no actions than took action without learning anything. 

“Suggestions?” 

In “Cause and Effect” (the one with Captain Frasier Crane) there’s an imminent crisis. The Enterprise is on a collision course with another ship and is unable to maneuver. What does Picard do? Resort to command and control? No; despite the urgency he asks for suggestions from the crew. Followed by a “make it so” agreement to act.

Asking for suggestions during a crisis requires enough humility to realise your crew or team is collectively smarter than you are. Picard trusts his team to come up with the best options. 

He is also willing to show vulnerability, even during a crisis. His ego doesn’t get in the way. 

In this episode, the crew did not automatically follow the suggestion of the most senior person in the room. The solution to the crisis is eventually found in the second suggestion, after they tried the first. They succeeded because they had diverged and discovered options first, before converging on a solution.

The crew were aware of the other options open to them, and when the first failed they acted to try another (successful) option. Crucially, they did not wait for their captain to approve it. There wasn’t time to let the captain make the decision, but they were free to try the other options because they’d been told to “make it so” not “do it”.

To achieve the best outcomes as a team, intentionally seek divergent ideas first before converging on a decision. Avoid converging too soon on an idea that sounds promising, when others in the group may have better ideas. If your first choice doesn’t work out you will be able to try the other ideas that came up. 

“Nicely done!”

Picard made sure his crew knew when he thought they had done well. Even when they violated orders! He was not one to blindly follow orders himself and he praised his crew when they violated orders for good reasons.

Data’s violation of orders is met not with a reprimand but a “nicely done!”; when Data questions it, Picard responds: “The claim ‘I was only following orders’ has been used to justify too many tragedies in our history.”

How different might the tech industry be if more people carefully considered whether doing what they’ve been told is the right or wrong thing to do? 

Posted by & filed under Java.

Java 16 brings Pattern Matching for instanceof. It’s a feature with exciting possibilities, though quite limited in its initial incarnation. 

Basic Pattern Matching

We can now do things like 

Object o = "hello";
if (o instanceof String s) {
    System.out.println(s.toUpperCase());
}

Note the variable “s”, which is then used without any casting needed.
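For contrast, here is a sketch of the pre-Java-16 equivalent, with the separate test and cast that pattern matching removes (the class and method names are invented for illustration):

```java
public class InstanceofBefore16 {
    public static String shout(Object o) {
        // Pre-Java-16 style: test, then cast, repeating the type name
        if (o instanceof String) {
            String s = (String) o;
            return s.toUpperCase();
        }
        return "";
    }

    public static void main(String[] args) {
        System.out.println(shout("hello")); // prints HELLO
    }
}
```

The pattern variable removes the repetition, and the risk of the cast drifting out of sync with the test.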

Deconstructing Optionals

It would be even nicer if we could deconstruct structured types at the same time. I think it’s coming in a future version, but I’m impatient. Let’s see how close we can get with the current tools.

One thing we can do is use a function on the left-hand side of this expression.

Let’s consider the example of dealing with unknown values in Java. There’s a fair amount of legacy to deal with. Sometimes you’ll get a value, sometimes you’ll get a null. Sometimes you’ll get an Optional<T> , sometimes you’ll get an Optional<Optional<T>> etc.

How can we make unknowns nicer to deal with?

We could create a little utility function that lets us convert all of these forms of unknown into either a value or not. Then match with an instanceof test.

Object unknown = Optional.of("Hello World");
assertEquals(
       "hello world",
       unwrap(unknown) instanceof String s
               ? s.toLowerCase()
               : "absent"
);

Thanks to instanceof pattern matching we can just use the string directly, without having to resort to passing method references, e.g. optional.map(String::toLowerCase). 

The unwrap utility itself uses pattern matching against Optional to recursively unwrap values from nested optionals. It also converts nulls and Optional.empty() to a non-instantiable type, ensuring they can never match the above pattern.

static Object unwrap(Object o) {
   if (o instanceof Optional<?> opt) {
       return opt.isPresent() ? unwrap(opt.get()) : None.None; // qualified so no static import is needed
   } else if (o != null) {
       return o;
   } else {
       return None.None;
   }
}
static class None {
   private None() {}
   public static final None None = new None();
}

Here are several more examples, if you’d like to explore further.

Deconstructing Records

What about more complex structures? Now that we have record types, wouldn’t it be great if we could deconstruct them to work with individual components more easily? I think until more powerful type patterns exist in the language we’ll have to diverge from the instanceof approach. 

I previously showed how we could do this for records we control, by having them implement an interface. What about records we do not control? How can we deconstruct those? 

This is about the closest I can get to what I’d hope would be possible as a first class citizen in the language in future.

record Name(String first, String last) {}
Object name = new Name("Benji", "Weber");
If.instance(name, (String first, String last) -> {
   System.out.println(first.toLowerCase() + last.toLowerCase()); // prints benjiweber
});

It takes a record (Name) and a lambda where the method parameters are of the same types as the component types in the record. It deconstructs the record component parts and passes them to the lambda to use (assuming the record really matches).

We could also use it as an expression to return a value, as long as we provide a fallback for the case when the pattern does not match.

Object zoo = new Zoo(new Duck("Quack"), new Dog("Woof"));
 
String result = withFallback("Fail").
    If.instance(zoo, (Duck duck, Dog dog) ->
       duck.quack() + dog.woof()
    ); // result is QuackWoof

So how does this work? 

If.instance is a static method which takes an Object of unknown type (we hope it will be a Record), and a lambda function that we want to pattern match against the provided object.

How can we use a lambda as a type pattern? We can use the technique from my lambda type references article—have the lambda type be a SerializableLambda which will allow us to use reflection to read the types of each parameter. 

static <T,U,V> void instance(Object o, MethodAwareTriConsumer<T,U,V> action) { 
 
}

So we start with something like the above, a method taking an object and a reflectable lambda function.

Next we can make use of pattern matching again to check if it’s a record.

static <T,U,V> void instance(Object o, MethodAwareTriConsumer<T,U,V> action) {
   if (o instanceof Record r) {
	// now we know it's a record
   }
}

Records allow reflection on their component parts. Let’s check whether we have enough component parts to match the pattern.

static <T,U,V> void instance(Object o, MethodAwareTriConsumer<T,U,V> action) {
   if (o instanceof Record r) {
       if (r.getClass().getRecordComponents().length < 3) {
           return;
       }
 
	 // at this point we have a record with enough components and can use them.
   }
}

Now we can invoke the passed action itself:

action.tryAccept((T) nthComponent(0, r), (U) nthComponent(1, r), (V) nthComponent(2, r));

Where nthComponent uses reflection to access the relevant component property of the record.

private static Object nthComponent(int n, Record r)  {
   try {
       return r.getClass().getRecordComponents()[n].getAccessor().invoke(r);
   } catch (Exception e) {
       throw new RuntimeException(e);
   }
}

tryAccept is a helper default method I’ve added in MethodAwareTriConsumer. It checks whether the types of the provided values match the method signature before trying to pass them, avoiding a ClassCastException.

interface MethodAwareTriConsumer<T,U,V> extends TriConsumer<T,U,V>, ParamTypeAware {
   default void tryAccept(T one, U two, V three) {
       if (acceptsTypes(one, two, three)) {
           accept(one, two, three);
       }
   }
   default boolean acceptsTypes(Object one, Object two, Object three) {
       return paramType(0).isAssignableFrom(one.getClass())
               && paramType(1).isAssignableFrom(two.getClass())
               && paramType(2).isAssignableFrom(three.getClass());
   }
 
   default Class<?> paramType(int n) {
       int actualParameters = method().getParameters().length; // captured final variables may be prepended
       int expectedParameters = 3;
       return method().getParameters()[(actualParameters - expectedParameters) + n].getType();
   }
}

Then put all this together and we can pattern match against Objects of unknown type, and deconstruct them if they’re records matching the provided lambda type-pattern.

record Colour(Integer r, Integer g, Integer b) {}
 
Object unknown = new Colour(5,6,7); // note the Object type
 
int result = withFallback(-1).
    If.instance(unknown, (Integer r, Integer g, Integer b) ->
       r + g + b
    );
 
assertEquals(18, result);

Degrading safely if the pattern does not match

Object unknown = new Name("benji", "weber");
 
int result = withFallback(-1).
    If.instance(unknown, (Integer r, Integer g, Integer b) ->
       r + g + b
    );
 
assertEquals(-1, result);

Code for the record deconstruction, and several more examples, is all in this test on GitHub. Hopefully all this will be made redundant by future enhancements to Java’s type patterns :)

Posted by & filed under ContinuousDelivery, XP.

“We got lucky”—it’s one of those phrases I listen out for during post incident or near-miss reviews. It’s an invitation to dig deeper; to understand what led to our luck. Was it pure happenstance? …or have we been doing things that increased or decreased our luck?   

There’s a saying of apparently disputed origin: “Luck is when preparation meets opportunity”. There will always be opportunity for things to go wrong in production. What does the observation “we got lucky” tell us about our preparation? 

How have we been decreasing our luck?

What unsafe behaviour have we been normalising? It can be the absence of things that increase safety. What could we start doing to increase our chances of repeating our luck in a similar incident? What will we make time for? 

“We were lucky that Amanda was online, she’s the only person who knows this system. It would have taken hours to diagnose without her” 

How can we improve collective understanding and ownership? 

Post incident reviews are a good opportunity for more of the team to understand, but we don’t need to wait for something to go wrong. Maybe we should dedicate a few hours a week to understanding one of our systems together? What about trying pair programming? Chaos engineering?

How can we make our systems easier to diagnose without relying on those who already have a good mental model of how they work? Without even relying on collaboration? How will we make time to make our systems observable? What would be the cost of “bad luck” here? Maybe we should invest some of it in tooling? 

If “we got lucky” implies that we’d be unhappy with the unlucky outcome, then what do we need to stop doing to make more time for things that can improve safety? 

How have we been increasing our luck? 

I love the extreme programming idea of looking for what’s working, and then turning up the dials.

Let’s seek to understand what preparation led to the lucky escape, and think how we can turn up the dials.

“Sam spotted the problem on our SLIs dashboard”

Are we measuring what matters on all of our services? Or was part of “we got lucky” that it happened to be one of the few services where we happen to be measuring the things that matter to our users? 

“Liz did a developer exchange with the SRE team last month and learned how this worked”

Should we make more time for such exchanges and other personal learning opportunities? 

“Emily remembered she was pairing with David last week and made a change in this area”

Do we often pair? What if we did more of it?

How frequently do we try our luck?

If you’re having enough production incidents to be able to evaluate your preparation, you’re probably either unlucky or unprepared ;)

If you have infrequent incidents you may be well prepared but it’s hard to tell. Chaos engineering experiments are a great way to test your preparation, and practice incident response in a less stressful context. It may seem like a huge leap from your current level of preparation to running automated chaos monkeys in production, but you don’t need to go straight there. 

Why not start with practice drills? You could have a game host who comes up with a failure scenario. You can work up to chaos in production. 

Dig deeper: what are the incentives behind your luck?

Is learning incentivised in your team, or is there pressure to get stuff shipped? 

What gets celebrated in your team? Shipping things? Heroics when production falls over? Or time spent thinking, learning, working together?

Service Level Objectives (SLOs) are often used to incentivise (enough) reliability work vs feature work…if the SLO is at threat we need to prioritise reliability. 

I like SLOs, but by the time the SLO is at risk it’s rather late. Adding incentives to counter incentives risks escalation and stress. 

What if instead we removed (or reduced) the existing incentives to rush and sacrifice safety, rather than trying to counter them with extra incentives for safety? 🤔

Posted by & filed under Java.

Some time ago I wrote a post about creating an embedded dsl for Html in Java. Sadly, it was based on an abuse of lambda name reflection that was later removed from Java.

I thought I should do a followup because a lot of people still visit the old article. While it’s no longer possible to use lambda parameter names in this way, we can still get fairly close. 

The following approach is slightly less concise. That said, it does have some benefits over the original:

a) You no longer need to have parameter name reflection enabled at compile time.

b) The compiler can check your attribute names are valid, and you can autocomplete them.

What does it look like? 

html(
   head(
       title("Hello Html World"),
       meta($ -> $.charset = "utf-8"),
       link($->{ $.rel=stylesheet; $.type=css; $.href="/my.css"; }),
       script($->{ $.type= javascript; $.src="/some.js"; })
   ),
   body(
       div($-> $.cssClass = "article",
           a($-> $.href="https://benjiweber.com/",
               span($->$.cssClass="label", "Click Here"),
               img($->{$.src="/htmldsl2.png"; $.width=px(25); $.height=px(25); })
           ),
           p(span("some text"), div("block"))
       )
   )
)

This generates the following html

<html>
 <head>
   <title>Hello Html World</title>
   <meta charset="utf-8" />
   <link rel="stylesheet" type="css" href="/my.css" />
   <script type="text/javascript" src="/some.js" ></script>
 </head>
 <body>
   <div class="article">
     <a href="https://benjiweber.com/">
       <span class="label">Click Here</span>
       <img src="/htmldsl2.png" width="25" height="25" />
     </a>
     <p>
       <span>some text</span>
       <div>block</div>
     </p>
   </div>
 </body>
</html>

You get nice autocompletion, and feedback if you specify inappropriate values.

You’ll also get a helping hand from the types to not put tags in inappropriate places.

Generating Code

As it’s Java you can easily mix other code to generate markup dynamically:

assertEquals(
       """
       <html>
         <head>
           <meta charset="utf-8" />
         </head>
         <body>
           <p>Paragraph one</p>
           <p>Paragraph two</p>
           <p>Paragraph three</p>
         </body>
       </html>
       """.trim(),
       html(
           head(
               meta($ -> $.charset = "utf-8")
           ),
           body(
               Stream.of("one","two","three")
                   .map(number -> "Paragraph " + number)
                   .map(content -> p(content))
           )
       ).formatted()
);

And the code can help you avoid injection attacks by escaping literal values: 

assertEquals(
       """
       <html>
         <head>
           <meta charset="utf-8" />
         </head>
         <body>
           <p>&lt;script src="attack.js"&gt;&lt;/script&gt;</p>
         </body>
       </html>
       """.trim(),
       html(
           head(
               meta($-> $.charset = "utf-8")
           ),
           body(
               p("<script src=\"attack.js\"></script>")
           )
       ).formatted()
);

How does it work?

There’s only one “trick” here that’s particularly useful for DSLs. Using the Parameter Objects pattern from my lambda type references post. 

The lambdas used for specifying the tag attributes are “aware” of their own types. And capable of instantiating the configuration they specify.

When we call 

meta($ -> $.charset="utf-8")

We make a call to 

default Meta meta(Parameters<Meta> params, Tag... children) {}

The lambda specifying the attribute config is structurally equivalent to the Parameters<Meta> type. This provides a get() function that instantiates an instance of Meta, and then passes the new instance to the lambda function to apply the config.

public interface Parameters<T> extends NewableConsumer<T> {
   default T get() {
       T t = newInstance();
       accept(t);
       return t;
   }
}

Under the hood the newInstance() method uses reflection to examine the SerializedLambda contents and find the type parameter (in this case “Meta”) before instantiating it.
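The post links out for the details, but the core of the trick can be sketched briefly. This is a simplified illustration rather than the library’s actual code; the TypeAware name is invented here:

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Consumer;

public class LambdaTypeSketch {
    // A serializable lambda's class gains a synthetic writeReplace method
    // returning a SerializedLambda that describes the lambda - including
    // its instantiated method type, which names the type parameter.
    public interface TypeAware<T> extends Consumer<T>, Serializable {
        default SerializedLambda serialized() {
            try {
                Method writeReplace = getClass().getDeclaredMethod("writeReplace");
                writeReplace.setAccessible(true);
                return (SerializedLambda) writeReplace.invoke(this);
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
        }
    }

    public static void main(String[] args) {
        TypeAware<StringBuilder> lambda = sb -> sb.append("hi");
        // prints a descriptor embedding the type parameter,
        // e.g. (Ljava/lang/StringBuilder;)V
        System.out.println(lambda.serialized().getInstantiatedMethodType());
    }
}
```

From that method-type descriptor one can load the parameter’s class and instantiate it, which is the idea newInstance() builds on.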

You can follow the code or see the previous post which explains it in a bit more detail.

Add Mixins

It’s helpful to use interfaces as mixins to avoid having to have one enormous class with all the builder definitions. 

public interface HtmlDsl extends
   Html.Dsl,
   Head.Dsl,
   Title.Dsl,
   Meta.Dsl,
   Link.Dsl,
   Script.Dsl,
   Body.Dsl,
   Div.Dsl,
   Span.Dsl,
   A.Dsl,
   P.Dsl,
   Img.Dsl {}

Each tag definition then contains its own builder methods. We compose them together into a single HtmlDsl interface for convenience. This saves having to import hundreds of different methods. By implementing the Dsl interface a consumer gets access to all the builder methods.

Show me the code

It’s all on GitHub. I’d start from the test examples. Bear in mind that it’s merely a port of the old proof of concept to a slightly different approach. I hope it helps illustrate the technique. It’s in no way attempting to be a complete implementation.

This approach can also be useful as an alternative to the builder pattern for passing a specification or configuration to a method. There’s another example on the type references article.

What else could you use this technique for?

Posted by & filed under XP.

“How was your day?” “Ugh, I spent all day in meetings, didn’t get any work done!” 

How often have you heard this exchange?

It makes me sad because someone’s day has not been joyful; work can be fun. 

I love a whinge as much as the next Brit; maybe if we said what we mean rather than using the catch-all “meetings” we could make work joyful.

Meetings are work

Meetings are work. It’s a rare job where you can get something done alone without collaborating with anyone else. There are some organisations that thrive with purely async communication. Regardless, if you’re having meetings let’s recognise that they are work. 

What was it about your meeting-full day that made you sad? It doesn’t have to be that way.

Working together can be fun

I’ve seen teams after a day of ensemble (mob) programming. Exhausted, yet elated at the amount they’ve been able to achieve together; at the breakthroughs they’ve made. Yet a group of people, working together, on the same thing, sounds an awful lot like a meeting. Aren’t those bad‽

Teams who make time together for a full day of planning, who embrace the opportunity to envision the future together, can sometimes come away filled with hope. Hope that better things are within their grasp than they previously believed possible.

Yet the more common experience of meetings seems synonymous with “waste of time” or “distraction from real work”. Why is this? Why weren’t they useful?

One team’s standup can be an energising way to kick off the day. Hearing interesting things we collectively learned since yesterday. Deciding together what that means for today’s plan: who will work with whom on what? 

For another team it may be a depressing round of status updates that shames people who feel bad that they’ve not achieved as much as they’d hoped.

How do we make meetings better?

A first step is talking about what did or didn’t work, rather than accepting they have to be this way. Because there’s no “one weird trick” that will make your meetings magical. You’ll need to find what works for your team.

Why should you care? You probably prefer fun work. If you could make your meetings a little more fun you might enjoy your work a lot more.

Meetings beget meetings. Running out of time. Follow-ups. Clarifying things that were confusing from the first meeting… Ineffective meetings breed. Tolerating bad meetings leads to more misery.

Saying what we mean

Here are some things we could say that are more specific.

We didn’t need a meeting for that

Was it purely a broadcast of information with no interactivity? Could we have handled it asynchronously via email/irc/slack etc?

I didn’t need to be there

No new information for you? Nothing you could contribute? If you’re not adding value, how about applying the law of mobility and leaving / opting out in future? Or feed back to the organiser. Maybe they’re seeing value you’re adding that you’re oblivious to.

I don’t know what that meeting was for

How about we clarify the goal for next time? Make it a ground rule for future meetings. If it’s worth everyone making time for, it’s worth stating the purpose clearly so people can prepare & participate.

It wasn’t productive

Was the meeting to make a decision and we came out without either deciding anything or learning anything? 

Was the meeting to make progress towards a shared goal, and it feels like we talked for an hour and achieved nothing?

Perhaps we’d benefit from a facilitator next time.

It was boring

Can we try mixing up the format? Could you rotate the facilitator to get different styles? How can you engage everyone? Or does the boredom indicate that the topic is not worth a meeting?

If it is important but still boring how do we make it engaging? It’s telling that “workshop” “retrospective” “hackathon” and other more specific names don’t have the same connotation as the catch-all “meetings”. Just giving the activity a name shows that someone has thought about an appropriate activity that will engage the participants to achieve a goal.

I needed more time to think

Could we share proposals for consideration beforehand? Suggest background reading to enable people to come prepared? Allocate time for reading and thinking in the meeting?

It was too long

We could have achieved the same outcome in 5 minutes but ended up talking in circles for an hour. 

I didn’t hear from ____

Did we exclude certain people from the conversation, intentionally or unintentionally? What efforts can you make to create a space for everyone to participate?

Not enough focus time

Do you need to defragment your calendar? Cramming in activities that need deep focus in gaps between meetings is not a recipe for success. Do you need to ask your manager for help rescheduling meetings that you can’t control? Should you be going to them all or can you trust someone else to represent you?

Too many context switches

Even if you don’t need focus time, context switching from one meeting to another can be exhausting. Are you or your team actively involved in too many different things? Can you say no to more? Can you work with others and reschedule meetings to give each day a focus?

It wasn’t as important as other work

Maybe you’re wasting lots of time planning things you might never get to and would be better off focusing on what’s important right now? Is your whole team attending something that you could send a representative to? Perhaps reading the minutes will be enough. 

Highlight the value

We decided on a database for the next feature 

We learned how the production incident occurred

We heard the difficulty the customer is having with…

We made a plan for the day 

We shared how we halved our production lead time

We realised that our solution won’t work

We agreed some coding principles

Tackling your meetings

What’s your least valuable meeting? Which brings you the least joy? 

What’s your most valuable meeting? Which brings you the most joy? 

What’s the difference between these? What made them good or bad?

Turn up the good; vote with your feet on the bad.

Meandering path towards value

Posted by & filed under ContinuousDelivery, XP.

We design systems around the size of delays that are expected. You may have seen the popular table “latency numbers every programmer should know” which lists some delays that are significant in technology systems we build.

Teams are systems too. Delays in operations that teams need to perform regularly are significant to their effectiveness. We should know what they are.

SSH to a server on the other side of the world and you will feel the frustration: the delay in the feedback loop from keypress to that character appearing on the screen.

Here are some important feedback loops for a team, with feasible delays. I’d consider these delays tolerable by a team doing their best work (in contexts I’ve worked in). Some teams can do better, lots do worse.

- Run unit tests for the code you’re working on: < 100 milliseconds
- Run all unit tests in the codebase: < 20 seconds
- Run integration tests: < 2 minutes
- From pushing a commit to live in production: < 5 minutes
- Breakage to paging on-call: per SLO/error budget
- Team feedback: < 2 hours
- Customer feedback: < 1 week
- Commercial bet feedback: < 1 quarter

What are the equivalent feedback mechanisms for your team? How long do they take? How do they influence your work?
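One lightweight way to answer these questions is to write the budgets down and compare them against what you actually measure. A minimal sketch, with made-up operation names and numbers standing in for your team’s real ones:

```python
# Hypothetical delay budgets (seconds), in the spirit of the table above.
BUDGETS = {
    "unit tests (focused)": 0.1,
    "unit tests (all)": 20,
    "integration tests": 120,
    "push to production": 300,
}

# Illustrative measurements for a team; yours would come from real timings.
measured = {
    "unit tests (focused)": 0.08,
    "unit tests (all)": 35,
    "integration tests": 90,
    "push to production": 600,
}

def over_budget(measured, budgets):
    """Return the operations whose measured delay exceeds its budget,
    mapped to a (measured, budget) pair."""
    return {op: (t, budgets[op])
            for op, t in measured.items() if t > budgets[op]}

print(over_budget(measured, BUDGETS))
```

Running a check like this regularly makes it obvious which feedback loops are drifting out of budget, rather than leaving it to gut feel.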

Feedback Delays Matter

They represent how quickly we can learn. Keeping delays as low as those in the table above means we can get feedback as fast as we make any meaningful progress. Our tools and systems do not hold us back.

Feedback can be synchronous if you keep your loops this fast. You can wait for feedback and immediately use it to inform your next steps. This helps avoid the costs of context switching.

With fast feedback loops we run tests and fix broken behaviour. We integrate our changes and update our design to incorporate a colleague’s refactoring.

Fast is deploying to production and immediately addressing the performance degradation we observe. It’s rolling out a feature to 1% of users and immediately addressing errors some of them see.

With slow feedback loops we run tests and respond to some emails while they run, investigate another bug, come back and view the test results later. At this point we struggle to build a mental model to understand the errors. Eventually we’ll fix them and then spend the rest of the afternoon trying to resolve conflicts with a branch containing a week’s changes that a teammate just merged.

With slow deploys you might have to schedule a change to production, risking being surprised by errors reported asynchronously later that week, when the change has finally gone live. Meanwhile users have been experiencing problems for hours.

Losing Twice

As feedback delays increase, we lose twice:

a) We waste more time waiting for these operations (or worse, incur context-switching costs as we fill the waiting time)

b) We are incentivised to seek feedback less often, since it is costly to do so. Thereby wasting more time & effort going in the wrong direction.

I picture this as a meandering path towards the most value. Value often isn’t where we thought it was at the start, and the route to it is rarely what we envisioned.

We waste time waiting for feedback. We waste time by following our circuitous route. Feedback opportunities can bring us closer to the ideal line.

When feedback is slow it’s like setting piles of money on fire. Investment in reducing feedback delays often pays off surprisingly quickly, even if it means pausing forward progress while you attend to it.

This pattern of going in slightly the wrong direction then correcting repeats at various granularities of change. From TDD, to doing (not having) continuous integration. From continuous deployment to testing in production. From customers in the team, to team visibility of financial results.

Variable delays are even worse

In recent times you may have experienced the challenge of having conversations over video links with significant delays. This is even harder when the delay is variable. It’s hard to avoid talking over each other. 

Similarly, it’s pretty bad if we know it’s going to take all day to deploy a change to production. But it’s far worse if we think we can do it in 10 minutes and it actually ends up taking all day. Flaky deployment checks, environment problems, and change conflicts create unpredictable delays.

It’s hard to get anything done when we don’t know what to expect. Like trying to hold a video conversation with someone on a train that’s passing through the occasional tunnel. 

Measure what Matters 

The time it takes for key types of feedback can be a useful leading indicator of the impact a team can have over the longer term. If delays in your team are important to you, why not measure them and see if they’re getting better or worse over time? This doesn’t have to be heavyweight.

How about adding a timer to your deploy process and graphing the time it takes from start to production? If you don’t have enough datapoints to plot deploy delay over time, that probably tells you something ;)

Or what about a physical or virtual wall for waste? Add to a tally, or add a card, every time you have wasted five minutes waiting. Make it visible. How big did the tally get each week?
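A virtual version of that wall can be a few lines of code. A minimal sketch, with hypothetical helper names and made-up dates: log a mark each time roughly five minutes is lost waiting, then tally the marks per ISO week to see the trend:

```python
from collections import Counter
from datetime import date

# Hypothetical virtual "waste wall": one entry per ~5 minutes lost waiting.
waste_log = []

def log_wait(day):
    waste_log.append(day)

def weekly_tally():
    """Count waste entries per ISO week, e.g. '2021-W09'."""
    return Counter(f"{d.isocalendar()[0]}-W{d.isocalendar()[1]:02d}"
                   for d in waste_log)

# Made-up example: three waits in one week, one the next.
for d in [date(2021, 3, 1), date(2021, 3, 2), date(2021, 3, 3),
          date(2021, 3, 8)]:
    log_wait(d)
print(weekly_tally())
```

Whether the wall is sticky notes or a script, the point is the same: make the waste visible week by week.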

What do the measurements tell you? If you stopped all feature work for a week and instead halved your lead time to production, how soon would it pay off?

Would you hit your quarterly goals more easily if you stopped sprinting and first removed the concrete blocks strapped to your feet?

What’s your experience?

Every team has a different context. Different sorts of feedback loops will be more or less important to different teams. What’s important enough for your team to measure? What’s more important than I’ve listed here?

What is difficult to keep fast? What gets in the way? What is so slow in your process that synchronous feedback seems like an unattainable dream?