Curated Content October 2023

A few pieces of content I thought were worthwhile in the month of October.



What's an Operational Definition Anyway?


A great article on Operational Definitions, again from Cedric Chin of Commoncog. These matter because, without one, the data you're collecting is likely to be meaningless, or impossible to compare coherently over time.

The most important takeaway is the three things that make up an operational definition:

A criterion. The thing you want to measure.
A test procedure. What process are you going to use to measure the thing?
A decision rule. How will you decide if the thing you’re looking at should be included (or excluded) from the count?
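The three components translate naturally into code. Here's a minimal sketch, using an entirely hypothetical metric ("late shipments") and a made-up one-day cutoff, just to show how criterion, test procedure, and decision rule each get pinned down explicitly:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Shipment:
    promised: date
    delivered: date

# Criterion: the thing we want to measure - "late shipments".
# Test procedure: compare the delivered date against the promised date.
# Decision rule: a shipment counts as late only if it arrived more than
# one full day after the promised date (a cutoff fixed in advance).
def is_late(shipment: Shipment) -> bool:
    return (shipment.delivered - shipment.promised).days > 1

shipments = [
    Shipment(date(2023, 10, 1), date(2023, 10, 1)),  # on time
    Shipment(date(2023, 10, 1), date(2023, 10, 2)),  # 1 day: not counted
    Shipment(date(2023, 10, 1), date(2023, 10, 5)),  # 4 days: late
]
late_count = sum(is_late(s) for s in shipments)
print(late_count)  # 1
```

The point of writing it down this explicitly is that anyone re-running the count next quarter applies the same cutoff, so the numbers stay comparable over time.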

He gives a great example of a metric that lacked an established operational definition, and how an organization changed what was being measured in order to game it.

McKinsey Developer Productivity Review


Probably the single best article to read as a follow-up to the McKinsey Developer Productivity report.

Dan North gives McKinsey's report a fair but thorough review, noting many areas where it comes up short, as well as a number of places where he cites existing research that McKinsey ought to have built on.

You'll forget most of what you learn. What should you do about that?


This one is off the beaten path, even for me, but I thought it was a really thought-provoking piece about learning.

The author argues that the most important thing you develop with regard to learning is your attitude towards it, because you'll often forget the details of what you learn, whether through the education system, work, or self-education.

"The students who ultimately succeed in learning R are not the ones who force themselves to memorize functions or do a bunch of coding drills. They're the ones who accept they will feel stupid and that most of the rules will at first seem totally arbitrary, and who understand that they will gain great power if they just keep going." (Emphasis mine.)

This applies to programming, but I've found it applies just as well to getting yourself familiar with new domains.

Don't Be Beaten By Agile Cadences. Slow Down & RELAX


LinkedIn is an unorthodox place to find great software content, but Jason Gorman produced a great article there, one I'll keep around for new folks I bring into my organization.

Individuals who aren't used to working in small slices, with code continuously merged into the mainline and deployed multiple times per day, can struggle to adapt to a new way of writing code. This doesn't mean they're bad engineers, just that they have to learn a new way of working.

The article is a reminder to be patient with yourself while you adapt; it takes time to learn a new way of working.


Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions - Richard Harris

While well outside my personal expertise, I thought Rigor Mortis was a really interesting read on how many missteps we make, and how many perverse incentives we have set up, in biomedical research.

It also has some applicability to how we ought to think about empirical research in software development, particularly with regard to replicability.

Conf Talks

No conf talks this month.


Podcasts

No podcasts this month.


Scaling up software depends on how much work can genuinely be done in parallel, and that depends on - funnily enough - dependencies between processes. If Process A produces information Process B needs, then B will have to wait for A before it can get on with its job, or risk a conflict. I see a similar effect when developers are working on highly interdependent code, and better separation of concerns tends to lead to less blocking and fewer merge conflicts.

In situations where dependencies make parallelism unworkable, and we're effectively limited to doing the work in series, we can speed things up by using a faster processor. In software development, that means applying more brain power and experience to a problem. You may know it as "ensemble programming". Note that the fastest way for a team to complete my Team Mars Rover exercise - because it's a very interconnected problem - is to take turns completing each feature.

An interesting post from Jason Gorman about when different programming approaches, like mob/ensemble programming, can make sense.
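The dependency effect he describes can be sketched in a few lines. This is my own illustrative example, not from the post: Process C is independent of A, so it can be submitted to a thread pool alongside A, but B consumes A's output and so must wait for A to finish before it can even start:

```python
from concurrent.futures import ThreadPoolExecutor

def process_a():
    # Produces information that process_b depends on.
    return 10

def process_b(a_result):
    # Cannot start until process_a has finished.
    return a_result * 2

def process_c():
    # Independent of A and B, so it can run in parallel with them.
    return 5

with ThreadPoolExecutor() as pool:
    # A and C have no dependency between them: submit both at once.
    future_a = pool.submit(process_a)
    future_c = pool.submit(process_c)
    # B depends on A's output, so we must block on A before starting B.
    future_b = pool.submit(process_b, future_a.result())
    total = future_b.result() + future_c.result()

print(total)  # 25
```

However many workers the pool has, the A-then-B chain sets a floor on elapsed time, which is exactly why the fully serial case falls back on "a faster processor", i.e. more brains on the same problem.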

As I've long argued, building faster than you can learn is waste. It's kind of like Twyman's Law. If most upfront requirements are wrong, then unless it is in service of learning, most of the time spent following the plan is also time spent off course.

I hadn't heard of Twyman's Law before, and I didn't feel the comparison connected to the rest of the post as well as I would have liked, but the post really resonated with me as a statement of a key constraint in product development.