The Economics of DevOps Part II – Ronald Coase and Transaction costs

In an earlier blog post we talked about the classical economic theories of Adam Smith and how they apply to DevOps (particularly the role of “generalists” and “specialists” within DevOps teams).

In this post we’re going to talk about Ronald Coase’s work on transaction costs and “The Nature of the Firm”, and why it is relevant to how and why the DevOps approach works.

“In order to carry out a market transaction it is necessary to discover who it is that one wishes to deal with, to conduct negotiations leading up to a bargain, to draw up the contract, to undertake the inspection needed to make sure that the terms of the contract are being observed, and so on.” – Ronald Coase

One of the key tenets of DevOps is to try and remove the “silos” that exist in “traditional” IT departments, but in order to remove these silos we need to understand how and why they exist in the first place.

In 1937 Coase published “The Nature of the Firm”, which examined why certain activities take place within the boundaries of an organisation rather than being provided by external suppliers (and also why certain activities need to be undertaken by governments in order for the market to work effectively).

Where do we “draw the line” between what’s economically viable to do “in-house” as opposed to “subcontracting out”? In IT terms this can be seen as analogous to the “buy vs build” decision when it comes to COTS software.

Coase postulated a set of “transaction costs” that influence this decision, and where you draw the “boundary of the firm” depends on your desire to minimise these costs for a particular service or product (and hence to maximise YOUR profits from the value-added activity you build on top of these base costs). They are commonly grouped into three categories:

  • Search and information costs are costs such as those incurred in determining that the required good is available on the market, which has the lowest price, etc.
  • Bargaining costs are the costs required to come to an acceptable agreement with the other party to the transaction, drawing up an appropriate contract and so on.
  • Policing and enforcement costs are the costs of making sure the other party sticks to the terms of the contract, and taking appropriate action (often through the legal system) if this turns out not to be the case.

Source – Wikipedia

If we examine many of the current trends in both business and IT, it’s clear that these costs are (consciously or unconsciously) the target of many of the new products and services we see in the market.

  • Google’s entire business model is based on LOWERING search and information costs (even more so with services like Google Shopper, which enables you to very quickly find information on products and prices). Lowering the “information asymmetry” between buyer and seller generally optimises the price.
  • eBay and Etsy also directly attack these costs – they collect items into a market (lowering search and information costs), they set very clear policies (“contracts”) regarding prices, they offer policing and enforcement mechanisms in the case of disputes, and they generally guarantee payment via the use of escrow systems.
  • Virtual workforce websites do the same for services – they create an efficient market where you can find the skills you require within a framework that lowers your risk and minimises these transaction costs.

So how does this apply to silos within an IT organisation, and to DevOps transformation?

Well, if we shrink our definition of the “firm” down to the IT department we can see how silos can form.

For example, if you want some DBA expertise, it lowers search costs if you can just say “go see the DBA team over there” (the silo). You can define clear processes for requesting and interacting with that silo (lowering bargaining costs), and all the DBAs report to a common line manager, so when it comes to “policing and enforcement” you have a clear route to complain. It also makes life easier for the DBAs, who lower their internal “search costs” when it comes to sharing knowledge, information, workload and so on.

So what is DevOps trying to change?

Well, DevOps postulates that creating silos based on expertise is ultimately inefficient when seen from the perspective of building a customer product. It might be efficient for the DBAs, but these silos don’t scale very well.

In order to build a “product” I need DBAs, systems administrators, business analysts, developers (of various flavours), testers and so on. If all of these exist in silos, surrounded by processes, my transaction costs increase to the point where it makes sense for me to redraw my “boundary of the firm” so that the product team has those skills “in-house”.

This is the essence of DevOps – we re-draw the boundaries to create cross-functional teams to reduce transaction costs within that (product) team.
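To make the argument concrete, here is a toy cost model (my illustrative numbers, not figures from Coase or from this post) comparing the friction of reaching a skill through a silo versus having it embedded in a cross-functional team, using Coase’s three cost categories:

```python
# Toy model with hypothetical "hours of friction" per request for help
# (e.g. asking a DBA for a schema change), split across Coase's three
# transaction-cost categories. The numbers are purely illustrative.
SILOED = {"search": 2.0, "bargaining": 4.0, "enforcement": 3.0}
EMBEDDED = {"search": 0.1, "bargaining": 0.5, "enforcement": 0.2}

def total_cost(costs, requests_per_sprint):
    """Total friction hours one product team pays over a sprint."""
    return sum(costs.values()) * requests_per_sprint

silo_cost = total_cost(SILOED, requests_per_sprint=10)
team_cost = total_cost(EMBEDDED, requests_per_sprint=10)

print(f"Siloed:   {silo_cost:.1f} friction hours per sprint")   # 90.0
print(f"Embedded: {team_cost:.1f} friction hours per sprint")   # 8.0
```

The absolute numbers are invented; the point is the structure – every request across a silo boundary pays all three costs, so the gap compounds with the number of interactions per sprint.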

Search and information costs tend towards zero, because the expertise I need isn’t held by some remote person in another silo – it’s across the desk from me, interacting with me every day via our Scrum or Kanban processes.

Bargaining costs become personal, not abstract – as a team we are committing to our shared goals without the need for reams of formal processes.

Policing and enforcement likewise become social and transparent – if someone isn’t contributing or delivering on their promises, this rapidly becomes apparent to the rest of the DevOps team, and social pressure comes to bear to align their behaviour with the group’s agreed goals.

Similarly, from the perspective “external” to our “DevOps firm”, if we have a problem with Product X it’s clear where the responsibility lies – with the DevOps team that owns that product – not diffused across multiple silos (Dev, Ops, etc.) who can “point the finger” at each other as the “root cause”. For many organisations the time wasted defining and executing “problem management” processes to solve this “enforcement cost problem” – just to find out who is responsible for fixing a problem – is immense.

Viewing DevOps from an economics perspective gives us a framework for understanding why the empirical benefits we see in organisations that have adopted DevOps principles arise, which ultimately gives us clearer insight into how and why to implement DevOps.



Here are the slides from James’s epic demo at #VelocityConf…



Lessons Learnt:

(1) Don’t share the URL to your new tool with a room of 800 geeks BEFORE your demo

(2) Make sure you have “here’s one I created earlier” web pages open in another Chrome tab

(3) Have a larger API key so you don’t run out of test credits…

DevOps and the Knowledge Horizon

One of the things I love about SF novels is that they are always full of fascinating ideas that sometimes spark resonances in my own thinking (in this case, on DevOps).

Alastair Reynolds’ new novel “On the Steel Breeze” includes this passage discussing the rotation of Hyperion (one of Saturn’s moons):

“There’s a value in chaos theory, a number called the Lyapunov exponent, which tells you how to predict a chaotic system’s boundary – its knowledge horizon, if you like. Hyperion’s Lyapunov exponent is just forty days – we can’t predict this moon’s motion beyond the next forty days. That’s the maximum limit of our foreknowledge! If my life depended on this moon’s motion, I would still not be able to say a word about its state beyond forty days.”

My first thought when I read this was “Yep, sounds like every IT project I’ve ever worked on… pretty much turns to chaos after about 40 days…”.

Now, the mathematics of the Lyapunov exponent is way beyond my long-forgotten basics of high school calculus, so I can’t comment on whether the author’s usage of the exponent is valid, but the concept – that there is a limit on our foreknowledge beyond some sort of chaos event horizon – really stuck with me.
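For the curious, the idea can be made tangible without heavy mathematics. Here is a small sketch (a textbook example, nothing to do with the novel or with IT projects) that estimates the Lyapunov exponent of the logistic map, a classic chaotic system, by averaging the log of its local stretching rate along a trajectory:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of log|f'(x)| along a trajectory:
    positive means nearby starting points diverge exponentially, i.e. chaos.
    r, x0 and n are arbitrary illustrative choices.
    """
    x = x0
    for _ in range(burn_in):                      # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # log|f'(x)| at this point
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(f"Estimated Lyapunov exponent: {lam:.3f}")  # ~ln(2) ≈ 0.693 for r=4
```

A positive exponent like this is exactly the “limit of our foreknowledge” in the quote: the larger it is, the faster a tiny uncertainty in the starting state blows up into complete unpredictability.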

So what’s this all got to do with DevOps?

Well, “classical” project management takes a rather linear, deterministic view of the world – that we can accurately define a set of initial conditions that are complete (the mythical “requirements spec”) and that we can define a series of steps (the “waterfall process”) that will result in us achieving a desired end state (a working system that matches the specification AND the customer’s intent).

But what happens if software development is “non-linear”, chaotic and subject to “the butterfly effect”?

If “small changes to the initial conditions” (i.e. the spec is a tiny bit wrong or, more likely, the customer changes their mind) can have a major impact on the end state (not to mention any of the other factors that might turn a project plan to “chaos” in the common usage of the term), does any given software project have a “Lyapunov exponent” that limits our ability to predict the future?

I’d argue that it might… and this is intuitively what DevOps seeks to address in the “Second Way of DevOps”:

“The Second Way is about creating the right to left feedback loops. The goal of almost any process improvement initiative is to shorten and amplify feedback loops so necessary corrections can be continually made.” – Gene Kim, “The Three Ways”

By shortening the feedback cycle we are, consciously or unconsciously, seeking to bring our project cycles back within our “Lyapunov exponent”. If we can carry out our planning cycles (Sprints in an Agile development world) within that “limit of our foreknowledge”, we create a system where we can correct and tweak the “initial conditions” of our next cycle to keep our project “on track”.

What I also find interesting is the idea that different projects have different “Lyapunov Exponents” depending on their degree of “non-linearity” and sensitivity to initial conditions etc.

Perhaps this is why different projects can have success (or failure) with different sprint durations.

Just because one-, two- or four-week sprints work for Project X doesn’t mean that’s the best duration for Project Y, because Y has a different exponent. Part of our feedback cycle also involves finding the optimal value for our “knowledge horizon” – not so far out that we descend into “chaos”, but not so short that we spend more time re-planning than we need to.

Sprint Duration < “Lyapunov Exponent” = DevOps Success?
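As a purely hypothetical back-of-envelope sketch (my numbers, not a real estimation method), here is what that heuristic looks like if we pretend planning error grows exponentially at some rate per day:

```python
import math

def knowledge_horizon(lam, initial_error, tolerance):
    """Days until error(t) = initial_error * exp(lam * t) reaches tolerance.

    Solves tolerance = initial_error * exp(lam * t) for t. All inputs are
    hypothetical: lam is an assumed per-day divergence rate, not something
    we can actually measure for a software project.
    """
    return math.log(tolerance / initial_error) / lam

# Say estimates are 1% off at the start of a cycle, the plan counts as
# "chaos" once it is 50% off, and errors roughly double every 5 days:
lam = math.log(2) / 5
horizon = knowledge_horizon(lam, initial_error=0.01, tolerance=0.5)
print(f"Knowledge horizon ≈ {horizon:.0f} days")  # ≈ 28 days
```

With these invented inputs the horizon lands just under a four-week sprint – which is the shape of the claim above: keep the planning cycle shorter than the horizon and each sprint stays inside the region where prediction is still meaningful.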

Sadly, I don’t think there is any theoretical way to determine a project’s (or organisation’s) hypothetical “Lyapunov exponent” when it comes to project planning… it has to be found empirically, by trial and error. But I’d be curious to know whether, even anecdotally, for a given organisation or type of project the likelihood of a “successful project” falls off a cliff beyond some planning horizon. If there were some way to determine the “Lyapunov exponent” for a project and optimise an organisation’s feedback cycle (or sprint duration) below that limit, I think that would be of significant benefit.


p.s. yes, I know I’ve played fairly fast and loose with the mathematical usage of the word “chaos” compared to its vernacular usage. I apologise to any mathematicians reading this blog but I hope you’ll forgive me in the spirit of exploring an interesting concept!


Meet the DevOpsGuys @VelocityConf & @WebPerfDays

This week is a full on conference-fest for the DevOpsGuys!

@TheDevMgr will be doing a lightning demo at Velocity Conf on (a global testing front-end for

Then the @DevOpsGuys will have “Office Hours” at Velocity at 10.45am on Thursday.

@TheOpsMgr will be hanging around Velocity all week while furiously sorting out last-minute details for WebPerfDays London 2013 (which is now Sold Out, but there are still some tickets available for the “After Party” on Saturday night if you want to come and hang out post-Velocity).

We’ll be around in the Hilton Metropole bar on Wednesday night if you want to come over to say hello. Just listen out for the loud Aussie and Welshman and that will be us…