DevOps – Back to the Future?

In my first ever development role I was lucky enough to work in a completely virtualised environment.

I could, with a single script, build a complete development environment just for me. I could compile my code and deploy it into my test environment automatically, and when I was testing I had tools to trace execution flow down every path in my code. If I needed any assistance, there was a dedicated Operations Guy, an expert in automation, who sat three desks away and could answer all my questions.

DevOps Heaven, right?

This was in 1990.

The operating system was IBM MVS/ESA running on VM/370 (which pretty much invented the term “hypervisor”), and the “Operations Guy” was called a “systems programmer” (or “SysProg” for short). At the time we were writing NetView which I guess was a kind of Nagios equivalent for VTAM/SNA networks, although it also had automation capabilities a la ScienceLogic. Whilst we were at it we also wrote an in-memory, object-oriented, non-relational (noSQL) database called RODM (Resource Object Data Manager) thereby pre-empting “innovations” like memcached, Couchbase and the current noSQL movement by about 20 years… but I digress!

“What has been is what will be, and what has been done is what will be done, and there is nothing new under the sun.” (Ecclesiastes 1:9).

Control within organisations tends to move in cycles from centralised command/control mindsets to distributed autonomous teams and back again, in the same way computing resources have moved from centralised mainframes to distributed PCs and are now moving back centrally again as “The Cloud”.

Make no mistake, I am not denigrating in any way the goals of the DevOps movement (or the Cloud for that matter) by pointing out that some of it might have been done before! Each expansion and contraction, dispersion and centralisation, of computing power needs to be matched with the correct management and organisational models.

I firmly believe that removing the silos within IT Departments and building cross-functional autonomous teams with end-to-end responsibility is the correct model to match the incredible flexibility we now possess with “on-demand” Cloud Computing (although I might mention that the term “on-demand” was an IBM slogan in 2002…).

But, as always, “Those who don’t know history are destined to repeat it.” (Burke or Santayana, take your pick). So what can we learn from the past that might be relevant to the DevOps movement today?

Let’s start with 5 quick lessons:

  1.  “To err is human” – automation can reduce human error that leads to expensive downtime (“Downtime caused by incorrect manual configuration estimated at £48K/hr or $72K/hr in 2012”) but nothing, and I mean nothing, seriously screws things up like getting your automation wrong. Want every application to suddenly stop working simultaneously? Yup, that change-password script didn’t quite work the way you planned.
  2. Back in the day we had a job role called an “Operator” – a soul-destroying shift job that consisted mostly of watching automated batch processes run. So for everyone who’s “empowered” by DevOps, make sure you aren’t creating an under-class at the same time!
  3. Aligning reward and recognition – just because you’ve created your awesome cross-functional DevOps team doesn’t mean the organisation’s HR function has kept pace. Nothing destroys team cohesion faster than misaligned reward and recognition. It’s worth noting that lessons from anthropology show us that primates have an innate sense of fairness/justice… so it’s not enough to ensure that the “good” get rewarded; it’s equally important that the “bad” get punished (or sacked). If management doesn’t control this, the “group” will, by means of in-group formation and ostracism – both of which are anathema to high-performance teams.
  4. Make room for exceptions. Not everyone will want to work in an Agile DevOps world. Back in the day we had a short, bitingly sarcastic New Yorker on our team. He wouldn’t be my first pick for a multi-disciplinary DevOps team… but man, that boy could code! I’m talking the kind of code that made something hideously complex look blindingly obvious and had everyone slapping themselves upside the head saying “gee, why didn’t I think of that?”. Well, probably ’cos he was just better than you, numbnuts! So make some space in the corner for misfits who don’t play nice with others, and work out a way to incorporate their strengths into your DevOps model while patching over their weaknesses.
  5. Make environment building part of the Development pattern – I saw a tweet the other day that said “Dev: “It works on my machine, just not on the server.” Me: “Ok, backup your mail. We’re putting your laptop into production.” – For me, this is one of the great messages of DevOps – get the Operational requirements for the run-time environment written into the Development process as user stories and ensure that it DOES work (in your virtualised, “infrastructure as code” environment) before you say it’s ready. Gene Kim calls this his “DevOps favourite pattern #3”. That’s what the SysProg did for us, back in the day, and it works.
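Lesson 1’s warning about automation gone wrong and lesson 5’s “infrastructure as code” point both argue for the same habit: scripts that can show you their plan before touching anything. Here is a minimal sketch of a dry-run guard in shell – the script name, function and paths (`provision_env`, `/tmp/devenv`) are hypothetical illustrations, not anything from the original NetView-era tooling:

```shell
#!/bin/sh
# Minimal dry-run guard: default to previewing actions rather than executing
# them, so a broken automation script announces itself before it takes every
# application down at once. provision_env and its paths are illustrative only.

DRY_RUN="${DRY_RUN:-1}"   # 1 = preview (the safe default), 0 = really execute

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "DRY-RUN: $*"    # print the command instead of running it
    else
        "$@"                  # execute for real
    fi
}

provision_env() {
    run mkdir -p /tmp/devenv                     # per-developer sandbox
    run cp config.template /tmp/devenv/config    # seed its configuration
}
```

Run it once with `DRY_RUN=1` to inspect the plan, then again with `DRY_RUN=0` to execute; the same idea underlies `make -n`, `rsync --dry-run` and Terraform’s separate `plan` stage.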

What lessons do you have from “the good old days” that might be relevant to the DevOps movement?

– TheDevMgr

