All posts by DevOpsGuys

DevOps Cardiff Meetup is Launched!

If you want to deliver better software faster, and live in or close to Cardiff, Wales – then this group is for you!

A chance to exchange some of the latest ideas and technologies for implementing Continuous Delivery, Automation and DevOps practices through short presentations by invited speakers, and to chat afterwards over drinks/snacks with new friends.

This event was created with the intention of uncovering better ways of interacting between development and operations, by doing it and helping others do it.

Our aims are simple:

1. Demonstrate tools, techniques and practices we use to bridge the gap between software development and the IT groups they rely upon.

2. Facilitate discussion on DevOps successes and challenges.

3. Explore, experiment and learn new and better ways of practicing DevOps.

Come and join us and help create more awesome!

DevOps Cardiff

Cardiff, GB
19 Members


Next Meetup

Inaugural DevOps Meetup

Wednesday, May 7, 2014, 6:30 PM
11 Attending

Check out this Meetup Group →

Why companies are investing in DevOps and Continuous Delivery

A thought-provoking infographic – with some interesting data points – shows how companies are reaping real rewards from investing in agile software delivery processes. Check out the graphic – from Zend – for more on how DevOps and Continuous Delivery are bridging the speed and innovation gap between business demand and IT.

Continuous Delivery Infographic by Zend Technologies.

Continuous Delivery Infographic

DevOps 101 for Recruiters

We put together this “DevOps Intro” for the recruitment team at a London recruitment consultancy, to help them understand the DevOps marketplace.

The goal was to help them understand both the WHY and the WHAT of DevOps and what that might mean for recruiters.

Hopefully you will find this interesting as well!


Upgrading to Octopus 2.0

On 20th December 2013, Paul Stovell and the Octopus Team released Octopus Deploy 2.0. We’ve had a chance to play with the product for some time, and it’s a great next version.

Here are five great reasons to upgrade:

  1. Configurable Dashboards
  2. Sensitive variables
  3. Polling Tentacles (Pull or Push)
  4. IIS website and application pool configuration
  5. Rolling deployments

So let’s take a look at how the upgrade process works.

Step 1 – Backup your existing database

Things to check before you upgrade:

  1. Your administrator username and password (especially if you are not using Active Directory)
  2. Before attempting to migrate, make sure that you don’t have any projects, environments, or machines with duplicated names (this is no longer allowed in Octopus 2.0, and the migration wizard will report an error if it finds duplicates).
  3. IIS configuration. Are you using ARR or rewrite rules of any kind?
  4. SMTP server settings (these are not kept during the upgrade)

The first step is to ensure that you have a recent database backup that you can restore in case anything goes wrong.

1 - Database Backup

Confirm the location of your backup file. You’ll need this file at a later stage in the upgrade process (and if anything goes wrong).

2-Database Backup Location

Step 2 – Install Octopus 2.0

Download the latest installer MSI from the Octopus Website at http://octopusdeploy.com/downloads

Next, follow the installer instructions and the Octopus Manager process. This process is really straightforward, and if you’ve installed Octopus before you’ll be familiar with most of it. There’s a great video on how to install Octopus here: http://docs.octopusdeploy.com/display/OD/Installing+Octopus

Step 3 – Import Previous Octopus Database

Once you’ve installed Octopus 2.0, you’ll most likely want to import your old v1.6 database with all your existing configuration. To do this, select the “Import from Octopus 1.6” option, choose the database backup from the process you followed in Step 1, and follow the instructions.

3 - Import Octopus Database

Step 4 – Post Upgrade Steps (server)

When you have completed the installation it’s essential to take a copy of the master key for backups. Instructions on how to back up the key can be found here: http://docs.octopusdeploy.com/display/OD/Security+and+encryption

Once you have the server installed there are a couple of items you’ll need to configure again.

  1. SMTP Server Settings are not kept, so these will need to be re-entered.

Step 5 – Upgrading Tentacles

If you are upgrading from version 1.6, be aware that the Octopus 2.0 server can no longer communicate with Tentacle 1.6. So in addition to upgrading Octopus, you’ll also need to upgrade any Tentacles manually. There are two ways to achieve this:

1. Download and Install the new Octopus Deploy Tentacle MSI on each target server

2. Use a PowerShell script to download the latest Tentacle MSI, install it, import the X.509 certificate used for Tentacle 1.6, and configure it in listening mode. The PowerShell script is as follows:

function Upgrade-Tentacle
{ 
  Write-Output "Beginning Tentacle installation"
  Write-Output "Downloading Octopus Tentacle MSI..."
  $downloader = new-object System.Net.WebClient
  $downloader.DownloadFile("http://download.octopusdeploy.com/octopus/Octopus.Tentacle.2.0.5.933.msi", "Tentacle.msi")
  
  Write-Output "Installing MSI"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i Tentacle.msi /quiet" -Wait -Passthru).ExitCode
  Write-Output "Tentacle MSI installer returned exit code $msiExitCode"
  if ($msiExitCode -ne 0) {
    throw "Installation aborted"
  }
  
  Write-Output "Configuring and registering Tentacle"
   
  cd "${env:ProgramFiles(x86)}\Octopus Tentacle 2.0\Agent"
  
  & .\tentacle.exe create-instance --instance "Tentacle" --config "${env:SystemDrive}\Octopus\Tentacle\Tentacle.config" --console
  & .\tentacle.exe configure --instance "Tentacle" --home "${env:SystemDrive}\Octopus" --console
  & .\tentacle.exe configure --instance "Tentacle" --app "${env:SystemDrive}\Octopus\Applications" --console
  & .\tentacle.exe configure --instance "Tentacle" --port "10933" --console
  & .\tentacle.exe import-certificate --instance "Tentacle" --from-registry  --console
  Write-Output "Stopping the 1.0 Tentacle"
  Stop-Service "Tentacle"
  Write-Output "Starting the 2.0 Tentacle"
  & .\tentacle.exe service --instance "Tentacle" --install --start --console
  
  Write-Output "Tentacle commands complete"
}
  
Upgrade-Tentacle

Step 6 – Upgrade TeamCity Integration

The JetBrains TeamCity plugin has also been released and requires an upgrade. To do this:

  1. Download the plugin from the Octopus Website http://octopusdeploy.com/downloads.
  2. Shutdown the TeamCity server.
  3. Copy the zip archive (Octopus.TeamCity.zip) with the plugin into the <TeamCity Data Directory>/plugins directory.
  4. Start the TeamCity server: the plugin files will be unpacked and processed automatically. The plugin will be available in the Plugins List in the Administration area.

Please note: the API Keys for your users will have changed and will need to be re-entered into the Octopus Deploy Runner configuration within the build step.

Summary

We think that overall the Octopus team have done a great job making the upgrade process easy. There are still a few little issues to iron out, but overall the process is robust and simple. It’s a shame that automatic upgrade of the Tentacles couldn’t be supported, as this may cause pain for customers with a large server install base, but PowerShell automation makes for a great workaround.

As we learn more about Octopus 2.0, we’ll continue to let you know what we find. Good Luck with Upgrading — it’s definitely worth it!

DevOpsGuys use Octopus Deploy to deliver automated deployment solutions – ensuring frequent, low-risk software releases into multiple environments. Our engineers have deployed thousands of software releases using Octopus Deploy and we can help you do the same. Contact us today – we’d be more than happy to help.


The Top Ten DevOps “Operational Requirements”

Join us for “The Top 10 DevOps Operational Requirements” –  http://www.brighttalk.com/webcast/534/98059 via @BrightTALK

One of the key tenets in DevOps is to involve the Operations teams in the full software development life cycle (SDLC) and in particular to ensure that “operational requirements” (“OR’s”, formerly known as “non-functional requirements”, “NFR’s”) are incorporated into the design and build phases.

In order to make your life easier the DevOpsGuys have scoured the internet to compile this list of the Top Ten DevOps Operational Requirements (ok, it was really just chatting with some of the guys down the pub BUT we’ve been doing this a long time and we’re pretty sure that if you deliver on these your Ops people will be very happy indeed!).

#10 – Instrumentation

Would you drive a car with a blacked out windscreen and no speedo? No, didn’t think so, but often Operations are expected to run applications in Production in pretty much the same fashion.

Instrumenting your application with metrics and performance counters gives the Operations people a way to know what’s happening before the application drives off a cliff.

Some basic counters include things like “transactions per second” (useful for capacity) and “transaction time” (useful for performance).
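As an illustrative sketch (in Python, with names of our own invention — not from any particular monitoring library), those two counters might look something like this:

```python
import time

class TransactionMetrics:
    """Minimal in-process counters: transactions per second and mean transaction time."""

    def __init__(self):
        self.count = 0            # total transactions processed
        self.total_time = 0.0     # cumulative time spent inside transactions
        self.started = time.monotonic()

    def record(self, duration):
        self.count += 1
        self.total_time += duration

    def transactions_per_second(self):
        elapsed = time.monotonic() - self.started
        return self.count / elapsed if elapsed > 0 else 0.0

    def mean_transaction_time(self):
        return self.total_time / self.count if self.count else 0.0


metrics = TransactionMetrics()

def handle_transaction():
    start = time.monotonic()
    # ... the real transaction work would go here ...
    metrics.record(time.monotonic() - start)

for _ in range(100):
    handle_transaction()

print(metrics.count)  # 100
```

In production you would surface these via Windows performance counters, JMX, or whatever your monitoring stack scrapes; the point is simply that the application publishes them at all.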

#9 – Keep track of the Dependencies!

“Oh yeah, I forgot to mention that it needs [dependency XYZ] installed first” or “Yes, the system relies on [some 3rd party web service] can you just open up firewall port 666 right away”.

Look, we all understand that modern web apps rely on lots of 3rd party controls and web services – why re-invent the wheel if someone’s already done it, right? But please keep track of the dependencies and make sure that they are clearly documented (and ideally checked into source control along with your code where possible). Nothing derails live deployments like some dependency that wasn’t documented and that has to be installed/configured/whatever at the last moment. It’s a recipe for disaster.

#8 – Code defensively & degrade gracefully

Related to #9 above – don’t always assume the dependencies are present, particularly when dealing with network resources like databases or web services and even more so in Cloud environments where entire servers are known to vanish in the blink of an Amazon’s eye.

Make sure the system copes with missing dependencies, logs the error and degrades gracefully should the situation arise!
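A hedged sketch of what that can look like (Python, with invented names — `fetch_recommendations` stands in for any call to an optional 3rd party service):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("recommendations")

def fetch_recommendations(user_id, service):
    """Call an optional 3rd-party service, but degrade gracefully if it is missing or down."""
    try:
        return service(user_id)
    except Exception as exc:  # network failure, timeout, missing dependency...
        log.warning("Recommendation service unavailable (%s); degrading gracefully", exc)
        return []             # the page still renders, just without this widget

def broken_service(user_id):
    # Simulates the dependency vanishing "in the blink of an Amazon's eye".
    raise ConnectionError("connection refused")

print(fetch_recommendations(42, broken_service))  # []
```

The key design choice: the failure is logged (so Operations can see it) but never propagated to the user.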

#7 – Backward/Forward Compatibility

Existing code base with new database schema or stored procedure?

New code base with existing database schema or stored procedures?

Either way, forwards or backwards, it should work just fine, because if it doesn’t you introduce “chicken and egg” dependencies. What this means for Operations is that we have to take one part of the system offline in order to upgrade the other part… and that can mean an impact on our customers and probably reams of paperwork to get it all approved.

#6 – Configurability

I once worked on a system where the database connection string was stored in a compiled resource DLL.

Every time we wanted to make a change to that connection string we had to get a developer to compile that DLL and then we had to deploy it… as opposed to simply just editing a text configuration file and re-starting the service. It was, quite frankly, a PITA.

Where possible avoid hard-coding values into the code; they should be in external configuration files that you load (and cache) at system initialisation. This is particularly important as we move the application between environments (Dev, Test, Staging etc) and need to configure the application for each environment.
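One common pattern (sketched here in Python, with hypothetical setting names) is to keep sane defaults in code and overlay an external, per-environment file at startup:

```python
import json
import os
import pathlib
import tempfile

# Hypothetical settings -- the names are illustrative, not from any real system.
DEFAULTS = {
    "db_connection": "Server=localhost;Database=app",
    "cache_ttl_seconds": 300,
}

def load_config(path):
    """Start from built-in defaults, then overlay whatever the external file provides."""
    config = dict(DEFAULTS)
    file = pathlib.Path(path)
    if file.exists():
        config.update(json.loads(file.read_text()))
    return config

# Simulate a per-environment config file (Dev, Test, Staging...) overriding one value.
with tempfile.TemporaryDirectory() as tmp:
    cfg_path = os.path.join(tmp, "app.config.json")
    pathlib.Path(cfg_path).write_text(json.dumps({"cache_ttl_seconds": 60}))
    config = load_config(cfg_path)

print(config["cache_ttl_seconds"])  # 60 -- from the file
print(config["db_connection"])      # the default, untouched
```

Changing the connection string is now an edit-and-restart, not a recompile-and-redeploy.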

That said, I’ve seen systems that had literally thousands of configuration options and settings, most of which weren’t documented and certainly were rarely, if ever, changed. An “overly configurable” system can also create a support nightmare as tracking down which one of those settings has been misconfigured can be extremely painful!

#5 – “Feature Flags”

A special case of configurability that deserves its own rule – “feature flags”.

We freakin’ love feature flags.

Why?

Because they give us a lot of control over how the application works, which we can use to (1) easily back out something that isn’t working without having to roll back the entire code base, and (2) help control performance and scalability.
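A feature flag can be as simple as a guarded branch (a Python sketch; the flag names are invented for illustration):

```python
# Flag state would normally live in external config or a flag service, not in code.
FEATURE_FLAGS = {
    "new_checkout": True,          # new code path; can be switched off without a rollback
    "search_suggestions": False,   # expensive feature, disabled under heavy load
}

def is_enabled(flag):
    return FEATURE_FLAGS.get(flag, False)  # unknown flags default to off

def checkout(cart):
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "old checkout flow"  # the old path stays deployed as a safety net

print(checkout([]))  # new checkout flow
```

Flip `new_checkout` to `False` and you have backed out the feature without touching the code base; flags like `search_suggestions` let you shed load when performance demands it.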

#4 – Horizontal Scalability (for all tiers).

We all want the Product to be a success with customers BUT we don’t want to waste money by over-provisioning the infrastructure upfront (we also want to be able to scale up/down if we have a spiky traffic profile).

For that we need the application to support “horizontal scalability” and for that we need you to think about this when designing the application.

3 quick “For Examples”:

  1. Don’t tie user/session state to a particular web/application server (use a shared session state mechanism).
  2. Support for read-only replicas of the database (e.g. a separate connection string for “read” versus “write”)
  3. Support for multi-master or peer-to-peer replication (to avoid a bottleneck on a single “master” server if the application is likely to scale beyond a reasonable server specification). Think very carefully about how the data could be partitioned across servers, use of IDENTITY/@Auto_Increment columns etc.
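Example 2 in the list above — separate connections for “read” versus “write” — can be sketched like this (Python, with hypothetical connection names):

```python
import random

class ConnectionRouter:
    """Send reads to read-only replicas and writes to the master (illustrative sketch)."""

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas or [master]  # degrade to the master if no replicas exist

    def connection_for(self, statement):
        if statement.lstrip().upper().startswith("SELECT"):
            return random.choice(self.replicas)  # spread read load horizontally
        return self.master                       # writes always go to the master

router = ConnectionRouter("master-db", ["replica-1", "replica-2"])
print(router.connection_for("SELECT * FROM orders"))           # one of the replicas
print(router.connection_for("INSERT INTO orders VALUES (1)"))  # master-db
```

Because reads dominate most web workloads, adding replicas now scales the read path without re-architecting the application.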

#3 – Automation and “scriptability”

One of the key tenets in the CALMS DevOps Model is A for Automation (Culture-Automation-Lean-Metrics-Sharing if you want to know the others).

We want to automate the release process as much as possible, for example by packaging the application into versionable releases, or via the “infrastructure-as-code” approach using tools like Puppet & Chef for the underlying “hardware”.

But this means that things need to be scriptable!

I can remember being reduced to using keystroke macros to automate the (GUI) installer of a 3rd party dependency that didn’t have any support for silent/unattended installation. It was a painful experience and a fragile solution.

When designing the solution (and choosing your dependencies) constantly ask yourself the question: “Can these easily be automated for installation and configuration?” Bonus points if, in very large scale environments (1,000s of servers), you can build in “auto-discovery” mechanisms where servers automatically get assigned roles, service auto-discovery (e.g. http://curator.apache.org/curator-x-discovery/index.htm) etc.

#2 – Robust Regression Test suite

Another thing we love, almost as much as “feature flags”, is a decent set of regression test scripts that we can run “on-demand” to help check/verify/validate that everything is running correctly in Production.

We understand that maintaining automated test scripts can be onerous and painful BUT automated testing is vital to an automation strategy – we need to be able to verify that an application has been deployed correctly, either as part of a software release or “scaling out” onto new servers, in a way that doesn’t involve laborious manual testing. Manual testing doesn’t scale!

The ideal test suite will exercise all the key parts of the application and provide helpful diagnostic messaging if something isn’t working correctly. We can combine this with the instrumentation (remember #10 above), synthetic monitoring, Application Performance Management (APM) tools (e.g. AppDynamics), infrastructure monitoring (e.g. SolarWinds) etc to create a comprehensive alerting and monitoring suite for the whole system. The goal is to ensure that we know something is wrong before the customer does!
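To make that concrete, here is a minimal on-demand smoke/regression runner (Python; the endpoints and expected content are invented for illustration). The important property is the helpful diagnostic message attached to each failing check:

```python
def check_homepage(get):
    status, body = get("/")
    assert status == 200, f"homepage returned HTTP {status}"
    assert "Welcome" in body, "homepage is up but missing expected content"

def check_health(get):
    status, _body = get("/health")
    assert status == 200, f"health endpoint returned HTTP {status}"

def run_smoke_tests(get):
    """Run every check; collect a diagnostic for each failure instead of stopping."""
    failures = []
    for check in (check_homepage, check_health):
        try:
            check(get)
        except AssertionError as exc:
            failures.append(f"{check.__name__}: {exc}")
    return failures

def fake_get(path):
    # Stand-in for a real HTTP client hitting the deployed application.
    return (200, "Welcome") if path == "/" else (200, "ok")

print(run_smoke_tests(fake_get))  # []
```

Run the same suite after a release, or after “scaling out” onto a new server; an empty list means the deployment verified cleanly.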

#1 – Documentation

Contrary to popular belief we (Operations people) are quite happy to RTFM.

All we ask is that you WTFM (that’s W as in WRITE!) :)

Ideally we’d collaborate on the product-centric documentation using a Wiki platform like Atlassian Confluence as we think that this gives everyone the easiest and best way to create – and maintain – documentation that’s relevant to everyone.

As a minimum we want to see:

  1. A high-level overview of the system (the “big picture”) probably in a diagram
  2. Details on every dependency
  3. Details on every error message
  4. Details on every configuration option/switch/flag/key etc
  5. Instrumentation hooks, expected values
  6. Assumptions, default values, etc

Hopefully this “Top Ten” list will give you a place to start when thinking about your DevOps “Operational Requirements” but it’s by no means comprehensive or exhaustive. We’d love to get your thoughts on what you think are the key OR’s for your applications!

Our experienced DevOps team provides a fully-managed “application-centric” website support service to your business and your customers. Contact us today to find out how we can help.

image source: - CC  - c_knaus via Flickr - http://www.flickr.com/photos/soda37/6496536471

DevOps and the Product Owner

In a previous post we talked a lot about the “Product-centric” approach to DevOps but what does this mean for the role of the Agile “Product Owner”?

So what is the traditional role of the Product Owner? Agile author Mike Cohn from MountainGoat Software defines it thus:

“The Scrum product owner is typically a project’s key stakeholder. Part of the product owner responsibilities is to have a vision of what he or she wishes to build, and convey that vision to the scrum team. This is key to successfully starting any agile software development project. The agile product owner does this in part through the product backlog, which is a prioritized features list for the product.

The product owner is commonly a lead user of the system or someone from marketing, product management or anyone with a solid understanding of users, the market place, the competition and of future trends for the domain or type of system being developed” – Mike Cohn

The definition above is very “project-centric” – the Product Owner’s role appears to be tied to the existence and duration of the project and their focus is on the delivery of “features”.

DevOps, conversely, asks us (in the “First Way of DevOps”) to use “Systems Thinking” and focus on the bigger picture (not just “feature-itis”) and the “Product-centric” approach says we need to focus on the entire lifecycle of the product, not just the delivery of a project/feature/phase.

Whilst decomposing the “big picture” into “features” is something we completely agree with, as features should be the “unit of work” for your Scrum teams or “Agile Software Development Factory”, it needs to be within the context of the Product Lifecycle (and the “feature roadmap”).

So the key shift here then is to start talking about the “Product Lifecycle Owner”, not just the Product Owner, and ensure that Systems Thinking is a critical skill for that role.

The second big shift with DevOps is that “Non-Functional Requirements” proposed by Operations as being critical to the manageability and stability of the product across its full lifecycle “from inception to retirement” must be seen as equally important as the functional requirements proposed by the traditional Product Owner role.

In fact, we’d like to ban the term “Non-Functional Requirements” (NFR’s) completely, as the name itself seems to carry an inherent “negativity” that we feel contributes to the lack of importance placed on NFR’s in many organisations.

We propose the term “Operational Requirements” (OR’s) as we feel that this conveys the correct “product lifecycle-centric” message about why these requirements are in the specification – “This is what we need to run and operate this product in Production across the product’s lifecycle in order to maximise the product’s likelihood of meeting the business objects set for it”.


For the slightly more pessimistic or combative amongst you the “OR” in Operational Requirements can stand for “OR this doesn’t get deployed into Production…” .

The unresolved question is do we need an “Operational Product Owner” or does the role of the traditional, business-focussed Product Owner extend to encompass the operational requirements?

You could argue that the “Operational Product Owner” already partly exists as the “Service Delivery Manager” (SDM) within the ITIL framework but SDM’s rarely get involved in the software development lifecycle as they are focussed on the “delivery” part at the end of the SDLC. Their role could be extended to include driving Operational Requirements into the SDLC as part of the continual service improvement (CSI) process however.

That said, having two Product Owners might be problematic and confusing from the Agile development team perspective so it would probably be preferable if the traditional Business product owner was also responsible for the operational requirements as well as the functional requirements. This may require the Product Owner to have a significantly deeper understanding of technology and operations than previously otherwise trying to understand why “loosely-coupled session state management” is important to “horizontal scalability” might result in some blank faces!

So in summary a “DevOps Product Owner” needs to:

  • Embrace “Systems Thinking” and focus on the “Product Lifecycle”, not just projects or features
  • Understand the “Operational Requirements” (and just say “No” to NFR’s!)
  • Ensure that the “OR’s” are seen as being as important as the “Functional Requirements” in the Product roadmap, and champion their implementation

In future posts we’ll examine the impact of DevOps on other key roles in the SDLC & Operations. We’d love to get your opinions in the comments section below!

-TheOpsMgr

image source: – CC  - CannedTuna via Flickr - http://www.flickr.com/photos/cannedtuna/7348317140/sizes/m/


DevOps and the “Product-centric” approach

When doing the research for our BrightTalk Webinar on DevOps I came across this quote from Jez Humble on “products not projects”, which really struck a chord with our thinking about what we call the “product-centric” approach to DevOps.

Products not Projects quotation

Figure 1 – DevOpsGuys BrightTalk Webinar

One of the key elements of DevOps is to ensure that IT strategy is directly linked with Business strategy.

One of the critical scenes in “The Phoenix Project” is where our hero Bill gets to meet with the CFO and they work through the list of “pet IT projects” in the organisation and find that many of them can’t really be tied back to any business strategy or organisational goal.

This is, we feel, one of the major problems with the “project-centric” view of IT and why we need to push for a “product-centric” view.

The “project-centric” view, whilst important for mobilising resources and organising activities, can easily get disconnected from the original business objectives. Any “Project”, just like the “Phoenix Project” in the novel, runs the risk of becoming its own “raison d’etre” as it becomes more about getting “the Project” over the line than whatever business benefits were originally proposed.

In contrast a “Product-centric” viewpoint is focussed on the Product (or Service) that you are taking to market. By keeping the focus on “the Product” you are constantly reminded that you are building a product, for customers, as part of an organisational objective that ties back to over-arching business strategy.

For example if you were in the travel sector you might be adding a new “Online Itinerary Manager” product to your website to enable your customers to view (and possibly update) their itinerary online as part of your business strategy to both empower customers online and reduce the number of “avoidable contacts” to your call centre (and hence reduce costs).

One of the other benefits of the “product-centric” view is also highlighted in Jez’s quote above – “From Inception to Retirement”.

The “product-centric” view reminds us to think about the “Product lifecycle” and not just the “software development lifecycle”.

The “product-centric” view reminds us to think about the “Product lifecycle”
and not just the “software development lifecycle”.

You need to understand how this product is going to be deployed, managed, patched, upgraded, enhanced and ultimately retired… and that means you need close cooperation between the business, the developers and operations (= DevOps!).

So how do you introduce the “Product-centric” view into an organisation that might already have existing website that offers products/services to customers?

Well, firstly, you need to stop referring to it as “the website”.

A “website” is a platform and a channel to market, it’s not a “product”.

A product is a good or service that you offer to the market in order to meet a perceived market need, in the hope that you will in return receive an economic reward.

For some “websites” there might be a single product, e.g. Dominos sells pizza online (food products), and for others there might be multiple products, e.g. theAA.com sells membership (breakdown services), Insurance, Financial Services, Driver Services (mostly training) and Travel Services. If you’re ever in doubt about the products your website sells, your top navigation menu will probably give you a pretty good indication!

What Products does the AA Sell?

Figure 2 – what products does the AA sell?

It’s worth mentioning that the AA has another product too – the “RoutePlanner”. Although the route planner is “free” it has value to the organisation both indirectly (by drawing traffic to the site that you might then cross-sell to) and intangibly (by offering a valuable service for “free” it enhances the brand).

Secondly, you need to identify the “product owners” for each of the core products that use your website as a channel to market, so in our AA example you’d probably have separate product owners for Membership, Insurance, Finance etc.

If you’re ever in doubt about who is, or isn’t, the right product owner then there is a simple test – “Do you have a significant financial incentive (bonus) for Product “X” to succeed in the market?”.

The “Product Owner” Test:
“Do you have a significant financial incentive (bonus) for Product “X” to succeed in the market?”

If the answer is “no” keep going up the hierarchy until you find someone who says “yes”. A product owner who doesn’t have any “skin in the game” regarding the success of their product line is a bad idea!

Thirdly, you need to work with the product owner to map out the product lifecycle of that product, and then identify the IT dependencies and deliverables at every stage along that product lifecycle. Within your product lifecycle you might also want to map out the “feature roadmap” if your product devolves into multiple features that will be released over time. Creating the “big picture” can be vital in motivating your teams and helping them to understand what you’re trying to achieve, and this in turn helps them to make better decisions.

Fourthly, you need to “sense check” your product lifecycle and feature roadmap against your business strategy and organisational goals. If they don’t align you either need to re-work the plan or you might decide to drop the product altogether and re-deploy those resources to a product that *IS* part of your core strategy.

Lastly you need to re-organise your DevOps teams around these products and align your delivery pipeline and processes with the product lifecycle (and feature roadmap). Your DevOps teams are responsible for the “inception to retirement” management of that product (*Top Tip – just like your “product owner” it might be a great idea to incentivise your DevOps teams with some “product success” (business) metrics in their bonus, not just technical metrics like “on-time delivery” or “system availability”. It never hurts for them to have some “skin in the game” to promote a sense of ownership in what they are delivering!).

So, to summarise, the key elements in a “product-centric approach” are:

(1)    Break down your “website” into the key “Products” (that generate business value)

(2)    Identify “Product Owners” with “skin in the game”

(3)    Map out the product lifecycle (and ideally feature roadmap) for each product with them

(4)    Sense check this product strategy with your organisational strategy

(5)    Align your DevOps teams with the core products (and incentivise them accordingly!)

So next time you’re in a meeting and someone proposes a new “Project” see if you can challenge them to create a new “Product” instead!

 

image source: - CC  - jeremybrooks via Flickr - http://www.flickr.com/photos/jeremybrooks/5401494223/sizes/s/

Is DevOps your defence against Shadow IT?

We’ve been evangelising DevOps quite a bit lately amongst customers and partners, and one of the arguments that seems to resonate the most with people about why the current paradigm of IT Development and Operations is “broken” is the rise of “Shadow IT”.

“Shadow IT is a term often used to describe IT systems and IT solutions built and used inside organizations without organizational approval. It is also used, along with the term “Stealth IT,” to describe solutions specified and deployed by departments other than the IT department.” – Wikipedia

“Shadow IT” is nothing new – it’s been around pretty much since the invention of the PC and client/server computing – but what is new is the speed and ease at which “Shadow IT” can be deployed, and the performance, reliability and stability of that “Shadow IT” solution.

“Sally from Marketing”, armed with nothing more than a company credit card, can instantiate an arbitrary number of servers, of varying capacity and specification, from a wide variety of Cloud hosting providers. Depending on the quality of the internal IT and her choice of Cloud provider it’s quite possible that she will get better uptime and performance from her Shadow IT solution than what she might get internally.

She can then find a 3rd party software house to write some bespoke software (and then someone like DevOpsGuys to deploy and manage it… Oooopps!) and her “Shadow IT solution” is up and running (not to mention the many other SaaS solutions that she could consume).


Figure 1 – DevOpsGuys – BrightTalk Webinar

In our recent BrightTalk webinar we spoke about how Gartner predicts that “Shadow IT” is expected to grow, and that there is some evidence (the PwC survey) that there is a negative correlation between “IT control” and organisational performance.

So, in summary, the traditional silo-mentality model of IT clearly isn’t meeting the customer’s needs for flexibility, innovation and time-to-market, and Cloud Computing is enabling the growth of “Shadow IT” on a scale never seen before.

To take a military analogy this is like the invention of highly mobile, mechanised “manoeuvre warfare” during WW II. The entrenched positions of the Maginot Line (think “traditional IT departments”) were rendered irrelevant by the “blitzkrieg” tactics (think “Shadow IT”) of the Wehrmacht as they simply bypassed the fixed fortifications with their more manoeuvrable, dare I say “agile”, mechanised infantry.

What is particularly fascinating, if Wikipedia can be believed, is that “blitzkrieg”, contrary to popular belief, was never a formal warfighting “doctrine” of the German Army (emphasis mine):

“Naveh states, “The striking feature of the blitzkrieg concept is the complete absence of a coherent theory which should have served as the general cognitive basis for the actual conduct of operations”. Naveh described it as an “ad hoc solution” to operational dangers, thrown together at the last moment”  – Wikipedia

“An ad-hoc solution to operational dangers, thrown together at the last moment” is probably a pretty good definition of “Shadow IT” too, but the important fact to remember is that Blitzkrieg worked (whether it was a formal doctrine or not). It crushed the opposition and subsequently became the cornerstone of modern military “combined arms” doctrine.

So, what’s this got to do with DevOps?

Well, clearly Gartner think that “Shadow IT” is working well too, and will continue to “outflank” traditional IT Departments.

Our view is that DevOps can be seen as the perfect defence to “Shadow IT” as it co-opts many of the key “manoeuvre warfare” concepts to provide the user with the speed, flexibility and time-to-market they want, but still within the control of IT to ensure standards, security and compliance.

DevOps, by breaking down the silos between Development and Operations, seeks to create unified cross-functional teams organised around specific objectives (ideally specific products that generate value for your organisation).

Compare this to the Wikipedia definition of “combined arms” doctrine” (bold emphasis mine):

“Combined arms is an approach to warfare which seeks to integrate different combat arms of a military to achieve mutually complementary effects (for example, using infantry and armor in an urban environment, where one supports the other, or both support each other). Combined arms doctrine contrasts with segregated arms where each military unit is composed of only one type of soldier or weapon system. Segregated arms is the traditional method of unit/force organisation, employed to provide maximum unit cohesion and concentration of force in a given weapon or unit type.”

Let’s paraphrase this for DevOps…

“[DevOps] is an approach to [IT Service delivery] which seeks to integrate different [technical silos] of an [IT Department] to achieve mutually complementary effects (for example, using [Development] and [Operations] in an [e-commerce] environment, where one supports the other, or both support each other). [DevOps] doctrine contrasts with [Traditional IT] where each [IT Team] is composed of only one type of [technical specialist] or [Technology] system. [Traditional IT] is the traditional method of [team/department] organisation, employed to provide maximum [team] cohesion and concentration of [technical expertise] in a given [technology] or [team] type.”

That seems to be a pretty good definition of DevOps doctrine, to me!

The only true defence to “Shadow IT” is to offer a level of service that meets the internal customer needs for speed, flexibility and time-to-market they want. If they can get it “in-house” then the impetus to build a “Shadow IT” organisation is reduced.

The best way to deliver this level of service is, in our view, to adopt the lessons of Blitzkrieg and “combined arms” doctrine as embodied within DevOps, by “integrating different teams… to achieve mutually complementary effects” and leveraging new technologies (like Cloud, APM and continuous delivery) to ensure agility AND stability.

image source: - CC  - faungg's via Flickr - http://www.flickr.com/photos/44534236@N00/2461920698