What does DevOps look like in UK job ads?

#DevOps – Organic versus Transformational DevOps

Despite what some people seem to think, there is more to DevOps than just Continuous Delivery and infrastructure automation with Puppet, Chef or Ansible.

To me, DevOps is “an alternative model for the creation of business value from the software development life-cycle that encompasses a product-centric view across the entire product life-cycle (from inception to retirement) and recognises the value in close collaboration, experimentation and rapid feedback”.

Moving from one model of value creation to another can either be an organic process or a transformational one – you can “grow into” the new model or you can plan a strategy of change to transform your organisation from one to the other.

It’s in this “Organic DevOps” versus “Transformational DevOps” distinction that I see a growing disconnect between different sectors of the DevOps community, particularly between “DevOps for Start-ups” and “DevOps for Enterprise”.

IMHO, “Start-up DevOps” normally follows the “Organic DevOps” path – you’re often starting from a relatively “greenfield” position based on cloud infrastructure. You probably already have a very close, collaborative culture because there are only 20 of you anyway, you all work in the same office and you spend 18 hours a day there. Automation is part of your DNA because you’ve never had the staffing levels to do things manually.

“Enterprise DevOps” is normally “Transformational DevOps” – you have large, distributed IT teams that cross geographic locations, time zones and probably organisational boundaries (due to outsourcing). You have extensive legacy application and infrastructure estates (JVM 1.4 on Tomcat 5, anyone?) and you’re likely to have well-developed Prince2/ITIL/Six Sigma delivery models rigidly enforced by a centralised command-and-control mindset, backed by an army of highly-paid consultants from the Big 5 telling your CEO, CIO and CTO the best way to manage their IT budget.

Moving an enterprise to DevOps via a transformation programme is a very different challenge to introducing DevOps concepts into a receptive start-up and watching them grow organically, and the DevOps community needs to make sure that when it’s evangelising DevOps to the world that it’s aware of the differences and challenges inherent in each approach.

If you want to debate this idea of “Start-up Organic versus Enterprise Transformational DevOps”, we’re taking part in a webinar tonight with the great folks over at ScriptRock that’s focussing on Enterprise DevOps. It’s at 19:00 BST / 11:00am PT / 2:00pm ET (60 minutes).

We’d really like to get your thoughts on this – ask a question on the webinar or leave a comment below. These concepts are still experimental and, just like DevOps itself, the faster we get feedback and the more we iterate, the stronger the concept will be!

http://info.scriptrock.com/devops_webinar-2

Enterprise DevOps Webinar


Why companies are investing in DevOps and Continuous Delivery

A thought-provoking infographic – with some interesting data points – shows how companies are reaping real rewards from investing in agile software delivery processes. Check out the graphic – from Zend – for more on how DevOps and Continuous Delivery are bridging the speed and innovation gap between business demand and IT.

Continuous Delivery Infographic by Zend Technologies.


The benefits of using an #APM solution while performance testing

DevOpsGuys presented at an NCC Group customer event yesterday on “The benefits of using an APM solution while performance testing”.

We gave a 30min overview presentation talking about “what is APM”, the APM marketplace and what we see as the 4 key benefits of using APM tools while load testing.

  1. See the Big Picture (Systems Thinking)
  2. Drill down to the details
  3. Faster Iteration = Better Value
  4. Stop the “Blame Game”

We also did a 1hr 15min “live demo” workshop of load testing using NCC Group’s load testing platform against a demo e-commerce website that was instrumented with the AppDynamics APM tool.

Overall the feedback from the day was very positive and it was great to see so many people passionate about performance!

The slide decks from the day are below!

If anyone is interested in having a 15 Day Free Trial of AppDynamics just click on the link, fill out the form and then drop an email to team@devopsguys.com.

DevOpsGuys offer a fully-managed AppDynamics service to ensure that you get the maximum benefit from your investment in APM, as well as “quick start” consulting packages to get you up and running. You can read more about our “Application-Centric” performance offerings here http://www.devopsguys.com/Services/Operations/APM.

A scientific basis for #DevOps success?

A fascinating TED talk from “Predictably Irrational” author Dan Ariely has some interesting pointers to some of the underlying psychological mechanisms that make the DevOps model a better way to structure work within IT departments.

“we care much more about a product if we’ve participated from start to finish rather than producing a single part over and over.”

The accompanying article lists 7 insights into our motivations when performing tasks (and the research that supports those insights).

  1. Seeing the fruits of our labour may make us more productive
  2. The less appreciated we feel our work is, the more money we want to do it
  3. The harder a project is, the prouder we feel of it
  4. Knowing that our work helps others may increase our unconscious motivation
  5. The promise of helping others makes us more likely to follow rules
  6. Positive reinforcement about our abilities may increase performance
  7. Images that trigger positive emotions may actually help us focus

Many of these tie directly back to key DevOps principles.

For example, the “First Way of DevOps” encourages “systems thinking”, which relates directly to #1 above – if we are looking at the entire system (not just our small part) we will inherently be looking at the “fruits of our labour”.

Similarly fostering a team-based DevOps culture where we can see “how our work impacts on others” is closely aligned with #4.

For me, #2 and #6 tie directly back to “leadership” (as opposed to “management”). Good leaders know that praise (either private 1:1 praise with individuals or public praise in front of the team) can have a huge impact on morale, with a subsequent impact on productivity and quality.

It’s fascinating to see how behavioural science is increasing our understanding of human motivation. The challenge for us in the DevOps movement is to take these science-based insights and see how we can apply them with our teams to create a better way of working.

DevOps 101 for Recruiters

We put together this “DevOps Intro” for the recruitment team at a London recruitment consultancy to help them understand the DevOps marketplace.

The goal was to help them understand both the WHY and the WHAT of DevOps and what that might mean for recruiters.

Hopefully you will find this interesting as well!


Upgrading to Octopus 2.0

On 20th December 2013, Paul Stovell and the Octopus Team released Octopus Deploy 2.0. We’ve had a chance to play with the product for some time, and it’s a great next version.

Here are 5 great reasons to upgrade:

  1. Configurable Dashboards
  2. Sensitive variables
  3. Polling Tentacles (Pull or Push)
  4. IIS website and application pool configuration
  5. Rolling deployments

So let’s take a look at how the upgrade process works.

Step 1 – Backup your existing database

Things to check before you upgrade:

  1. Your administrator username and password (especially if you are not using Active Directory).
  2. That you don’t have any projects, environments, or machines with duplicated names (duplicates are no longer allowed in Octopus 2.0, and the migration wizard will report an error if it finds any).
  3. Your IIS configuration – are you using ARR or rewrite rules of any kind?
  4. Your SMTP server settings (these are not kept during the upgrade).

The first step is to ensure that you have a recent database backup that you can restore in case anything goes wrong.

1 - Database Backup

Confirm the location of your backup file. You’ll need this file at a later stage in the upgrade process (and if anything goes wrong).

2 - Database Backup Location

Step 2 – Install Octopus 2.0

Download the latest installer MSI from the Octopus Website at http://octopusdeploy.com/downloads

Next, follow the installer instructions and the Octopus Manager process. This process is really straightforward, and if you’ve installed Octopus before you’ll be familiar with most of it. There’s a great video on how to install Octopus here: http://docs.octopusdeploy.com/display/OD/Installing+Octopus

Step 3 – Import Previous Octopus Database

Once you’ve installed Octopus 2.0, you’ll most likely want to import your old v1.6 database with all your existing configuration. To do this, select the “import from Octopus 1.6” option, choose the database backup from the process you followed in Step 1 and follow the instructions.

3 - Import Octopus Database

Step 4 – Post Upgrade Steps (server)

When you have completed the installation it’s essential to take a copy of the master key for backups. Instructions on how to back up the key can be found here: http://docs.octopusdeploy.com/display/OD/Security+and+encryption

Once you have the server installed there are a couple of items you’ll need to configure again.

  1. SMTP Server Settings are not kept, so these will need to be re-entered.

Step 5 – Upgrading Tentacles

If you are upgrading from version 1.6, note that the Octopus 2.0 server can no longer communicate with Tentacle 1.6. So in addition to upgrading Octopus, you’ll also need to upgrade any Tentacles manually. There are two ways to achieve this:

1. Download and Install the new Octopus Deploy Tentacle MSI on each target server

2. Use a PowerShell script to download the latest Tentacle MSI, install it, import the X.509 certificate used for Tentacle 1.6, and configure it in listening mode. The PowerShell script is as follows:

function Upgrade-Tentacle
{ 
  Write-Output "Beginning Tentacle installation"
  Write-Output "Downloading Octopus Tentacle MSI..."
  $downloader = new-object System.Net.WebClient
  $downloader.DownloadFile("http://download.octopusdeploy.com/octopus/Octopus.Tentacle.2.0.5.933.msi", "Tentacle.msi")
  
  Write-Output "Installing MSI"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i Tentacle.msi /quiet" -Wait -Passthru).ExitCode
  Write-Output "Tentacle MSI installer returned exit code $msiExitCode"
  if ($msiExitCode -ne 0) {
    throw "Installation aborted"
  }
  
  Write-Output "Configuring and registering Tentacle"
   
  cd "${env:ProgramFiles(x86)}\Octopus Tentacle 2.0\Agent"
  Write-Output "Configuring the Tentacle 2.0 instance"
  
  & .\tentacle.exe create-instance --instance "Tentacle" --config "${env:SystemDrive}\Octopus\Tentacle\Tentacle.config" --console
  & .\tentacle.exe configure --instance "Tentacle" --home "${env:SystemDrive}\Octopus" --console
  & .\tentacle.exe configure --instance "Tentacle" --app "${env:SystemDrive}\Octopus\Applications" --console
  & .\tentacle.exe configure --instance "Tentacle" --port "10933" --console
  & .\tentacle.exe import-certificate --instance "Tentacle" --from-registry  --console
  Write-Output "Stopping the 1.0 Tentacle"
  Stop-Service "Tentacle"
  Write-Output "Starting the 2.0 Tentacle"
  & .\tentacle.exe service --instance "Tentacle" --install --start --console
  
  Write-Output "Tentacle commands complete"
}
  
Upgrade-Tentacle

Step 6 – Upgrade TeamCity Integration

The JetBrains TeamCity plugin has also been released and requires an upgrade. To do this:

  1. Download the plugin from the Octopus Website http://octopusdeploy.com/downloads.
  2. Shutdown the TeamCity server.
  3. Copy the zip archive (Octopus.TeamCity.zip) with the plugin into the <TeamCity Data Directory>/plugins directory.
  4. Start the TeamCity server: the plugin files will be unpacked and processed automatically. The plugin will be available in the Plugins List in the Administration area.

Please note: the API keys for users will have changed and will need to be re-entered into the Octopus Deploy Runner configuration within the build step.

Summary

We think that overall the Octopus Team have done a great job making the upgrade process easy. There are still a few little issues to iron out, but overall the process is robust and simple. It’s a shame that automatic upgrade of the Tentacles couldn’t have been supported, as this may cause pain for customers with a large server install base, but PowerShell automation makes for a great workaround.

As we learn more about Octopus 2.0, we’ll continue to let you know what we find. Good Luck with Upgrading — it’s definitely worth it!

DevOpsGuys use Octopus Deploy to deliver automated deployment solutions – ensuring frequent, low-risk software releases into multiple environments. Our engineers have deployed thousands of software releases using Octopus Deploy and we can help you do the same. Contact us today – we’d be more than happy to help.


The Top Ten DevOps “Operational Requirements”

Join us for “The Top 10 DevOps Operational Requirements” – http://www.brighttalk.com/webcast/534/98059 via @BrightTALK

One of the key tenets of DevOps is to involve the Operations teams in the full software development life cycle (SDLC) and in particular to ensure that “operational requirements” (“ORs”, formerly known as “non-functional requirements”, or “NFRs”) are incorporated into the design and build phases.

In order to make your life easier the DevOpsGuys have scoured the internet to compile this list of the Top Ten DevOps Operational Requirements (OK, it was really just chatting with some of the guys down the pub, BUT we’ve been doing this a long time and we’re pretty sure that if you deliver on these your Ops people will be very happy indeed!).

#10 – Instrumentation

Would you drive a car with a blacked-out windscreen and no speedo? No, didn’t think so, but Operations are often expected to run applications in Production in pretty much the same fashion.

Instrumenting your application with metrics and performance counters gives the Operations people a way to know what’s happening before the application drives off a cliff.

Some basic counters include things like “transactions per second” (useful for capacity) and “transaction time” (useful for performance).
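As a rough sketch of what this might look like in application code (the class and counter names here are our own invention – in a real system you’d typically push these values into Windows performance counters, StatsD or an APM tool rather than keep them in-process):

```python
import time


class TransactionMetrics:
    """Minimal in-process counters: transactions per second and average transaction time."""

    def __init__(self):
        self.count = 0
        self.total_time = 0.0
        self.start = time.monotonic()

    def record(self, duration_seconds):
        """Call once per completed transaction with how long it took."""
        self.count += 1
        self.total_time += duration_seconds

    def transactions_per_second(self):
        """Useful for capacity planning."""
        elapsed = time.monotonic() - self.start
        return self.count / elapsed if elapsed > 0 else 0.0

    def average_transaction_time(self):
        """Useful for spotting performance degradation."""
        return self.total_time / self.count if self.count else 0.0
```

Even two counters like these give Operations a windscreen to look through instead of flying blind.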

#9 – Keep track of the Dependencies!

“Oh yeah, I forgot to mention that it needs [dependency XYZ] installed first” or “Yes, the system relies on [some 3rd party web service] – can you just open up firewall port 666 right away?”

Look, we all understand that modern web apps rely on lots of 3rd party controls and web services – why re-invent the wheel if someone’s already done it, right? But please keep track of the dependencies and make sure that they are clearly documented (and ideally checked into source control along with your code where possible). Nothing derails live deployments like some dependency that wasn’t documented and that has to be installed/configured/whatever at the last moment. It’s a recipe for disaster.
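One way to keep that dependency list honest is a start-up check that probes each documented dependency and reports anything missing before the deployment goes any further. A minimal sketch of the idea (the dependency names and probe functions are hypothetical):

```python
def check_dependencies(dependencies):
    """Probe each documented dependency at start-up; return the names of any that fail.

    `dependencies` maps a dependency name to a zero-argument probe function,
    e.g. open a DB connection, HEAD a web service, or stat a required file.
    """
    missing = []
    for name, probe in dependencies.items():
        try:
            probe()
        except Exception:
            missing.append(name)
    return missing
```

If the deployment script runs this first and aborts on a non-empty result, the “I forgot to mention…” conversation happens before go-live instead of during it.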

#8 – Code defensively & degrade gracefully

Related to #9 above – don’t always assume the dependencies are present, particularly when dealing with network resources like databases or web services and even more so in Cloud environments where entire servers are known to vanish in the blink of an Amazon’s eye.

Make sure the system copes with missing dependencies, logs the error and degrades gracefully should the situation arise!
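A minimal illustration of the pattern in Python (the recommendation service and fallback here are hypothetical): if the optional dependency fails, log the error and serve a degraded-but-working response instead of crashing.

```python
import logging

logger = logging.getLogger("shop")


def get_recommendations(fetch_from_service, fallback=()):
    """Call an optional dependency; log and degrade gracefully if it is unavailable."""
    try:
        return fetch_from_service()
    except Exception as exc:  # the network resource may simply have vanished
        logger.warning("recommendation service unavailable, degrading: %s", exc)
        return list(fallback)  # e.g. a cached or empty list instead of an error page
```

The customer sees a page without recommendations rather than a 500 error, and Operations sees a log entry rather than a 3am phone call.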

#7 – Backward/Forward Compatibility

Existing code base with new database schema or stored procedure?

New code base with existing database schema or stored procedures?

Either way, forwards or backwards, it should work just fine, because if it doesn’t you introduce “chicken and egg” dependencies. What this means for Operations is that we have to take one part of the system offline in order to upgrade the other part… and that can mean an impact on our customers and probably reams of paperwork to get it all approved.
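One common way to achieve this is the “expand/contract” pattern: add the new column first, populate it alongside the old ones, and only remove the old columns once no running code version still reads them. A toy sketch of the “expand” step (the schema here is entirely hypothetical):

```python
def expand_row(row):
    """'Expand' phase of an expand/contract migration.

    Populate the new 'full_name' column from the legacy name columns while
    keeping the old ones, so old and new code versions can both read the row.
    """
    new_row = dict(row)
    if "full_name" not in new_row:
        new_row["full_name"] = "{} {}".format(row["first_name"], row["last_name"])
    return new_row
```

Because both schemas are valid at the same time, the code and the database can be upgraded independently – no “take half the system offline” change windows.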

#6 – Configurability

I once worked on a system where the database connection string was stored in a compiled resource DLL.

Every time we wanted to make a change to that connection string we had to get a developer to compile that DLL and then we had to deploy it… as opposed to simply just editing a text configuration file and re-starting the service. It was, quite frankly, a PITA.

Where possible avoid hard-coding values into the code; they should be in external configuration files that you load (and cache) at system initialisation. This is particularly important as we move the application between environments (Dev, Test, Staging etc.) and need to configure the application for each environment.
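A minimal sketch of that layering (the file format, key names and `APP_` environment prefix are our own invention): built-in defaults, overridden by an external config file, overridden in turn by per-environment variables, all loaded once at start-up.

```python
import json
import os

DEFAULTS = {
    "db_connection": "Server=localhost;Database=app",
    "cache_ttl_seconds": "300",
}


def load_config(path, environ=os.environ):
    """Merge defaults <- external JSON file <- APP_* environment overrides."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    for key in config:
        env_key = "APP_" + key.upper()
        if env_key in environ:
            config[key] = environ[env_key]  # environment values arrive as strings
    return config
```

Changing the connection string then means editing a file or an environment variable and restarting the service – no developer, no compiler, no PITA.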

That said, I’ve seen systems that had literally thousands of configuration options and settings, most of which weren’t documented and certainly were rarely, if ever, changed. An “overly configurable” system can also create a support nightmare as tracking down which one of those settings has been misconfigured can be extremely painful!

#5 – “Feature Flags”

A special case of configurability that deserves its own rule – “feature flags”.

We freakin’ love feature flags.

Why?

Because they give us a lot of control over how the application works that we can use to (1) easily back out something that isn’t working without having to roll-back the entire code base and (2) we can use it to help control performance and scalability.
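A toy sketch of the mechanism (the names are hypothetical; real flag stores usually live in configuration, a database or a dedicated service): because the flag is read at runtime, a problematic or expensive feature can be switched off without a redeploy or a code rollback.

```python
class FeatureFlags:
    """Minimal runtime flag store."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name, default=False):
        return bool(self._flags.get(name, default))

    def set(self, name, enabled):
        self._flags[name] = enabled  # flipped at runtime - no code rollback needed


def render_homepage(flags):
    """Build the homepage sections, skipping an expensive feature when it is off."""
    sections = ["catalogue"]
    if flags.is_enabled("recommendations"):
        sections.append("recommendations")
    return sections
```

Under heavy load, Operations can turn “recommendations” off and shed that work immediately, which is exactly the scalability control point (2) describes.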

#4 – Horizontal Scalability (for all tiers)

We all want the Product to be a success with customers BUT we don’t want to waste money by over-provisioning the infrastructure upfront (we also want to be able to scale up/down if we have a spiky traffic profile).

For that we need the application to support “horizontal scalability” and for that we need you to think about this when designing the application.

3 quick “For Examples”:

  1. Don’t tie user/session state to a particular web/application server (use a shared session state mechanism).
  2. Support for read-only replicas of the database (e.g. a separate connection string for “read” versus “write”)
  3. Support for multi-master or peer-to-peer replication (to avoid a bottleneck on a single “master” server if the application is likely to scale beyond a reasonable server specification). Think very carefully about how the data could be partitioned across servers, use of IDENTITY/@Auto_Increment columns etc.
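Point 2 above can be sketched as a simple connection router (the connection strings are placeholders): writes always go to the primary, while reads are spread across whatever replicas happen to be configured, so adding a replica scales read capacity without code changes.

```python
import random


class ConnectionRouter:
    """Send writes to the primary and spread reads across read-only replicas."""

    def __init__(self, write_dsn, read_dsns):
        self.write_dsn = write_dsn
        self.read_dsns = list(read_dsns)

    def for_query(self, is_write):
        if is_write or not self.read_dsns:
            return self.write_dsn  # writes (or no replicas) always hit the primary
        return random.choice(self.read_dsns)  # naive load spreading for reads
```

A real implementation would also worry about replication lag and read-your-own-writes consistency, but the separate read/write paths are the design decision that has to be made up front.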

#3 – Automation and “scriptability”

One of the key tenets of the CALMS DevOps model is A for Automation (Culture-Automation-Lean-Metrics-Sharing, if you want to know the others).

We want to automate the release process as much as possible, for example by packaging the application into versionable releases, or via the “infrastructure-as-code” approach using tools like Puppet & Chef for the underlying “hardware”.

But this means that things need to be scriptable!

I can remember being reduced to using keystroke macros to automate the (GUI) installer of a 3rd party dependency that didn’t have any support for silent/unattended installation. It was a painful experience and a fragile solution.

When designing the solution (and choosing your dependencies) constantly ask yourself the question “Can these easily be automated for installation and configuration?” Bonus points if, in very large-scale environments (1,000s of servers), you build in “auto-discovery” mechanisms where servers automatically get assigned roles, service auto-discovery (e.g. http://curator.apache.org/curator-x-discovery/index.htm) etc.
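By contrast, a dependency that ships as an MSI with proper silent-install support can be scripted in a few lines, no keystroke macros required. A sketch (the MSI name is hypothetical, and the runner is injectable so the logic can be exercised off Windows):

```python
import subprocess


def silent_install(msi_path, run=subprocess.run):
    """Unattended MSI install via msiexec /quiet; True if the installer exits 0."""
    cmd = ["msiexec.exe", "/i", msi_path, "/quiet", "/norestart"]
    return run(cmd).returncode == 0
```

The same shape works for configuration steps: anything that takes arguments and returns an exit code composes cleanly into Puppet/Chef runs; anything that needs a human clicking “Next” does not.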

#2 – Robust Regression Test suite

Another thing we love, almost as much as “feature flags”, is a decent set of regression test scripts that we can run “on-demand” to help check/verify/validate that everything is running correctly in Production.

We understand that maintaining automated test scripts can be onerous and painful BUT automated testing is vital to an automation strategy – we need to be able to verify that an application has been deployed correctly, either as part of a software release or “scaling out” onto new servers, in a way that doesn’t involve laborious manual testing. Manual testing doesn’t scale!

The ideal test suite will exercise all the key parts of the application and provide helpful diagnostic messaging if something isn’t working correctly. We can combine this with the instrumentation (remember #10 above), synthetic monitoring, Application Performance Management (APM) tools (e.g. AppDynamics), infrastructure monitoring (e.g. SolarWinds) etc. to create a comprehensive alerting and monitoring suite for the whole system. The goal is to ensure that we know something is wrong before the customer does!
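The shape of such an on-demand suite can be very simple (the check names and messages below are hypothetical): run each named check, capture pass/fail plus a diagnostic message, so whoever is on call can see at a glance which part of the application is unhappy.

```python
def run_smoke_tests(checks):
    """Run named check functions on demand; collect pass/fail and a diagnostic message.

    `checks` is a list of (name, zero-argument function) pairs; a check signals
    failure by raising AssertionError with a helpful message.
    """
    results = []
    for name, check in checks:
        try:
            check()
            results.append((name, True, "ok"))
        except AssertionError as exc:
            results.append((name, False, str(exc) or "assertion failed"))
    return results
```

The same suite runs after a release, after scaling out onto a new server, or on a schedule as synthetic monitoring – one set of checks, three uses.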

#1 – Documentation

Contrary to popular belief we (Operations people) are quite happy to RTFM.

All we ask is that you WTFM (that’s W as in WRITE!) :)

Ideally we’d collaborate on the product-centric documentation using a Wiki platform like Atlassian Confluence as we think that this gives everyone the easiest and best way to create – and maintain – documentation that’s relevant to everyone.

As a minimum we want to see:

  1. A high-level overview of the system (the “big picture”) probably in a diagram
  2. Details on every dependency
  3. Details on every error message
  4. Details on every configuration option/switch/flag/key etc
  5. Instrumentation hooks, expected values
  6. Assumptions, default values, etc

Hopefully this “Top Ten” list will give you a place to start when thinking about your DevOps “Operational Requirements”, but it’s by no means comprehensive or exhaustive. We’d love to get your thoughts on what you think are the key ORs for your applications!

Our experienced DevOps team provides a fully-managed “application-centric” website support service to your business and your customers. Contact us today to find out how we can help.

Image source: CC – c_knaus via Flickr – http://www.flickr.com/photos/soda37/6496536471