All posts by DevOpsGuys

DevOps Cardiff Meetup is Launched!

If you want to deliver better software faster, and live in or close to Cardiff, Wales – then this group is for you!

It's a chance to exchange some of the latest ideas and technologies for implementing Continuous Delivery, Automation and DevOps practices through short presentations by invited speakers, and to chat afterwards over drinks and snacks with new friends.

This event was created with the intention of uncovering better ways of interacting between development and operations, by doing it and helping others do it.

Our aims are simple:

1. Demonstrate tools, techniques and practices we use to bridge the gap between software development and the IT groups they rely upon.

2. Facilitate discussion on DevOps successes and challenges.

3. Explore, experiment and learn new and better ways of practicing DevOps.

Come and join us and help create more awesome!

DevOps Cardiff, Cardiff, GB

Next Meetup: Inaugural DevOps Meetup, Wednesday, May 7, 2014, 6:30 PM

Photos are online

Originally posted on PIPELINE:

Photos from the day are available in our PIPELINE Flickr Group.

If you have any photos of your own from the day that you’d like to share then please join the group and add them.

Photos were taken by the wonderful Fabienne Jung.

Why companies are investing in DevOps and Continuous Delivery

A thought-provoking infographic – with some interesting data points – shows how companies are reaping real rewards from investing in agile software delivery processes. Check out the graphic – from Zend – for more on how DevOps and Continuous Delivery are bridging the speed and innovation gap between business demand and IT.

Continuous Delivery Infographic by Zend Technologies.

DevOps 101 for Recruiters

We put together this “DevOps Intro” for the recruitment team at a London recruitment consultancy to help them understand the DevOps marketplace.

The goal was to help them understand both the WHY and the WHAT of DevOps and what that might mean for recruiters.

Hopefully you will find this interesting as well!

Upgrading to Octopus 2.0

On 20th December 2013, Paul Stovell and the Octopus Team released Octopus Deploy 2.0. We’ve had a chance to play with the product for some time, and it’s a great next version.

Here are 5 great reasons to upgrade:

  1. Configurable Dashboards
  2. Sensitive variables
  3. Polling Tentacles (Pull or Push)
  4. IIS website and application pool configuration
  5. Rolling deployments

So let’s take a look at how the upgrade process works.

Step 1 – Backup your existing database

Things to check before you upgrade:

  1. Your administrator username and password (especially if you are not using Active Directory)
  2. Before attempting to migrate, make sure that you don’t have any projects, environments, or machines with duplicated names (this is no longer allowed in Octopus 2.0, and the migration wizard will report an error if it finds duplicates).
  3. IIS configuration. Are you using ARR or rewrite rules of any kind?
  4. SMTP server settings (these are not kept during the upgrade)

The first step is to ensure that you have a recent database backup that you can restore in case anything goes wrong.

Confirm the location of your backup file. You’ll need this file at a later stage in the upgrade process (and if anything goes wrong).

Step 2 – Install Octopus 2.0

Download the latest installer MSI from the Octopus Website at http://octopusdeploy.com/downloads

Next, follow the installer instructions and the Octopus Manager process. This process is really straightforward, and if you’ve installed Octopus before you’ll be familiar with most of it. There’s a great video on how to install Octopus here: http://docs.octopusdeploy.com/display/OD/Installing+Octopus

Step 3 – Import Previous Octopus Database

Once you’ve installed Octopus 2.0, you’ll most likely want to import your old v1.6 database with all your existing configuration. To do this, select the “Import from Octopus 1.6” option, choose the database backup from the process you followed in Step 1, and follow the instructions.

Step 4 – Post Upgrade Steps (server)

When you have completed the installation, it’s essential to take a copy of the master key for backups. Instructions on how to back up the key can be found here: http://docs.octopusdeploy.com/display/OD/Security+and+encryption

Once you have the server installed, there are some settings you’ll need to configure again.

  1. SMTP Server Settings are not kept, so these will need to be re-entered.

Step 5 – Upgrading Tentacles

If you are upgrading from version 1.6, note that the Octopus 2.0 server can no longer communicate with Tentacle 1.6. So in addition to upgrading Octopus, you’ll also need to upgrade any Tentacles manually. There are two ways to achieve this:

1. Download and Install the new Octopus Deploy Tentacle MSI on each target server

2. Use a PowerShell script to download the latest Tentacle MSI, install it, import the X.509 certificate used by Tentacle 1.6, and configure it in listening mode. The PowerShell script is as follows:

function Upgrade-Tentacle
{ 
  Write-Output "Beginning Tentacle installation"
  Write-Output "Downloading Octopus Tentacle MSI..."
  $downloader = new-object System.Net.WebClient
  $downloader.DownloadFile("http://download.octopusdeploy.com/octopus/Octopus.Tentacle.2.0.5.933.msi", "Tentacle.msi")
  
  Write-Output "Installing MSI"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i Tentacle.msi /quiet" -Wait -Passthru).ExitCode
  Write-Output "Tentacle MSI installer returned exit code $msiExitCode"
  if ($msiExitCode -ne 0) {
    throw "Installation aborted"
  }
  
  Write-Output "Configuring and registering Tentacle"
   
  cd "${env:ProgramFiles(x86)}\Octopus Tentacle 2.0\Agent"
  
  & .\tentacle.exe create-instance --instance "Tentacle" --config "${env:SystemDrive}\Octopus\Tentacle\Tentacle.config" --console
  & .\tentacle.exe configure --instance "Tentacle" --home "${env:SystemDrive}\Octopus" --console
  & .\tentacle.exe configure --instance "Tentacle" --app "${env:SystemDrive}\Octopus\Applications" --console
  & .\tentacle.exe configure --instance "Tentacle" --port "10933" --console
  & .\tentacle.exe import-certificate --instance "Tentacle" --from-registry  --console
  Write-Output "Stopping the 1.0 Tentacle"
  Stop-Service "Tentacle"
  Write-Output "Starting the 2.0 Tentacle"
  & .\tentacle.exe service --instance "Tentacle" --install --start --console
  
  Write-Output "Tentacle commands complete"
}
  
Upgrade-Tentacle

Step 6 – Upgrade TeamCity Integration

The JetBrains TeamCity plugin has also been released and requires an upgrade. To do this:

  1. Download the plugin from the Octopus Website http://octopusdeploy.com/downloads.
  2. Shutdown the TeamCity server.
  3. Copy the zip archive (Octopus.TeamCity.zip) with the plugin into the <TeamCity Data Directory>/plugins directory.
  4. Start the TeamCity server: the plugin files will be unpacked and processed automatically. The plugin will be available in the Plugins List in the Administration area.

Please note: the API keys for users will have changed and will need to be re-entered into the Octopus Deploy runner configuration within the build step.

Summary

We think that overall the Octopus team have done a great job making the upgrade process easy. There are still a few little issues to iron out, but overall the process is robust and simple. It’s a shame that automatic upgrade of the Tentacles couldn’t be supported, as this may cause pain for customers with a large server install base, but PowerShell automation makes for a great workaround.

As we learn more about Octopus 2.0, we’ll continue to let you know what we find. Good Luck with Upgrading — it’s definitely worth it!

DevOpsGuys use Octopus Deploy to deliver automated deployment solutions – ensuring frequent, low-risk software releases into multiple environments. Our engineers have deployed thousands of software releases using Octopus Deploy and we can help you do the same. Contact us today – we’d be more than happy to help.

requirements_rotated

The Top Ten DevOps “Operational Requirements”

Join us for “The Top 10 DevOps Operational Requirements” –  http://www.brighttalk.com/webcast/534/98059 via @BrightTALK

One of the key tenets of DevOps is to involve the Operations teams in the full software development life cycle (SDLC) and, in particular, to ensure that “operational requirements” (ORs, formerly known as “non-functional requirements”, NFRs) are incorporated into the design and build phases.

To make your life easier, the DevOpsGuys have scoured the internet to compile this list of the Top Ten DevOps Operational Requirements (OK, it was really just chatting with some of the guys down the pub, BUT we’ve been doing this a long time and we’re pretty sure that if you deliver on these, your Ops people will be very happy indeed!).

#10 – Instrumentation

Would you drive a car with a blacked-out windscreen and no speedo? No, we didn’t think so. Yet Operations are often expected to run applications in Production in pretty much the same fashion.

Instrumenting your application with metrics and performance counters gives the Operations people a way to know what’s happening before the application drives off a cliff.

Some basic counters include things like “transactions per second” (useful for capacity) and “transaction time” (useful for performance).
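
As a sketch of the idea (in Python, with illustrative names throughout), application-level instrumentation can start as simply as counting transactions and timing them:

```python
import time
from collections import defaultdict

class Metrics:
    """Minimal in-process metrics store: counts and timings per operation."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.timings = defaultdict(list)

    def record(self, name, seconds):
        self.counts[name] += 1
        self.timings[name].append(seconds)

    def avg_time(self, name):
        samples = self.timings[name]
        return sum(samples) / len(samples) if samples else 0.0

metrics = Metrics()

def timed(name):
    """Decorator that records a transaction count and duration per call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                metrics.record(name, time.perf_counter() - start)
        return inner
    return wrap

@timed("checkout")
def checkout(order):
    return f"processed {order}"

checkout("order-1")
checkout("order-2")
print(metrics.counts["checkout"])  # 2
```

A real system would publish these values to Windows performance counters, statsd or an APM tool rather than keeping them in memory, but the principle is the same: measure every transaction at the source.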

#9 – Keep track of the Dependencies!

“Oh yeah, I forgot to mention that it needs [dependency XYZ] installed first” or “Yes, the system relies on [some 3rd party web service] can you just open up firewall port 666 right away”.

Look, we all understand that modern web apps rely on lots of 3rd party controls and web services – why re-invent the wheel if someone’s already done it, right? But please keep track of the dependencies and make sure that they are clearly documented (and ideally checked into source control along with your code where possible). Nothing derails live deployments like some dependency that wasn’t documented and that has to be installed/configured/whatever at the last moment. It’s a recipe for disaster.
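
One lightweight way to keep dependencies both documented and in source control is a machine-readable manifest plus a pre-deployment check. A Python sketch (the manifest format, hostnames and helper function are all illustrative):

```python
import json
import socket

# Hypothetical dependency manifest (e.g. dependencies.json, checked in
# alongside the code). Names and hosts are made up for illustration.
MANIFEST = json.loads("""
{
  "services": [
    {"name": "payments-api", "host": "payments.example.com", "port": 443}
  ],
  "packages": ["openssl"]
}
""")

def unreachable_services(manifest, timeout=2):
    """Return the names of declared service dependencies we cannot reach."""
    missing = []
    for svc in manifest["services"]:
        try:
            with socket.create_connection((svc["host"], svc["port"]),
                                          timeout=timeout):
                pass
        except OSError:
            missing.append(svc["name"])
    return missing

print([svc["name"] for svc in MANIFEST["services"]])  # ['payments-api']
```

Running a check like this before a deployment turns “oh yeah, I forgot to mention…” into a clear failure message long before go-live.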

#8 – Code defensively & degrade gracefully

Related to #9 above – don’t always assume the dependencies are present, particularly when dealing with network resources like databases or web services and even more so in Cloud environments where entire servers are known to vanish in the blink of an Amazon’s eye.

Make sure the system copes with missing dependencies, logs the error and degrades gracefully should the situation arise!
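
A hedged sketch of the pattern in Python (the service call, logger name and fallback list are all illustrative):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("recommendations")

def fetch_recommendations(user_id):
    """Stand-in for a call to a backing service that may be unavailable."""
    raise ConnectionError("recommendation service unreachable")

def homepage_recommendations(user_id):
    """Degrade gracefully: log the failure and fall back to a static list."""
    try:
        return fetch_recommendations(user_id)
    except ConnectionError as exc:
        log.warning("recommendations unavailable for %s: %s", user_id, exc)
        return ["bestsellers"]  # safe default keeps the page rendering

print(homepage_recommendations("u42"))  # ['bestsellers']
```

The page still renders with a sensible default, and Operations get a log entry to alert on instead of a customer-facing error.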

#7 – Backward/Forward Compatibility

Existing code base with new database schema or stored procedure?

New code base with existing database schema or stored procedures?

Either way, forwards or backwards, it should work just fine, because if it doesn’t you introduce “chicken-and-egg” dependencies. What this means for Operations is that we have to take one part of the system offline in order to upgrade the other part… and that can mean an impact on our customers, and probably reams of paperwork to get it all approved.

#6 – Configurability

I once worked on a system where the database connection string was stored in a compiled resource DLL.

Every time we wanted to make a change to that connection string we had to get a developer to compile that DLL and then we had to deploy it… as opposed to simply just editing a text configuration file and re-starting the service. It was, quite frankly, a PITA.

Where possible, avoid hard-coding values into the code; they should be in external configuration files that you load (and cache) at system initialisation. This is particularly important as we move the application between environments (Dev, Test, Staging etc.) and need to configure the application for each environment.

That said, I’ve seen systems that had literally thousands of configuration options and settings, most of which weren’t documented and certainly were rarely, if ever, changed. An “overly configurable” system can also create a support nightmare as tracking down which one of those settings has been misconfigured can be extremely painful!
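
As an illustrative Python sketch, per-environment settings can live in one external file that is read once at startup (the keys and connection strings here are made up):

```python
import json
import os
import tempfile

def load_config(path, env):
    """Read an external JSON config file once at startup, returning the
    settings block for one environment."""
    with open(path) as fh:
        return json.load(fh)[env]

# Hypothetical config file with one block per environment; in production the
# environment name would come from e.g. an APP_ENV environment variable.
config_text = json.dumps({
    "dev":     {"db": "Server=dev-sql;Database=app", "retries": 1},
    "staging": {"db": "Server=stg-sql;Database=app", "retries": 3},
})

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    fh.write(config_text)
    path = fh.name

settings = load_config(path, "dev")
os.unlink(path)
print(settings["db"])  # Server=dev-sql;Database=app
```

Promoting the application between environments then becomes a matter of pointing it at a different block, not recompiling anything.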

#5 – “Feature Flags”

A special case of configurability that deserves its own rule – “feature flags”.

We freakin’ love feature flags.

Why?

Because they give us a lot of control over how the application works that we can use to (1) easily back out something that isn’t working without having to roll-back the entire code base and (2) we can use it to help control performance and scalability.
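
A minimal sketch of the idea in Python (flag names and behaviours are invented for illustration):

```python
# Feature flags as plain configuration: Operations can flip a flag to back
# out a troublesome feature without redeploying any code.
FLAGS = {
    "new_search": True,
    "image_resizing": False,  # e.g. disabled under load to protect a backend
}

def is_enabled(flag, default=False):
    return FLAGS.get(flag, default)

def search(query):
    if is_enabled("new_search"):
        return f"new-engine results for {query!r}"
    return f"legacy results for {query!r}"

print(search("devops"))  # new-engine results for 'devops'
```

In a real system the flag store would be external (a config file, database or flag service) so it can be changed at runtime, but the branch point in the code looks the same.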

#4 – Horizontal Scalability (for all tiers).

We all want the Product to be a success with customers BUT we don’t want to waste money by over-provisioning the infrastructure upfront (we also want to be able to scale up/down if we have a spiky traffic profile).

For that we need the application to support “horizontal scalability” and for that we need you to think about this when designing the application.

Three quick examples:

  1. Don’t tie user/session state to a particular web/application server (use a shared session state mechanism).
  2. Support for read-only replicas of the database (e.g. a separate connection string for “read” versus “write”)
  3. Support for multi-master or peer-to-peer replication (to avoid a bottleneck on a single “master” server if the application is likely to scale beyond a reasonable server specification). Think very carefully about how the data could be partitioned across servers, use of IDENTITY/@Auto_Increment columns etc.
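
The read/write split in point 2 can be sketched in a few lines of Python (the connection strings and the SELECT-only heuristic are illustrative; a real router would also consider transactions and replication lag):

```python
# Writes go to the primary; reads can be served by a replica.
CONNECTIONS = {
    "write": "Server=sql-primary;Database=app",
    "read":  "Server=sql-replica;Database=app",
}

def connection_for(statement):
    """Pick a connection string based on whether the statement mutates data."""
    verb = statement.lstrip().split()[0].upper()
    return CONNECTIONS["read"] if verb == "SELECT" else CONNECTIONS["write"]

print(connection_for("SELECT * FROM orders"))  # prints the replica connection
print(connection_for("UPDATE orders SET state = 'shipped'"))
```

The point is that the application has to be written with two connection paths in mind from the start; retrofitting this later is much harder.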

#3 – Automation and “scriptability”

One of the key tenets in the CALMS DevOps Model is A for Automation (Culture-Automation-Lean-Metrics-Sharing if you want to know the others).

We want to automate the release process as much as possible, for example by packaging the application into versionable releases, or by taking the “infrastructure-as-code” approach using tools like Puppet & Chef for the underlying “hardware”.

But this means that things need to be scriptable!

I can remember being reduced to using keystroke macros to automate the (GUI) installer of a 3rd party dependency that didn’t have any support for silent/unattended installation. It was a painful experience and a fragile solution.

When designing the solution (and choosing your dependencies), constantly ask yourself: “Can these easily be automated for installation and configuration?” Bonus points if, in very large-scale environments (1,000s of servers), you build in “auto-discovery” mechanisms where servers are automatically assigned roles, service auto-discovery (e.g. http://curator.apache.org/curator-x-discovery/index.htm) and so on.
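
For example, here is a hedged Python sketch of wrapping an unattended Windows MSI install (msiexec’s /i, /quiet and /norestart switches are standard; the wrapper itself is illustrative):

```python
import subprocess

def msiexec_args(msi_path):
    """Build an unattended msiexec command line: /quiet suppresses the
    installer GUI and /norestart defers any reboot so we stay in control."""
    return ["msiexec.exe", "/i", msi_path, "/quiet", "/norestart"]

def silent_install(msi_path):
    """Run the installer and surface failure via the exit code (Windows only)."""
    return subprocess.run(msiexec_args(msi_path)).returncode

print(msiexec_args("Tentacle.msi"))
```

A dependency that only offers a click-through GUI installer forces you back to keystroke macros; one with a documented silent mode slots straight into scripts like this.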

#2 – Robust Regression Test suite

Another thing we love, almost as much as “feature flags”, is a decent set of regression test scripts that we can run “on-demand” to help check/verify/validate that everything is running correctly in Production.

We understand that maintaining automated test scripts can be onerous and painful BUT automated testing is vital to an automation strategy – we need to be able to verify that an application has been deployed correctly, either as part of a software release or “scaling out” onto new servers, in a way that doesn’t involve laborious manual testing. Manual testing doesn’t scale!

The ideal test suite will exercise all the key parts of the application and provide helpful diagnostic messaging if something isn’t working correctly. We can combine this with the instrumentation (remember #10 above), synthetic monitoring, Application Performance Management (APM) tools (e.g. AppDynamics), infrastructure monitoring (e.g. SolarWinds) etc. to create a comprehensive alerting and monitoring suite for the whole system. The goal is to ensure that we know something is wrong before the customer does!
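
A minimal sketch of an on-demand smoke-test runner in Python; the individual checks are stand-ins for real calls against the deployed application:

```python
def check(name, fn):
    """Run one smoke check and return (name, ok, detail) for diagnostics."""
    try:
        fn()
        return (name, True, "ok")
    except AssertionError as exc:
        return (name, False, str(exc))

# Illustrative checks; real ones would hit the deployed application over HTTP.
def homepage_up():
    status = 200  # e.g. the status code of GET /
    assert status == 200, f"homepage returned {status}"

def login_works():
    token = "abc"  # e.g. the token returned by POST /login
    assert token, "login returned no token"

results = [check("homepage", homepage_up), check("login", login_works)]
failed = [r for r in results if not r[1]]
print(f"{len(results) - len(failed)}/{len(results)} checks passed")  # 2/2 checks passed
```

Because each check reports a named result with a diagnostic message, the same suite works after a release, after scaling out to new servers, or on a schedule as synthetic monitoring.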

#1 – Documentation

Contrary to popular belief we (Operations people) are quite happy to RTFM.

All we ask is that you WTFM (that’s W as in WRITE!).

Ideally we’d collaborate on the product-centric documentation using a Wiki platform like Atlassian Confluence as we think that this gives everyone the easiest and best way to create – and maintain – documentation that’s relevant to everyone.

As a minimum we want to see:

  1. A high-level overview of the system (the “big picture”) probably in a diagram
  2. Details on every dependency
  3. Details on every error message
  4. Details on every configuration option/switch/flag/key etc
  5. Instrumentation hooks, expected values
  6. Assumptions, default values, etc

Hopefully this “Top Ten” list gives you a place to start when thinking about your DevOps “Operational Requirements”, but it’s by no means comprehensive or exhaustive. We’d love to hear your thoughts on what you think are the key ORs for your applications!

Our experienced DevOps team provides a fully-managed “application-centric” website support service for your business and your customers. Contact us today to find out how we can help.

Image source: c_knaus via Flickr (CC): http://www.flickr.com/photos/soda37/6496536471

The 12 Days of DevOps

We’re getting into the holiday spirit here at DevOpsGuys Central, so we’ve put together this little DevOps ditty – the “12 Days of DevOps”!

How many of the “12 Days of DevOps” do you follow in your IT organisation?