Tag Archives: TeamCity

5 useful #Devops lessons about flow you can learn while getting off a plane.

One of our favourite books here at DevOpsGuys is Donald Reinertsen’s “Principles of Product Development Flow”. It’s a compelling argument about the importance of “flow” – small batch sizes, frequent releases, rapid feedback, all the key tenets of continuous delivery and DevOps.

It is, sadly, not the sort of easily consumed management book that might appeal to your ADD-afflicted manager (should you be able to tear his/her attention away from his/her iPhone or iPad).

@TheDevMgr and myself (@TheOpsMgr) were discussing this last week as we landed at Munich airport to attend the @Jetbrains partner conference (we’re huge fans of TeamCity for continuous integration).

As we went through the process of disembarking from the flight we realised we were in the middle of a real-world analogy for the benefits of flow – an analogy that might be readily understandable to those senior managers we mentioned earlier.

Let’s walk through it.

The plane lands and reaches the stand. Immediately, everyone in the aisle seats unclicks their seatbelt and stands up, congesting the aisle. Once the aisle is congested (=high utilisation) the ability of people to collect luggage from the overhead lockers is significantly reduced – everything becomes less efficient.

At high rates of utilisation of any resource, waiting times, thrashing between tasks and the potential for disruption are all likely to go up. So that’s Useful lesson #1.
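This lesson is quantifiable. As a sketch (using the textbook M/M/1 queueing formula, not anything measured in the aisle), average waiting time grows hyperbolically as utilisation approaches 100%:

```python
# Average queueing delay in an M/M/1 queue: W_q = rho / (mu - lambda),
# where rho = lambda / mu is utilisation. Illustrative numbers only.

def avg_wait(arrival_rate, service_rate):
    """Mean time spent waiting in the queue (M/M/1)."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable at >= 100% utilisation")
    return rho / (service_rate - arrival_rate)

service_rate = 10.0  # e.g. passengers processed per minute
for utilisation in (0.5, 0.8, 0.9, 0.99):
    w = avg_wait(utilisation * service_rate, service_rate)
    print(f"{utilisation:.0%} busy -> average wait {w:.2f} min")
```

Going from 50% busy to 99% busy doesn’t double the wait – it multiplies it roughly a hundredfold, which is why a jammed aisle feels so much worse than a half-full one.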

All this activity and jamming of the aisle is, ultimately, futile because no-one is going anywhere until the stairs get there and the cabin doors are opened. This is the next constraint in the pipeline.

Useful lesson #2 – until you understand the constraints you might be just rushing headlong at the next bottleneck.

Eventually they arrive and we all start shuffling off like sheep (or zombies) and walk down the stairs… to the waiting buses on the tarmac.

Useful lesson #3 – if you’re trying to optimise flow you need to look beyond the immediate constraint (in this case, the cabin door and stairs) and look at the entire system. This is the essential message of systems thinking and the “1st way of DevOps”.

The buses are fairly large, holding 60+ people each (=large batch size), and everyone tries to cram onto the first bus… which then waits until the second bus is full before both buses head across the tarmac. When we reach the terminal the buses park up… and the second bus is actually closer to the door we need to enter.

Useful lesson #4 – don’t assume that the batch size is what you think it is (i.e. 1 bus load). It might be more (2 buses!). Also, just because something looks FIFO doesn’t mean it is…

Once we enter the terminal we hit another constraint – clearing Immigration control. Luckily we were able to go through the EU Residents’ queue, which flowed at a fairly constant rate due to the minimal border checks. Looking at the non-EU Residents’ queue, though, the flow was turbulent – some passengers went through quickly but others took much longer to process due to their nationality, visa requirements or whatever else had caught the Border Control officer’s attention. Unfortunately, the people who could have been processed faster were stuck in the queue behind the “complex” processing.

Useful lesson #5 – if you can break your “unit of work” down to a relatively standard size, the overall flow through the system is likely to improve. This is why most Scrum teams insist that a given user story shouldn’t take more than 2-3 days to complete, and if it would, it should be split into separate stories until it does.
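There’s a textbook formula behind this lesson too. The Pollaczek–Khinchine result for an M/G/1 queue (standard queueing theory, not something from the airport) says the average wait scales with (1 + C²)/2, where C is the coefficient of variation of job sizes – so standardising the size of work items directly cuts queueing:

```python
# Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
#   W_q = (1 + cv^2) / 2 * rho / (1 - rho) * mean_service
# Same utilisation and mean job size; only the variability changes.

def pk_wait(rho, mean_service, cv):
    """Mean queueing delay; cv = std dev / mean of service times."""
    return (1 + cv ** 2) / 2 * rho / (1 - rho) * mean_service

rho, mean_service = 0.9, 1.0  # 90% busy, 1-minute average job
print(round(pk_wait(rho, mean_service, cv=0.0), 2))  # uniform sizes   -> 4.5
print(round(pk_wait(rho, mean_service, cv=1.0), 2))  # exponential     -> 9.0
print(round(pk_wait(rho, mean_service, cv=3.0), 2))  # a few monsters  -> 45.0
```

With identical load, letting a few “monster” items (or complex passengers) into the queue makes everyone wait ten times longer than with uniformly sized work.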

Luckily we avoided the queue for checked luggage as we only had carry-on and we were able to get in the queue for the taxis instead… so that’s the end of our analogy for now.

So let’s think of some theoretical optimisations to improve the flow.

Firstly, why not let only the people on ONE side of the aisle stand to collect their overhead luggage and prepare to disembark, thereby avoiding the congestion in the aisle? You can then alternate sides until everyone’s off.

Secondly, why not see if you can get a second set of stairs so you can disembark from the forward AND rear cabin doors, and alleviate that bottleneck?

Thirdly, why not have smaller buses, and dispatch them as soon as they are full, thereby reducing the batch size that arrives at Immigration?
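The arithmetic behind the smaller-bus suggestion is simple to sketch (illustrative numbers, not measured at Munich): if passengers board at a steady rate and a bus departs only when full, the average passenger waits for half a bus-load of boarding time before moving at all:

```python
# Average wait before departure when dispatching in full batches.
# The first passenger aboard waits for (batch_size - 1) further boardings,
# the last waits for none; the average is (batch_size - 1) / 2 boardings.

def avg_batch_wait(batch_size, seconds_per_boarding):
    waits = [(batch_size - 1 - i) * seconds_per_boarding
             for i in range(batch_size)]
    return sum(waits) / batch_size

print(avg_batch_wait(60, 2))  # one big bus        -> 59.0 seconds
print(avg_batch_wait(15, 2))  # four smaller buses -> 14.0 seconds
```

Quartering the batch size (near enough) quarters the average wait – the core argument for small batches in a delivery pipeline too.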

Fourthly, why not have more agents at Border Control to alleviate that bottleneck, or create different queue classes to try to optimise flow e.g. EU Residents, “other countries we vaguely trust and generally wave through” and “countries we generally give a hard time to”. You could even have a special queue for “dodgy-looking people of whatever nationality who are about to spend some quality time with a rubber glove”. Or why not create totally new categories like “those with hand luggage” versus “those with checked luggage who are only going to have to wait at the luggage carousel anyway, so might as well wait here instead”.

These proposed optimisations might be simplistic. For example, the reason the two buses leave together is probably that ground traffic at airports is strictly controlled (in fact there is a “ground traffic controller” just the same as an “air traffic controller”). So there are often constraints we might not be aware of until we do more investigation, BUT the goal of any DevOps organisation should be to try to identify the constraints and experiment with different ways to alleviate them. Try something, learn from it, iterate around again.

Hopefully by using a common, real-world analogy for product development flow you’ll be able to convince your Boss to let you apply these principles to your DevOps delivery pipeline and improve flow within your organisation!
Photo credit: aselundblad / Foter / Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0)

Upgrading to Octopus 2.0

On 20th December 2013, Paul Stovell and the Octopus Team released Octopus Deploy 2.0. We’ve had a chance to play with the product for some time, and it’s a great next version.

Here are 5 great reasons to upgrade:

  1. Configurable Dashboards
  2. Sensitive variables
  3. Polling Tentacles (Pull or Push)
  4. IIS website and application pool configuration
  5. Rolling deployments

So let’s take a look at how the upgrade process works.

Step 1 – Backup your existing database

Things to check before you upgrade:

  1. Your administrator username and password (especially if you are not using Active Directory).
  2. Before attempting to migrate, make sure that you don’t have any projects, environments, or machines with duplicated names (this is no longer allowed in Octopus 2.0, and the migration wizard will report an error if it finds duplicates).
  3. IIS configuration. Are you using ARR or rewrite rules of any kind?
  4. SMTP server settings (these are not kept during the upgrade).

The first step is to ensure that you have a recent database backup that you can restore in case anything goes wrong.

1 - Database Backup

Confirm the location of your backup file. You’ll need this file at a later stage in the upgrade process (and if anything goes wrong).

2 - Database Backup Location

Step 2 – Install Octopus 2.0

Download the latest installer MSI from the Octopus Website at http://octopusdeploy.com/downloads

Next, follow the installer instructions and the Octopus Manager process. This process is really straightforward, and if you’ve installed Octopus before you’ll be familiar with most of it. There’s a great video on how to install Octopus here: http://docs.octopusdeploy.com/display/OD/Installing+Octopus

Step 3 – Import Previous Octopus Database

Once you’ve installed Octopus 2.0, you’ll most likely want to import your old v1.6 database with all your existing configuration. To do this, select the “import from Octopus 1.6” option, choose the database backup from the process you followed in step 1, and follow the instructions.

3 - Import Octopus Database

Step 4 – Post Upgrade Steps (server)

When you have completed the installation it’s essential to take a copy of the master key for backups. Instructions on how to back up the key can be found here: http://docs.octopusdeploy.com/display/OD/Security+and+encryption

Once you have the server installed there are a couple of items you’ll need to configure again.

  1. SMTP Server Settings are not kept, so these will need to be re-entered.

Step 5 – Upgrading Tentacles

If you are upgrading from version 1.6, note that the Octopus 2.0 server can no longer communicate with Tentacle 1.6. So in addition to upgrading Octopus, you’ll also need to upgrade any Tentacles manually. There are two ways to achieve this:

1. Download and Install the new Octopus Deploy Tentacle MSI on each target server

2. Use a PowerShell script to download the latest Tentacle MSI, install it, import the X.509 certificate used by Tentacle 1.6, and configure it in listening mode. The PowerShell script is as follows:

function Upgrade-Tentacle
{
  Write-Output "Beginning Tentacle installation"
  Write-Output "Downloading Octopus Tentacle MSI..."
  $downloader = new-object System.Net.WebClient
  # Append the versioned MSI file name (Octopus.Tentacle.<version>.msi) to this URL before running
  $downloader.DownloadFile("http://download.octopusdeploy.com/octopus/Octopus.Tentacle.", "Tentacle.msi")
  Write-Output "Installing MSI"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i Tentacle.msi /quiet" -Wait -Passthru).ExitCode
  Write-Output "Tentacle MSI installer returned exit code $msiExitCode"
  if ($msiExitCode -ne 0) {
    throw "Installation aborted"
  }
  Write-Output "Configuring and registering Tentacle"
  cd "${env:ProgramFiles(x86)}\Octopus Tentacle 2.0\Agent"
  & .\tentacle.exe create-instance --instance "Tentacle" --config "${env:SystemDrive}\Octopus\Tentacle\Tentacle.config" --console
  & .\tentacle.exe configure --instance "Tentacle" --home "${env:SystemDrive}\Octopus" --console
  & .\tentacle.exe configure --instance "Tentacle" --app "${env:SystemDrive}\Octopus\Applications" --console
  & .\tentacle.exe configure --instance "Tentacle" --port "10933" --console
  & .\tentacle.exe import-certificate --instance "Tentacle" --from-registry --console
  Write-Output "Stopping the 1.0 Tentacle"
  Stop-Service "Tentacle"
  Write-Output "Starting the 2.0 Tentacle"
  & .\tentacle.exe service --instance "Tentacle" --install --start --console
  Write-Output "Tentacle commands complete"
}

Upgrade-Tentacle

Step 6 – Upgrade TeamCity Integration

The JetBrains TeamCity plugin has also been released and requires an upgrade. To do this:

  1. Download the plugin from the Octopus Website http://octopusdeploy.com/downloads.
  2. Shutdown the TeamCity server.
  3. Copy the zip archive (Octopus.TeamCity.zip) with the plugin into the <TeamCity Data Directory>/plugins directory.
  4. Start the TeamCity server: the plugin files will be unpacked and processed automatically. The plugin will be available in the Plugins List in the Administration area.

Please note: the API keys for your users will have changed and will need to be re-entered into the Octopus Deploy runner configuration within the build step.


We think that overall the Octopus Team have done a great job making the upgrade process easy. There are still a few little issues to iron out, but overall the process is robust and simple. It’s a shame that automatic upgrade of the Tentacles couldn’t be supported, as this may cause pain for customers with a large server install base, but PowerShell automation makes for a great workaround.

As we learn more about Octopus 2.0, we’ll continue to let you know what we find. Good luck with upgrading – it’s definitely worth it!

DevOpsGuys use Octopus Deploy to deliver automated deployment solutions – ensuring frequent, low-risk software releases into multiple environments. Our engineers have deployed thousands of software releases using Octopus Deploy and we can help you do the same. Contact us today – we’d be more than happy to help.


In a previous post, we mentioned NuGet as great way to package up code for deployment, and that got us thinking about NuGet feeds.

NuGet Overview

In just about every Visual Studio project we create today, we will eventually find a need for a 3rd party library of some kind.

Managing these libraries has traditionally been a cumbersome process. Finding available libraries can be difficult (although Google does a good job), and it’s even harder to find the latest version of the libraries you’ve used and keep them updated.

That’s where NuGet comes to the rescue. NuGet is a Visual Studio extension that makes it easy to add, remove, and update libraries and tools in Visual Studio projects that use the .NET Framework.

What’s great about NuGet is that we can use it to package our own applications for publishing, which means it becomes a good tool for deployment.

This post won’t cover using NuGet itself – there’s some great information at NuGet.org – but we are going to explore some options for publishing NuGet packages once they’ve been created.

Public Publishing at NuGet.org

The public NuGet Gallery is the best place to publish if you’re producing a library, component or framework you want others to use. If, however, you’ve just developed a bespoke website for a new client, this is not the place to publish to.

The NuGet Gallery currently boasts more than 10,000 packages and claims over 42 million downloads, so if you have something to share this will be your publishing target.

Publishing to this feed is pretty straightforward; you need to:

1. Register for an account at http://nuget.org/

2. Grab the API Key, located on the “My Account” page. This is generated for you.

3. Open a command console and run the command:

nuget setApiKey Your-API-Key

4. Push your package to the gallery:

nuget push YourPackage.nupkg

5. Confirm you can view your NuGet package in the NuGet Gallery using the search or in the NuGet Package Manager in Visual Studio.

That’s it. There is, however, a word of caution, and here it is direct from Microsoft:

NuGet.org does not support permanent deletion of packages, because that would break anyone who is depending on it remaining available. This is particularly true when using the workflow that restores packages on build.

Instead, NuGet.org supports a way to ‘unlist’ a package, which can be done in the package management page on the web site. When a package is unlisted, it no longer shows up in search and in any package listing, both on NuGet.org and from the NuGet Visual Studio extension (or nuget.exe). However, it remains downloadable by specifying its exact version, which is what allows the Restore workflow to continue working.

If you run into an exceptional situation where you think one of your packages must be deleted, this can be handled manually by the NuGet team. e.g. if there is a copyright infringement issue, or potentially harmful content, that could be a valid reason to delete it.

Advantages:

  1. Easy to set up.
  2. Publicly available.

Disadvantages:

  1. NuGet.org does not support permanent deletion of packages.
  2. Publicly available, with no mechanism to restrict access to packages.

Hosting your own NuGet Feeds

There are several reasons you may want to host your own NuGet feed, but the two most common are:

1. You want to restrict access to the packages/libraries that your development team can use, e.g. filter the public NuGet Gallery by creating your own feed.

2. You want to publish your own packages, but not have them publicly available.

If this is the case, then you’re going to want to consider creating your own NuGet feed. To do this, you have a couple of options:

1. Shared Folders

Download packages from the NuGet Gallery or package your own code with NuGet, and then put the packages (.nupkg files) in a shared folder location. Network drives, Dropbox or any other document-sharing tool are great for this. Then, simply add the shared folder location to the list of available package sources in the NuGet package manager in Visual Studio.

2. Feed Server

Using IIS you can create and host your own internal or remote feed on a server. By adding the NuGet.Server package to an empty website project, you enable the site to serve up the package feed using the Open Data Protocol (OData).

Once you’ve deployed the site to an IIS server you can then access the OData feed via a URL, e.g. http://yourdomain/nuget/

Then, simply add the feed server URL to the list of available package sources in NuGet package manager in Visual Studio.

Detailed instructions on both of these methods can be found at Hosting your Own NuGet Feeds.


Advantages:

1. Allows you to control/protect what packages are in the NuGet feed.

2. Ensures that packages are not available in the public NuGet Gallery.

3. Additional packages can be added to and deleted from the feed using NuGet.exe.

Disadvantages:

1. Requires additional configuration and some development.

2. Shared network drives will work, but are not ideal.

3. Only supports one NuGet feed, so all packages are available in one feed.

JetBrains TeamCity as a NuGet Server

In Version 7 of TeamCity, JetBrains added native support for the NuGet Server.

Enabling the NuGet Server is easy. Select the “NuGet Server” option from the “Administration” section and click  the “Enable” button.

TeamCity NuGet Administration

Once enabled, the NuGet server URLs will be shown.

TeamCity NuGet Server URLs

The public feed URL is only shown if the “Allow to login as a guest user” setting in “Global Settings” is ticked.

The final step is to add a build step, with a runner type of NuGet Pack. The NuGet Pack build runner allows you to create a NuGet package from a given nuspec file.

TeamCity NuGet Pack Build Type
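For reference, the nuspec file that the NuGet Pack runner consumes is just a small XML manifest. A minimal sketch (the id, author and file paths below are made-up placeholders):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.MyWebsite</id>
    <version>1.0.0</version>
    <authors>DevOpsGuys</authors>
    <description>Deployment package for MyWebsite</description>
  </metadata>
  <files>
    <file src="bin\**\*.*" target="bin" />
  </files>
</package>
```

Point the NuGet Pack build step at a file like this and each build will produce a versioned .nupkg ready for the TeamCity feed.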

Then, simply add the TeamCity NuGet feed URL to the list of available package sources in the NuGet package manager in Visual Studio.


Advantages:

1. NuGet package creation becomes part of the build process.

2. The NuGet server is integrated into TeamCity. No custom configuration or development is required.

3. Supports both authenticated and unauthenticated access to the NuGet server, allowing for greater control.

Disadvantages:

1. Requires TeamCity.

2. Only supports one NuGet feed, so all packages are available in one feed.

3. Multiple feeds would require multiple TeamCity instances.


MyGet

MyGet is a hosted NuGet feed – NuGet-as-a-Service (NaaS). The product has some really nice features, allowing you to create public and private feeds, apply fine-grained user security, and use a range of package management features such as retention policies.

There is a free plan but, as with all as-a-service offerings, there is a price to pay, ranging from $7 USD/month up to $599 USD/year for the Enterprise plan.

We’ve done some basic tests and the product works well. They certainly consider themselves a SaaS provider, hosting on Windows Azure and making uptime status reports available, which is great to see.

What interested us most are two beta features MyGet is currently running: Build Services and Package Sources.

Build Services

MyGet Build Services allows you to add packages to your feed by providing Git, Mercurial or Subversion repo details. MyGet’s servers download the code, build it, create the package and publish it to your feed.

Currently this service supports the Microsoft .NET Framework up to version 4.5. It also supports build hooks (commit build triggers), which can be linked to GitHub or Bitbucket repositories and will trigger a new build using an HTTP POST.

Package Sources

Package sources are upstream repositories for your MyGet feed. You can aggregate package sources in a single MyGet feed, filter each of them individually, and even proxy upstream package sources to include them in your feed queries.


Advantages:

1. Supports multiple feeds, so packages can be separated into distinct groups.

2. Fine-grained security.

3. A hosted solution means you don’t have to set up your own infrastructure.

4. Extended feature set, with more features in beta.

Disadvantages:

1. Private feeds are not free, as shown in the Pricing Plans.

2. Unsure of customer uptake.

Inedo ProGet

ProGet is an on-premise NuGet package repository server.

This product has some nice features, which include multiple feed support, feed filtering (Enterprise only), granular security and package caching.

The product has 3 licensing options:

1. Free, with limited support and some feature limitations.

2. Enterprise Annual $395 USD/year.

3. Enterprise Perpetual $995 USD for the first year, with additional support at $195 USD/year.


Advantages:

1. Supports multiple feeds, so packages can be separated into distinct groups.

2. Fine-grained security.

3. Additional features when compared to the roll-your-own server solution.

Disadvantages:

1. Requires additional infrastructure to set up, including SQL Server (though it will run on SQL Express).

2. Feature limitations in the free edition, as shown in the licensing options.

NuGet Summary

We hope that gives you a flavour of the NuGet package hosting solutions available. Each solution has advantages and disadvantages, but our advice would be:

1. If you’re creating a library to be used by developers in the public domain, choose NuGet Gallery.

2. If you’re using TeamCity, use the inbuilt NuGet Server. It’s really easy.

3. The choice between local feed servers, MyGet and Inedo ProGet is a close call, but our winner was MyGet. The beta features, Build Services and Package Sources, made it our choice.