Category Archives: Continuous Delivery

DevOpsGuys Launch Workshops & Training

DevOpsGuys, a leading provider of DevOps services for Enterprise clients, is pleased to announce the launch of its new DevOps Education platform, which provides DevOps, Continuous Delivery and Agile training courses and workshops, enabling organisations to build the knowledge they need to succeed with DevOps transformation.

As part of DevOpsGuys’ ongoing commitment to impart knowledge and expertise through the company’s training programmes, and to support organisations in developing internal capabilities themselves, DevOpsGuys has built a series of public and onsite training solutions where people of all levels can experience DevOps.

“Enterprise IT has a critical need to identify, select, learn and implement new platforms.  Being a digital enterprise is so critical to the future of every company that knowledge must be built and managed internally”, said James Betteley, Head of DevOps Education at DevOpsGuys.

DevOpsGuys courses are delivered through a combination of interactive workshops and practical, engineering-led classes, allowing people to experience DevOps processes and practices, get hands-on with Automation, or understand how to design software to facilitate Continuous Delivery.

“We’ve been listening to what people in our industry have been asking us for and are excited to launch our DevOps from the Trenches workshop”, announced Betteley. “This workshop is based on our DevOps experiences working with dozens of leading organisations across Europe – from small start-ups to large enterprises.”

The course can be booked online using EventBrite at

DevOpsGuys offer a range of DevOps and Agile training courses and workshops, which can be found online at

About DevOpsGuys

DevOpsGuys deliver DevOps-as-a-Foundation to underpin digital transformation, achieving greater customer enablement and unlocking new revenue streams. Our solutions make enterprise organisations agile, scalable and responsive to rapidly changing business requirements.

Our platform is built on a foundation of education, and DevOpsGuys offer a series of DevOps, Continuous Delivery and Agile training courses and workshops, enabling you to build the knowledge you need to succeed.

Our courses are delivered through a combination of interactive workshops and practical, engineering-led classes, where people of all levels can experience DevOps, get hands-on with Automation, or understand how to design software to facilitate Continuous Delivery.

DSC and Octopus Deploy integration

When we first started looking at DSC for production use, we were using the same process and modules that came from Steven Murawski and Stack Exchange.  There are several blog posts talking about this tooling, but here’s a high level overview of what our pipeline looked like originally:

  • TeamCity is our Continuous Integration server; it monitors our DSC source control repository and kicks off the build process.
  • Stack Exchange / DSC tooling modules were used to take the data from source control and run tests, then produce configuration documents and resource module zip files.
  • Octopus Deploy was used to transfer these artifacts out to one or more DSC pull servers.
  • DSC pull servers delivered the bits down to clients.

This works, but what was missing was the ability to define environments (dev/test/prod/etc), and to version the configurations and modules in such a way that they could be promoted through these environments in our pipeline.

To address this, we’re trying out something new.  We’re cutting out the middle-man of a DSC pull server, and having Octopus Deploy deliver our DSC configurations and resources directly to the endpoints.  As far as DSC’s Local Configuration Manager is concerned, we’ve switched to a Push model.  Really, though, that depends on how the Octopus Deploy Tentacle is running (which can be set up for either push or pull, just like the LCM.)

With this setup, we can promote versions of DSC configurations through our environments using the same mechanism that’s already in place for our application and website code.  We can also leverage Octopus Deploy’s variable-scoping functionality to control our DSC configurations; these Octopus Deploy variables essentially take the place of the entire DscConfiguration module in the new pipeline:


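As a rough illustration of the scoping behaviour we’re relying on, here is a minimal Python sketch of environment-scoped variable resolution. The variable names and data layout are hypothetical, and Octopus Deploy itself evaluates scoping server-side; this is only a sketch of the concept, not its implementation:

```python
# Sketch: environment-scoped variables in the spirit of Octopus Deploy's
# variable scoping. All names and values here are hypothetical.
def resolve_variable(variables, name, environment):
    """Return the value for a variable, preferring an environment-scoped
    definition over an unscoped default."""
    candidates = [v for v in variables if v["name"] == name]
    # An environment-scoped value wins...
    for v in candidates:
        if environment in v.get("scopes", []):
            return v["value"]
    # ...otherwise fall back to the unscoped default.
    for v in candidates:
        if not v.get("scopes"):
            return v["value"]
    raise KeyError(name)

variables = [
    {"name": "WebsitePort", "value": "8080", "scopes": ["Dev"]},
    {"name": "WebsitePort", "value": "80"},  # unscoped default
]
print(resolve_variable(variables, "WebsitePort", "Dev"))   # 8080
print(resolve_variable(variables, "WebsitePort", "Prod"))  # 80
```

Because the same resolution rules already govern application deployments, DSC configuration values promoted this way behave consistently with the rest of the pipeline.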
Now, Octopus Deploy’s dashboard shows us exactly which configuration version is deployed to each environment:


We’re still experimenting with this approach, but we’re getting pretty positive results so far.  A nice perk for our customers is that they have fewer new tools to learn when we’re first setting things up; they don’t have to grok both Octopus Deploy’s variables/environments concepts and a separate solution doing the same thing for DSC.

#DevOps = #ContinuousDelivery + #ContinuousObsolescence?

During my talk at QCon last week, we started an interesting debate about whether DevOps was a subset of Continuous Delivery or vice versa.

We were lucky enough to have Dave Farley in the audience, and it was clear from his definition of DevOps versus Continuous Delivery that he felt CD was the superset and DevOps a component of that set… but he admitted that if he were writing the Continuous Delivery book again in 2015, he would change the definition of “finished” to embrace the full product lifecycle from inception to retirement.

This aligns nicely with his co-author Jez Humble’s thoughts on being “Product-Centric” that we discussed in an earlier blog post.

“Finished” should mean “no longer delivering value to the client and hence retired from Production”.



That was when we coined a new term (live on stage, as captured by Chris Massey’s tweet below) – “Continuous Obsolescence”


So what is #ContinuousObsolescence?

Continuous Obsolescence is the process of continuously grooming your product/feature portfolio and asking yourself “is this product/feature still delivering value to the customer?” because if it isn’t then it’s “muda” (waste), it’s costing you money and should be retired.


Every product/feature that’s not delivering value costs you by:

  • Wasted resources – every build, every release, you’re compiling and releasing code that doesn’t need to be shipped, and consuming server resources once deployed.
  • Overhead of regression testing – it still needs to be tested every release.
  • Unnecessary complexity – every product/feature you don’t need just adds complexity to your environment which complicates both the software development process and the production support process. Complex systems are harder to change, harder to debug and harder to support. If it’s not needed, simplify things and get rid of it!

I’d argue that DevOps could even be defined as
“#DevOps = #ContinuousDelivery + #ContinuousObsolescence”

If you’re doing DevOps and you’re a believer in the CALMS model then you should be practising “Continuous Obsolescence”… in fact I’d argue that DevOps could even be defined as “#DevOps = #ContinuousDelivery + #ContinuousObsolescence” – what do you think?

Send us your comments or join the debate on Twitter @DevOpsguys


Leave your Technical Debt behind this New Year

Or: How DevOps can put you back in the black for 2015

Courtesy of Paul Downey

When you’re running a business there are so many things to focus on that it can be easier than you think to fall into technical debt. If IT is not your speciality, or your business isn’t set up to deal with the perpetual developments of cloud computing, patching things together hurriedly or using inexpert coding can seem like a quick fix that you can come back to and do properly in the future; in effect, you’re borrowing time.

Unfortunately, errors often beget errors, piling up ‘interest’ on your technical debt that your business might find impossible to pay off in the long run. It’s a serious problem that can leave you wasting time and money, repeatedly solving problems caused by early errors.

With continuous delivery expert Alex Yates joining us for the first DevOps Meet Up of the year on January 7th, we took a look at his blog to see how a DevOps approach can help you tackle your software and IT problems before they become serious, unsolvable issues.

Here are some of Alex’s tips to prevent your technical debt reaching critical mass:


  • Source Control to better manage changes. Thankfully this is pretty much a given for most people now.
  • Thorough automated testing. Adopting Continuous Integration and/or Continuous Delivery.
  • Better teamwork/communication across IT teams. Adopting a more DevOps approach.
  • Better communication with business managers to help them understand the consequences of shipping features too fast without time to focus on quality. The business guys will want you to ship quickly now but they’ll also want you to be agile later in the year right?
  • Keeping track of technical debt visually, monitoring it and paying it back (refactoring it) when possible.

Using actual case studies in Wednesday’s talk, Alex will share his expertise on the changes in organisational structure, process and technology that are necessary to arrive at a nimble, fast, automatable and continuous database deployment process.

The session will tackle how to customise common practices and tool sets to build a database deployment pipeline unique to your environment, in order to speed your own database delivery while still protecting your organisation’s most valuable asset: its data.

At DevOpsGuys we specialise in continuous delivery, application management and leading on development processes that allow your business to function to its greatest capacity; we manage your website so you can focus on your business.

To understand how  DevOpsGuys  can help  you get control of your technical debt in 2015 read more on our website or say hello on Twitter @DevOpsGuys.

Happy New Year!


DSC Tooling: cDscResourceDesigner and DscDevelopment

In PowerShell v4 – the first release of Desired State Configuration – the experience of authoring DSC resource modules isn’t terribly pleasant.  You have to produce both a PowerShell module and a schema.mof file, both of which have to follow certain rules, and agree with each other about the parameters that can be passed to the module’s commands.  If you get it wrong, your resource won’t work.

One of the extra DSC-related goodies Microsoft has released is an experimental module called xDscResourceDesigner.  It contains several functions that help to take the pain out of this authoring experience.  For example, you can run a Test command on a resource to make sure that its PSM1 file and schema.mof are both legal and in sync.  DscDevelopment, another of Steve Murawski’s modules from Stack Exchange, takes this concept a step further, and allows you to do things like automatically generate a schema.mof file from a PSM1 file.
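To illustrate the kind of agreement these tools check for, here is a toy sketch (in Python rather than PowerShell, purely for illustration) that compares a resource’s declared schema properties against its function parameters. The property names are hypothetical, and the real Test commands work against actual PSM1 files and MOF schemas, not simple lists:

```python
# Toy illustration of the schema/module "in sync" check described above:
# the properties declared in a resource's schema.mof and the parameters of
# its module functions must agree. All names here are hypothetical.
def find_mismatches(schema_properties, function_parameters):
    """Return names that appear on one side but not the other."""
    schema = set(schema_properties)
    params = set(function_parameters)
    return {
        "missing_from_function": sorted(schema - params),
        "missing_from_schema": sorted(params - schema),
    }

mismatches = find_mismatches(
    schema_properties=["Ensure", "Name", "SourcePath"],
    function_parameters=["Ensure", "Name"],
)
print(mismatches)  # SourcePath is declared in the schema but not accepted by the function
```

A resource that produced any mismatches here would be exactly the kind of broken-but-hard-to-diagnose module these Test commands are designed to catch before deployment.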

cDscResourceDesigner is a community-modified version of the original xDscResourceDesigner module.  Most of our focus, so far, has been on the Test-cDscResource function, which is called from Invoke-DscBuild and becomes part of our continuous delivery pipeline.

Microsoft’s experimental version of this Test command was pretty good, but it would report failures on some modules and schema files that are perfectly valid, as far as WMI and the Local Configuration Manager are concerned.  In fact, several of Microsoft’s own DSC resources (both experimental and those included in PowerShell 4.0) would fail the tests of the original version of this command.  We’re working on identifying and fixing these types of false failures, so the Test-cDscResource command can be trusted as a reliable reason to fail a build.

We’ve also got an eye on the future.  One of the big DSC-related enhancements coming in PowerShell v5 is the ability to author resources using a new class definition language.  There will no longer need to be a schema.mof file separate from the PowerShell code, as the class definitions will contain enough information for DSC to do everything it needs to do.  Once that new version is released and deployed, you’ll have a much better experience by using this new resource definition language.  At that time, we plan to add new commands to DscDevelopment or cDscResourceDesigner which will allow you to automatically convert resource modules between v4 and v5 syntax, in either direction.

PowerShell DSC tooling updates released!

Over the past few weeks, we’ve been working on incorporating Desired State Configuration into our continuous deployment pipeline.  We’ll be using this technology internally, eating the dog food, but we also want to be able to help our clients leverage this technology.

To get started, we decided to take advantage of the excellent work and experience of Steven Murawski.  During his time at Stack Exchange, Steven was one of the first and most visible adopters of DSC in a production environment.  Over time, he developed a set of PowerShell modules related to DSC, which have been published as open source over at

We’ve been working on some updates to this code and are now pleased to announce that it’s publicly available via GitHub at

A list of the changes that Dave Wyatt has been working on within DevOpsGuys over the past couple of months can be found at

Many of the most extensive changes and design decisions were made in collaboration with Steve Murawski. There’s still more work to be done before this branch is ready to be merged into master; we just wanted to get this code out into the public repo asap. The motivation behind these changes was to get the tooling modules to a point where we can train clients to use them in a continuous delivery pipeline.

The biggest priorities were creating examples of the folder structures required by DscConfiguration and DscBuild, and simplifying the Resolve-DscConfigurationProperty API and code.

We’ve also tried to improve the overall user experience by making minor changes in other places; making Invoke-DscBuild run faster, making failed Pester tests or failed calls to Test-cDscResource abort the build, making Test-cDscResource stop reporting failures for resource modules that are actually valid, etc.

We’d love your feedback on the changes we are making, so please get in touch with your comments.

DSC Tooling: The DscConfiguration module

The DscConfiguration module is one of the tooling modules originally written by Steven Murawski for Stack Exchange.  It has since been released as open source, and the main repository for the code is hosted by

The DscConfiguration module provides us with two main pieces of functionality.  First, it allows us to store the ConfigurationData hashtable in several smaller, more manageable data files.  This is a vast improvement over the experience you would have if you had to maintain the entire ConfigurationData database in a large, single file.  Once you scale out beyond a few machines, you quickly wind up with a file that is thousands of lines long.  It can be easy to overlook errors in such a large file.
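As a rough sketch of the idea (in Python rather than PowerShell, with hypothetical file contents and keys), assembling one ConfigurationData-style structure from several small fragments might look like this; the actual DscConfiguration module reads PowerShell data files and has its own folder conventions:

```python
# Sketch: building one large configuration-data structure from several
# smaller, more manageable fragments. Keys and node names are hypothetical.
def merge_fragments(*fragments):
    """Combine fragment dictionaries into a single ConfigurationData-style table."""
    merged = {"AllNodes": [], "SiteData": {}, "Services": {}}
    for frag in fragments:
        merged["AllNodes"].extend(frag.get("AllNodes", []))
        merged["SiteData"].update(frag.get("SiteData", {}))
        merged["Services"].update(frag.get("Services", {}))
    return merged

# Each of these would live in its own small, reviewable file.
web_nodes = {"AllNodes": [{"NodeName": "web01", "Site": "London"}]}
sql_nodes = {"AllNodes": [{"NodeName": "sql01", "Site": "Cardiff"}]}
sites = {"SiteData": {"London": {"PullServer": "pull-lon"}}}

data = merge_fragments(web_nodes, sql_nodes, sites)
print(len(data["AllNodes"]))  # 2
```

Keeping each fragment small means a bad edit shows up in a short diff, instead of being buried in a single file that is thousands of lines long.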

The other major piece of functionality provided by the DscConfiguration module is the introduction of the concept of Sites, Services, and Global settings, in addition to Nodes.  Instead of needing to define every setting directly on each Node – which could introduce a large amount of duplication throughout the ConfigurationData table – you can define settings that are inherited by all nodes, by nodes in a specific site, or by arbitrary groups of similar nodes (which are referred to as Services in the DscConfiguration module.)
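A minimal Python sketch of that precedence idea follows: the most specific setting wins, checking the Node first, then its Services, then its Site, then the Global defaults. The data layout and names here are illustrative, not the module’s actual implementation (which is Resolve-DscConfigurationProperty, in PowerShell):

```python
# Sketch of Node > Service > Site > Global precedence for resolving a setting.
# All keys and values here are hypothetical.
def resolve_property(config, node, prop):
    """Return the most specific value defined for this node and property."""
    if prop in node.get("Properties", {}):          # node-level override
        return node["Properties"][prop]
    for svc in node.get("Services", []):            # then service-level
        svc_props = config["Services"].get(svc, {})
        if prop in svc_props:
            return svc_props[prop]
    site_props = config["SiteData"].get(node.get("Site"), {})
    if prop in site_props:                          # then site-level
        return site_props[prop]
    return config["GlobalData"][prop]               # finally the global default

config = {
    "Services": {"WebFarm": {"AppPoolCount": 4}},
    "SiteData": {"London": {"TimeZone": "GMT"}},
    "GlobalData": {"TimeZone": "UTC", "AppPoolCount": 2},
}
node = {"NodeName": "web01", "Site": "London", "Services": ["WebFarm"]}

print(resolve_property(config, node, "AppPoolCount"))  # 4 (service override)
print(resolve_property(config, node, "TimeZone"))      # GMT (site override)
```

Defining a setting once at the Global or Site level, and overriding it only where needed, is what removes the duplication described above.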

When we began working with the Stack Exchange modules, the DscConfiguration module’s code had the most complexity.  As Steven described it, the module had grown organically over time, in response to changing needs within Stack Exchange.  We have collaborated with Steven to simplify the code in this module, making it easier to understand and document.

Here are some of the changes we’ve made to the module at DevOpsGuys:

We moved the Resolve-DscConfigurationProperty function from the DscBuild module to the DscConfiguration module.

It makes more sense for it to live here, with all of the rest of the code that has explicit knowledge of how the ConfigurationData hashtable has been structured.  We speculated that this function was originally in the DscBuild module due to a block of code which would look up the value of the $ConfigurationData variable if it were not explicitly passed into the function.  Refer to this gist for the original code and our updates.

Presumably, the intention of that code was to allow you to call Resolve-DscConfigurationProperty from a configuration script without having to bother to explicitly pass along a value to the -ConfigurationData parameter on every call.  The problem with the original implementation is that it used the Get-Variable cmdlet, which will only resolve variables in the same script module as its caller.  If you were using Invoke-DscBuild, you wouldn’t notice the difference, because that command also contains a $ConfigurationData variable that would eventually be resolved.  If you tried to call a configuration from outside of Invoke-DscBuild, though, the calls to Resolve-DscConfigurationProperty would fail (unless you passed in -ConfigurationData explicitly.)

By converting this block to use $PSCmdlet.GetVariableValue() instead, the function can now be called from a configuration even if you’re not using the Invoke-DscBuild command from the DscBuild module, and without explicitly using the -ConfigurationData parameter.

We took out the concept of “Applications”, replacing it with the ability to define a hierarchy of properties.

In the original Resolve-DscConfigurationProperty function, there were two ways you could use it:  you could pass in a -PropertyName string, or an -Application string.  If you used -Application, the function would look up values in a slightly different place, and would return a hashtable containing whatever values were needed to install a particular application (Name, ProductId, SourcePath, Installer, etc.)  The code itself was quite complex, due to having checks for whether $PropertyName or $Application was being used in virtually every helper function.

We speculated that the reason for this Application parameter’s existence was because Stack Exchange needed a way to return a container for other properties, instead of just a single flat PropertyName.  In order to simplify the code and make it more flexible for users, we extended this concept to all properties.  You can now pass in a value for PropertyName which looks like a file system path, and it will resolve those values by looking through nested hashtables.  Refer to this gist for examples of how this looks for the caller, before and after our changes.  One advantage of this new approach is that you can override individual values of the nested hashtables without duplication; with the old Application implementation, you would have to repeat the entire table even if you only wanted to override one key / value pair.
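A rough Python sketch of the path-style lookup described above (hypothetical data; the real implementation walks the ConfigurationData hashtable in PowerShell):

```python
# Sketch: a PropertyName that looks like a file system path, e.g.
# "Applications/MyApp/ProductId", resolved by walking nested dictionaries.
# The application data below is hypothetical.
def resolve_path(table, property_path):
    """Walk nested dictionaries using a slash-separated property path."""
    value = table
    for part in property_path.split("/"):
        value = value[part]
    return value

settings = {
    "Applications": {
        "MyApp": {
            "Name": "MyApp",
            "ProductId": "{1234}",
            "SourcePath": r"\\share\MyApp",
        },
    },
}

print(resolve_path(settings, "Applications/MyApp/ProductId"))  # {1234}
```

Because each leaf is addressed individually, a node or site can override just "Applications/MyApp/SourcePath" without repeating the whole application table, which is the advantage over the old -Application parameter.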