Category Archives: Continuous Delivery

#DevOps = #ContinuousDelivery + #ContinuousObsolescence?

During my talk at QCon last week we started an interesting debate about whether DevOps is a subset of Continuous Delivery, or vice versa.

We were lucky enough to have Dave Farley in the audience, and it was clear from his definitions of DevOps versus Continuous Delivery that he felt CD was the superset and DevOps a component of that set… but he admitted that if he were writing the Continuous Delivery book again in 2015 he would change the definition of “finished” to embrace the full product lifecycle, from inception to retirement.

This aligns nicely with his co-author Jez Humble’s thoughts on being “Product-Centric” that we discussed in an earlier blog post.

“Finished” should mean “no longer delivering value to the client and hence retired from Production”.

[Image: Dave Farley’s definition of DevOps]

[Image: Dave Farley on Continuous Delivery]

That was when we coined a new term, live on stage (as captured by Chris Massey’s tweet below) – “Continuous Obsolescence”.

[Embedded tweet from Chris Massey]

So what is #ContinuousObsolescence?

Continuous Obsolescence is the process of continuously grooming your product/feature portfolio and asking yourself, “is this product/feature still delivering value to the customer?” If it isn’t, then it’s “muda” (waste): it’s costing you money and should be retired.

Every product/feature that’s not delivering value costs you through:

  • Wasted resources – every build, every release, you’re compiling and releasing code that doesn’t need to be shipped, and consuming server resources once deployed.
  • Overhead of regression testing – it still needs to be tested with every release.
  • Unnecessary complexity – every product/feature you don’t need just adds complexity to your environment which complicates both the software development process and the production support process. Complex systems are harder to change, harder to debug and harder to support. If it’s not needed, simplify things and get rid of it!

If you’re doing DevOps and you’re a believer in the CALMS model (Culture, Automation, Lean, Measurement, Sharing), then you should be practising “Continuous Obsolescence”… in fact, I’d argue that DevOps could even be defined as “#DevOps = #ContinuousDelivery + #ContinuousObsolescence” – what do you think?

Send us your comments or join the debate on Twitter @DevOpsguys

-TheOpsMgr

Leave your Technical Debt behind this New Year

Or: How DevOps can put you back in the black for 2015

[Image courtesy of Paul Downey]

When you’re running a business there are so many things to focus on that it can be easier than you think to fall into technical debt. If IT is not your speciality, or your business isn’t set up to deal with the perpetual developments of cloud computing, patching things together hurriedly or using inexpert coding can seem like a quick fix that you can come back to and do properly in the future; in effect, you’re borrowing time.

Unfortunately, errors often beget errors, piling ‘interest’ onto your technical debt until your business may find it impossible to get out of it in the long run. It’s a serious problem that can leave you wasting time and money, repeatedly solving problems caused by early errors.

With continuous development expert Alex Yates joining us for the first DevOps Meetup of the year on January 7th, we took a look at his blog to see how a DevOps approach can help you tackle your software and IT problems before they become serious, unsolvable issues.

Here are some of Alex’s tips to prevent your technical debt from reaching critical mass:

  • Source Control to better manage changes. Thankfully this is pretty much a given for most people now.
  • Thorough automated testing. Adopting Continuous Integration and/or Continuous Delivery.
  • Better teamwork/communication across IT teams. Adopting a more DevOps approach.
  • Better communication with business managers to help them understand the consequences of shipping features too fast without time to focus on quality. The business guys will want you to ship quickly now, but they’ll also want you to be agile later in the year, right?
  • Keeping track of technical debt visually, monitoring it and paying it back (refactoring) when possible.

Using actual case studies in Wednesday’s talk, Alex will share his expertise on the changes in organisational structure, process and technology which are necessary to arrive at a nimble, fast, automatable and continuous database deployment process.

The session will tackle how to customise common practices and toolsets to build a database deployment pipeline unique to your environment, speeding up your own database delivery while still protecting your organisation’s most valuable asset: its data.

At DevOpsGuys we specialise in continuous delivery, application management and development processes that allow your business to function at its greatest capacity; we manage your website so you can focus on your business.

To understand how DevOpsGuys can help you get control of your technical debt in 2015, read more on our website or say hello on Twitter @DevOpsGuys.

Happy New Year!

DSC Tooling: cDscResourceDesigner and DscDevelopment

In PowerShell v4 – the first release of Desired State Configuration – the experience of authoring DSC resource modules isn’t terribly pleasant.  You have to produce both a PowerShell module and a schema.mof file, both of which have to follow certain rules, and agree with each other about the parameters that can be passed to the module’s commands.  If you get it wrong, your resource won’t work.
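
As a minimal sketch of the two halves that have to agree with each other – the resource name and properties below are hypothetical, and a real resource also needs Get-TargetResource and Set-TargetResource:

```powershell
# The schema.mof half of the pair declares the resource's properties in the
# MOF language, e.g. (names here are hypothetical):
#
#   [ClassVersion("1.0.0"), FriendlyName("cExampleFile")]
#   class DOG_cExampleFile : OMI_BaseResource
#   {
#       [Key] String Path;
#       [Write] String Contents;
#   };
#
# The PSM1 half must expose commands whose parameters line up with that
# schema exactly; Test-TargetResource is one of the three required commands.
function Test-TargetResource
{
    [CmdletBinding()]
    [OutputType([bool])]
    param
    (
        [Parameter(Mandatory)]
        [string] $Path,

        [string] $Contents
    )

    # Desired state: the file exists and holds the expected contents.
    return ((Test-Path -LiteralPath $Path) -and
            ((Get-Content -LiteralPath $Path -Raw) -eq $Contents))
}
```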

One of the extra DSC-related goodies Microsoft has released is an experimental module called xDscResourceDesigner.  It contains several functions that help to take the pain out of this authoring experience.  For example, you can run a Test command on a resource to make sure that its PSM1 file and schema.mof are both legal and in sync.  DscDevelopment, another of Steve Murawski’s modules from Stack Exchange, takes this concept a step further, and allows you to do things like automatically generate a schema.mof file from a PSM1 file.

cDscResourceDesigner is a community-modified version of the original xDscResourceDesigner module.  Most of our focus, so far, has been on the Test-cDscResource function, which is called from Invoke-DscBuild and becomes part of our continuous delivery pipeline.
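
As a rough illustration of that wiring – the path is hypothetical, and we're assuming the command keeps the original xDscResourceDesigner convention of taking a resource name or path via -Name:

```powershell
Import-Module cDscResourceDesigner

# Hypothetical path to a resource module under development.
$resourcePath = 'C:\Source\MyModule\DSCResources\cExampleFile'

# Treat a failed resource check as a failed build.
if (-not (Test-cDscResource -Name $resourcePath))
{
    throw "DSC resource at '$resourcePath' failed validation; aborting the build."
}
```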

Microsoft’s experimental version of this Test command was pretty good, but it would report failures on some modules and schema files that are perfectly valid, as far as WMI and the Local Configuration Manager are concerned.  In fact, several of Microsoft’s own DSC resources (both experimental and those included in PowerShell 4.0) would fail the tests of the original version of this command.  We’re working on identifying and fixing these types of false failures, so the Test-cDscResource command can be trusted as a reliable reason to fail a build.

We’ve also got an eye on the future.  One of the big DSC-related enhancements coming in PowerShell v5 is the ability to author resources using a new class definition language.  There will no longer need to be a schema.mof file separate from the PowerShell code, as the class definitions will contain enough information for DSC to do everything it needs to do.  Once that new version is released and deployed, you’ll have a much better experience by using this new resource definition language.  At that time, we plan to add new commands to DscDevelopment or cDscResourceDesigner which will allow you to automatically convert resource modules between v4 and v5 syntax, in either direction.
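
Based on what Microsoft has previewed of the v5 syntax so far – which may still change before release – a class-based resource looks roughly like this, with no separate schema.mof:

```powershell
[DscResource()]
class cExampleFile
{
    # [DscProperty(Key)] replaces the [Key] qualifier from the schema.mof.
    [DscProperty(Key)]
    [string] $Path

    [DscProperty()]
    [string] $Contents

    # Get/Set/Test methods replace the *-TargetResource functions.
    [cExampleFile] Get()
    {
        $this.Contents = Get-Content -LiteralPath $this.Path -Raw
        return $this
    }

    [void] Set()
    {
        Set-Content -LiteralPath $this.Path -Value $this.Contents
    }

    [bool] Test()
    {
        return ((Test-Path -LiteralPath $this.Path) -and
                ((Get-Content -LiteralPath $this.Path -Raw) -eq $this.Contents))
    }
}
```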

PowerShell DSC tooling updates released!

Over the past few weeks, we’ve been working on incorporating Desired State Configuration into our continuous deployment pipeline.  We’ll be using this technology internally, eating the dog food, but we also want to be able to help our clients leverage this technology.

To get started, we decided to take advantage of the excellent work and experience of Steven Murawski.  During his time at Stack Exchange, Steven was one of the first and most visible adopters of DSC in a production environment.  Over time, he developed a set of PowerShell modules related to DSC, which have been published as open source over at https://github.com/PowerShellOrg/DSC.

We’ve been working on some updates to this code and are now pleased to announce that it’s publicly available via GitHub at https://github.com/devopsguys/DSC/tree/development

A list of the changes that Dave Wyatt has been working on within DevOpsGuys over the past couple of months can be found at https://github.com/PowerShellOrg/DSC/pull/80

The most extensive changes, and many of the design decisions, were made in collaboration with Steve Murawski. There’s still more work to be done before this branch is ready to be merged into master; we just wanted to get this code out into the public repo as soon as possible. The motivation behind these changes was to get the tooling modules to a point where we can train clients to use them in a continuous delivery pipeline.

The biggest priorities were creating examples of the folder structures required by DscConfiguration and DscBuild, and simplifying the Resolve-DscConfigurationProperty API and code.
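
To give a feel for what those examples cover, here's a hypothetical source layout of the kind the tooling expects – the exact folder names are defined by the examples in the repo, so treat these as illustrative:

```powershell
# DSC\
#   Configurations\        # configuration scripts compiled by Invoke-DscBuild
#   ConfigurationData\
#     AllNodes\            # one .psd1 data file per node
#     SiteData\            # settings shared by every node in a site
#     Services\            # settings for arbitrary groups of related nodes
#   Resources\             # DSC resource modules to test, zip and publish
```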

We’ve also tried to improve the overall user experience by making minor changes in other places: making Invoke-DscBuild run faster, making failed Pester tests or failed calls to Test-cDscResource abort the build, making Test-cDscResource stop reporting failures for resource modules that are actually valid, and so on.

We’d love your feedback on the changes we are making, so please get in touch with your comments.

DSC Tooling: The DscConfiguration module

The DscConfiguration module is one of the tooling modules originally written by Steven Murawski for Stack Exchange.  It has since been released as open source, and the main repository for the code is hosted by PowerShell.org.

The DscConfiguration module provides us with two main pieces of functionality.  First, it allows us to store the ConfigurationData hashtable in several smaller, more manageable data files.  This is a vast improvement over the experience you would have if you had to maintain the entire ConfigurationData hashtable in a single large file.  Once you scale out beyond a few machines, you quickly wind up with a file that is thousands of lines long.  It can be easy to overlook errors in such a large file.
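
For example, a per-node data file might be a small .psd1 like this – the file name, keys and values are hypothetical, apart from NodeName, which DSC requires:

```powershell
# Hypothetical contents of ConfigurationData\AllNodes\WEB01.psd1
@{
    NodeName = 'WEB01'
    Location = 'London'            # the site this node belongs to
    Services = @('WebFrontEnd')    # arbitrary groupings of similar nodes
}
```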

The other major piece of functionality provided by the DscConfiguration module is the introduction of the concept of Sites, Services, and Global settings, in addition to Nodes.  Instead of needing to define every setting directly on each Node – which could introduce a large amount of duplication throughout the ConfigurationData table – you can define settings that are inherited by all nodes, by nodes in a specific site, or by arbitrary groups of similar nodes (which are referred to as Services in the DscConfiguration module.)
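
Here's a sketch of how that inheritance plays out when resolving a property; the shape of the site and global sections, and the parameter names on Resolve-DscConfigurationProperty (covered further below), are illustrative rather than the module's exact contract:

```powershell
$ConfigurationData = @{
    AllNodes = @( @{ NodeName = 'WEB01'; Location = 'London' } )
    SiteData = @{
        London = @{ NtpServer = 'ntp-lon.example.com' }   # site-level override
    }
    NtpServer = 'ntp.example.com'                         # global fallback
}

# WEB01 is in the London site, so this resolves to 'ntp-lon.example.com';
# a node in any other site would inherit the global value instead.
Resolve-DscConfigurationProperty -Node $ConfigurationData.AllNodes[0] -PropertyName 'NtpServer' -ConfigurationData $ConfigurationData
```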

When we began working with the Stack Exchange modules, the DscConfiguration module’s code had the most complexity.  As Steven described it, the module had grown organically over time, in response to changing needs within Stack Exchange.  We have collaborated with Steven to simplify the code in this module, making it easier to understand and, in some places, to document.

Here are some of the changes we’ve made to the module at DevOpsGuys:


We moved the Resolve-DscConfigurationProperty function from the DscBuild module to the DscConfiguration module.

It makes more sense for it to live here, with all of the rest of the code that has explicit knowledge of how the ConfigurationData hashtable has been structured.  We speculated that this function was originally in the DscBuild module due to a block of code which would look up the value of the $ConfigurationData variable if it were not explicitly passed into the function.  Refer to this gist for the original code and our updates.

Presumably, the intention of that code was to allow you to call Resolve-DscConfigurationProperty from a configuration script without having to bother to explicitly pass along a value to the -ConfigurationData parameter on every call.  The problem with the original implementation is that it used the Get-Variable cmdlet, which will only resolve variables in the same script module as its caller.  If you were using Invoke-DscBuild, you wouldn’t notice the difference, because that command also contains a $ConfigurationData variable that would eventually be resolved.  If you tried to call a configuration from outside of Invoke-DscBuild, though, the calls to Resolve-DscConfigurationProperty would fail (unless you passed in -ConfigurationData explicitly.)

By converting this block to use $PSCmdlet.GetVariableValue() instead, the function can now be called from a configuration even if you’re not using the Invoke-DscBuild command from the DscBuild module, and without explicitly using the -ConfigurationData parameter.
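
The gist linked above has the real diff; as an illustration, the change boils down to something like this:

```powershell
# Before: Get-Variable resolves names relative to this module's own scope,
# so a $ConfigurationData defined in the caller's script is invisible here.
if ($null -eq $ConfigurationData)
{
    $ConfigurationData = Get-Variable -Name ConfigurationData -ValueOnly -ErrorAction SilentlyContinue
}

# After: $PSCmdlet.GetVariableValue() resolves the name from the caller's
# scope, so a configuration called outside Invoke-DscBuild works without
# passing -ConfigurationData explicitly.
if ($null -eq $ConfigurationData)
{
    $ConfigurationData = $PSCmdlet.GetVariableValue('ConfigurationData')
}
```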


We took out the concept of “Applications”, replacing it with the ability to define a hierarchy of properties.

In the original Resolve-DscConfigurationProperty function, there were two ways you could use it:  you could pass in a -PropertyName string, or an -Application string.  If you used -Application, the function would look up values in a slightly different place, and would return a hashtable containing whatever values were needed to install a particular application (Name, ProductId, SourcePath, Installer, etc.)  The code itself was quite complex, due to having checks for whether $PropertyName or $Application was being used in virtually every helper function.

We speculated that the reason for this Application parameter’s existence was because Stack Exchange needed a way to return a container for other properties, instead of just a single flat PropertyName.  In order to simplify the code and make it more flexible for users, we extended this concept to all properties.  You can now pass in a value for PropertyName which looks like a file system path, and it will resolve those values by looking through nested hashtables.  Refer to this gist for examples of how this looks for the caller, before and after our changes.  One advantage of this new approach is that you can override individual values of the nested hashtables without duplication; with the old Application implementation, you would have to repeat the entire table even if you only wanted to override one key / value pair.
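
A hypothetical example of the path-style lookup (the data shape and values here are illustrative):

```powershell
$ConfigurationData = @{
    AllNodes     = @( @{ NodeName = 'WEB01' } )
    Applications = @{
        Git = @{
            Name       = 'Git'
            ProductId  = '00000000-0000-0000-0000-000000000000'   # placeholder
            SourcePath = '\\deploy\installers\git.msi'
        }
    }
}

# The path walks the nested hashtables, so a node or site can override just
# SourcePath without repeating the rest of the Git table.
Resolve-DscConfigurationProperty -Node $ConfigurationData.AllNodes[0] -PropertyName 'Applications/Git/SourcePath' -ConfigurationData $ConfigurationData
```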

DSC Tooling: The DscBuild module

The DscBuild module contains the Invoke-DscBuild function.  This is essentially a complete implementation of a DSC continuous delivery pipeline which takes care of producing all of the artifacts which need to be published to your pull servers:  MOF documents, zip files for resource modules, and checksums for both.  It also uses Pester to run any unit tests on your resource modules, and runs the Test-cDscResource command from the cDscResourceDesigner module on them as well.  Failures in either of these two types of tests will abort the build.

It’s not strictly necessary to use Invoke-DscBuild; you could reproduce this process in a product like TeamCity using individual steps for each part of the process: running unit tests, compiling configurations, etc.  The Invoke-DscBuild function can serve as a template for setting up your own continuous delivery pipeline, or you can use it as-is.
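
If you did break it into individual steps, the core of that pipeline might look something like this; the configuration name and paths are hypothetical, and Invoke-DscBuild wraps these steps plus the module zipping and publishing:

```powershell
# 1. Run unit tests; -EnableExit makes Pester return a non-zero exit code on
#    failure, which CI servers such as TeamCity treat as a failed build.
Invoke-Pester -Path .\Tests -EnableExit

# 2. Dot-source a configuration script and compile it into per-node MOFs.
. .\Configurations\WebServer.ps1
WebServer -ConfigurationData $configData -OutputPath .\BuildOutput

# 3. Generate the checksum files that pull server clients use to verify
#    their downloads.
New-DscCheckSum -ConfigurationPath .\BuildOutput
```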

We have made some small changes to the Invoke-DscBuild process to improve its performance and functionality.  Calls to Test-cDscResource were tweaked slightly to cut down on the number of calls to Get-DscResource (which is an extremely slow command, for whatever reason), and we modified the process so it only tests and rebuilds module zip files if the version number of the module has changed from what is already built on disk.  When we first reviewed the module, failed unit tests were not causing the build to abort; this has been corrected as well.

On the whole, we found the DscBuild module to be a great starting point for producing a DSC-based CD solution.

Desired State Configuration Basics

Before delving into the details of the tooling modules, one should have a basic understanding of the components of DSC.  This post consists of definitions of several key terms:

  • Configuration – Executable code which produces zero or more MOF documents that can be later processed by the Local Configuration Manager (or an equivalent non-Windows agent) on a server.  A configuration is similar to a PowerShell function, but it has different common parameters and different automatic variables.  There are also keywords available that can’t be used outside of a configuration; these are Node and Import-DscResource.
  • ConfigurationData – As mentioned, configurations have different common parameters than normal PowerShell functions.  One of these parameters is -ConfigurationData, which is a means to pass a complex set of parameters to the configuration.  This parameter accepts a special hashtable which must conform to certain minimum requirements:  it must contain a key named AllNodes, and the value assigned to the AllNodes key must be an array which contains only other hashtables.  The hashtables inside the AllNodes array must each contain a key called NodeName.  Aside from AllNodes, Microsoft has stated that any other keys in the ConfigurationData table which begin with the prefix PS should be considered taboo; if they extend the required parameters at some point, the new keys will begin with PS.  (See the minimal sketch after this list for an example.)
  • Resource – DSC resources contain the executable code necessary to implement the actions described by a MOF document.  For Windows endpoints, DSC resources are small PowerShell modules which expose three commands, at minimum, named Get-TargetResource, Set-TargetResource, and Test-TargetResource.  The parameters passed to these commands are populated by the values in the MOF document.  DSC resources must also contain a corresponding schema.mof file which describes the parameters accepted by these PowerShell commands, in the MOF language.
  • Local Configuration Manager (LCM) – This is the Desired State Configuration agent which runs on every endpoint.  It’s part of Windows Management Framework 4.0.  The LCM is responsible for reading the MOF documents which describe the desired configuration of the local computer, loading and executing the resources associated with that configuration, and interacting with the pull server and compliance server to do things like download new MOF documents and resource modules, and report status.
  • MOF document – MOF stands for Managed Object Format; it’s a standard format for plain text documents which are used to define classes and objects in CIM (Common Information Model.)  By having the MOF document layer, Microsoft has decoupled the PowerShell configuration language from the Local Configuration Manager.  This means that it’s possible to produce MOF documents from toolsets other than PowerShell, such as Chef.  It’s also possible to use PowerShell configurations to manage non-Windows devices, provided that an agent exists on the non-Windows platform which will read and process the MOF documents (such as OMI on Linux.)
  • Pull Server – An OData-based web service which serves up MOF documents and resource modules to endpoints.  Resource modules are hosted in the form of zip files.  Both resource module zip files and MOF documents also have a corresponding checksum file, which allows the client to make sure they have a complete download.
  • Compliance Server – Another OData-based web service which accepts status reports from endpoints, and stores that data in a database which can be reported on later.
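
To tie the first two definitions together, here's a minimal, self-contained sketch; the node names and the file settings are hypothetical:

```powershell
Configuration ExampleConfig
{
    # Import-DscResource is one of the keywords only valid inside a configuration.
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # One Node block, expanded once for every entry in AllNodes.
    Node $AllNodes.NodeName
    {
        File TempDir
        {
            DestinationPath = 'C:\Temp'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# The minimum legal ConfigurationData shape: an AllNodes key holding an array
# of hashtables, each with a NodeName.
$configData = @{
    AllNodes = @(
        @{ NodeName = 'WEB01' }
        @{ NodeName = 'WEB02' }
    )
}

# Compiling emits one MOF document per node into .\ExampleConfig.
ExampleConfig -ConfigurationData $configData -OutputPath .\ExampleConfig
```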

This is very much a bird’s-eye view of the DSC technology, without going into any great detail.  However, these definitions are enough for us to discuss the roles filled by each of the DSC tooling modules.