What does #DevOps mean to the roles of Change & Release managers?
One of our team raised this question in our internal #DevOps Slack channel this week and it sparked off an interesting discussion that we thought was worth sharing with a wider audience.
Firstly, let’s start with one of my favourite definitions of DevOps:
Now that’s a bit tongue-in-cheek, obviously; the scope of DevOps in a CALMS-model world is probably wider than just that, but it’s not a bad way to start explaining it to someone from a long-term ITIL background.
They (should) know that a Standard Change is:
“Standard Changes are pre-approved changes that are considered relatively low risk, are performed frequently, and follow a documented (and Change Management approved) process.
Think standard, as in, ‘done according to the approved, standard processes.’”
If we break that down into the 3 main parts we get:
- “relatively low risk” – DevOps reduces the risks via automation, test-driven development (of application AND infrastructure code), rapid detection of issues via enhanced monitoring and robust rollback.
- “are performed frequently” – this is a key tenet of DevOps… if it’s painful, do it more often, learn to do it better and stop trying to hide the pain away!
- “follow a documented (and Change Management approved) process” – DevOps is about building robust digital supply chains. This is your highly automated, end-to-end process for software development, testing, deployment and support, and as part of that we are building in the necessary checks and balances required for compliance with change management processes. It’s just that instead of having reams of documentation, the automated workflow *IS* the documented process.
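To make that last point concrete, here is a minimal sketch of a pipeline whose ordered gates double as the documented change process, with the execution log serving as the change record. All names (`Stage`, `Pipeline`, the gate names and the `change` fields) are hypothetical illustrations, not a real CI system's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Stage:
    name: str
    gate: Callable[[Dict], bool]  # returns True when the compliance gate passes


@dataclass
class Pipeline:
    stages: List[Stage]
    audit_log: List[str] = field(default_factory=list)

    def execute(self, change: Dict) -> bool:
        # The ordered gates *are* the documented process;
        # the log they produce is the change record.
        for stage in self.stages:
            passed = stage.gate(change)
            self.audit_log.append(f"{stage.name}: {'PASS' if passed else 'FAIL'}")
            if not passed:
                return False  # a failed gate stops the line
        return True


# The "documented process" is just the ordered list of gates:
pipeline = Pipeline(stages=[
    Stage("unit-tests",       lambda c: c.get("tests_passed", False)),
    Stage("regression-suite", lambda c: c.get("regression_passed", False)),
    Stage("change-approval",  lambda c: c.get("approver") is not None),
])

ok = pipeline.execute(
    {"tests_passed": True, "regression_passed": True, "approver": "alice"}
)
```

A change only reaches deployment if every gate passes, and the log shows exactly which checks ran and in what order, which is what an auditor would previously have looked for in the paperwork.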
This starts to point us in the direction of how the role of the Change & Release Manager will change over time.
In the same way that the role of IT Operations will become more like Process Engineering in a factory, the role of Change & Release Management will start to focus more on the digital supply chain & software production line rather than trying to inspect every single package that flows along that pipeline.
In this way they become more like quality control engineers: their job is to work with the process engineers to design the pipeline in such a way that what comes out the other end meets the quality, risk and compliance criteria to be considered a “Standard Change”.
They also need to move towards a more sampling-based, audit-style approach to ensuring that people aren’t skipping steps or gaming the system.
For example, if the approved process says that all changes must pass automated regression testing, they might periodically pick one release and review whether the regression suite was actually testing meaningful cases and hadn’t been replaced with a `return true;` because no one could be bothered to keep it up to date.
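That kind of spot check can itself be lightweight code. Here is a hedged sketch of auditing one sampled release's regression report; the report fields (`cases_run`, `assertions_made`) and the thresholds are hypothetical assumptions, not a standard format:

```python
import random


def audit_regression_run(report: dict, min_cases: int = 20) -> list:
    """Return audit findings for one sampled release; an empty list means it passed."""
    findings = []
    if report.get("cases_run", 0) < min_cases:
        findings.append(f"only {report.get('cases_run', 0)} cases run")
    if report.get("assertions_made", 0) == 0:
        findings.append("zero assertions made - suite may be a 'return true' stub")
    return findings


# Recent releases and their (hypothetical) regression reports:
reports = {
    "2024.12.1": {"cases_run": 248, "assertions_made": 1903},
    "2024.12.2": {"cases_run": 1, "assertions_made": 0},
}

# Sampling, not inspecting everything: pick one release at random and audit it.
sampled = random.choice(sorted(reports))
findings = audit_regression_run(reports[sampled])
```

A suite that runs one case and makes zero assertions gets flagged for a human to look at; the point is that the auditor samples rather than inspecting every release.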
Similarly, if the process mandates “separation of duties”, they could check to see that the person who initiates a change (via the pipeline) isn’t also in the role required to approve the Jira ticket that Jenkins checks to see if that release can be deployed to Live.
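A separation-of-duties spot check reduces to comparing two fields on each change record. The sketch below assumes hypothetical field names (`pipeline_initiator`, `ticket_approver`) rather than any real Jira or Jenkins schema:

```python
def separation_of_duties_ok(change: dict) -> bool:
    """True only when initiator and approver are distinct, known people."""
    initiator = change.get("pipeline_initiator")
    approver = change.get("ticket_approver")
    return bool(initiator) and bool(approver) and initiator != approver


# Audit a sample of recent changes and flag any where one person did both jobs:
sample = [
    {"id": "CHG-101", "pipeline_initiator": "bob", "ticket_approver": "alice"},
    {"id": "CHG-102", "pipeline_initiator": "bob", "ticket_approver": "bob"},
]
violations = [c["id"] for c in sample if not separation_of_duties_ok(c)]
# violations == ["CHG-102"]
```

Anything flagged goes back to the Change Manager as an audit finding rather than blocking the pipeline itself: verify, don’t gatekeep.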
The overall goal here is to move towards a “High Trust Culture” but, keeping in mind the mantra of “Trust, but Verify”, they need to work with the DevOps teams to ensure the appropriate checks and balances are in place.
That, and never, ever invite me to a CAB meeting, ever again. :-)