Tag Archives: DevOps

DevOps Express – the Suite smell of Success?

Fascinating news in the DevOps World last week with the announcement of the DevOps Express initiative.

“DevOps Express is the first-of-its-kind alliance of DevOps industry leaders and includes founding members CloudBees and Sonatype, joined by Atlassian, BlazeMeter, CA Technologies, Chef, DevOps Institute, GitHub, Infostretch, JFrog, Puppet, Sauce Labs, SOASTA and SonarSource.”

Anyone who’s heard me speak at conferences knows I am not a huge fan of “DevOps-in-a-Box” Enterprise Suites, mostly because they lock you into one vendor’s vision of what DevOps means and what your organisation needs. Worse than that, they lock you into one homogeneous skill set for your team – everyone must have skills with VendorX’s uber-suite and you lose out on the creativity and benefits that come from having different viewpoints about different tools and where they might work best (heterogeneity).

That said, I am really quite excited about DevOps Express because it *IS* a set of best-of-breed, heterogeneous tools (many of which DevOpsGuys are already partnered with, e.g. Atlassian, Chef and Puppet).

In their words…

“DevOps Express is a solution-oriented approach designed to make it easier and more flexible for enterprises to adopt DevOps, using popular technologies spanning multiple solution components. DevOps Express provides a framework for industry partners to deliver reference architectures that are better integrated and better supported. Creating reliable and actionable reference architectures for organizations will accelerate DevOps adoption and minimize risk for organizations.”

The vendors saw that the market was already using their tools together as part of its digital supply chains, so the question became “how can we drive adoption by making this easier for our joint customers, through reference architectures, better integration and so on?” – and that is the question the DevOps Express initiative seeks to answer.

This sort of bottom-up, “spot a problem and try to solve it” approach is a great example of DevOps Culture in action – the “First Way of DevOps” is “systems thinking”, so the vendors have taken a step back and said “hey, we need to stop optimising our individual components and start focusing on the system as a whole”, which is exactly what they should be doing in order to improve the DevOps toolchain.

Where should they go next?

Based on our experience with large Enterprises, their top three requirements are:

  1. Integrated authentication (SSO) and role-based access control (RBAC) – enterprises with huge IT teams (>1000 staff) want the granularity to enable only certain people, with certain skills or levels of authorisation, to perform certain tasks. If DevOps Express can create a shared model for this across all the vendor tools that would be amazing (see the sketch after this list). Perhaps Atlassian Crowd might be a place to start?
  2. Compliance and audit – large Enterprises have large regulatory burdens (e.g. SOX, HIPAA etc). Regardless of our views of the underlying McGregor Theory X Command & Control mindset that underpins a lot of this thinking, it’s the law and they have to comply. So mechanisms like standardised log formats, centralised logging, ELK-style log querying and analysis etc. are all very helpful.
  3. “One throat to choke” – Enterprise customers want Enterprise-grade support, 24x7x365 with a <1hr MTTR, and they definitely DON’T want to get dragged into a multi-vendor finger-pointing exercise with each vendor blaming the others for the problem. DevOps Express recognises this explicitly in its charter, but it remains to be seen how well this works in practice.
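
To make the RBAC point a bit more concrete, here’s a toy sketch (purely illustrative – it isn’t anything DevOps Express or any vendor has published) of what a shared role-to-permission model, consulted by every tool in the chain, might look like. The role names and permissions are made up.

```python
# Hypothetical shared role-to-permission model that every tool in the
# toolchain could consult before allowing an action.
ROLE_PERMISSIONS = {
    "developer":       {"commit_code", "trigger_build", "view_logs"},
    "release_manager": {"approve_release", "deploy_to_staging", "view_logs"},
    "ops_engineer":    {"deploy_to_production", "modify_infrastructure", "view_logs"},
}

def is_allowed(roles, permission):
    """Return True if any of the user's roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

# A release manager can approve a release but not deploy straight to production.
print(is_allowed(["release_manager"], "approve_release"))       # True
print(is_allowed(["release_manager"], "deploy_to_production"))  # False
```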

We’d also love to see some of our other partners get involved, e.g. Ansible, HashiCorp and Octopus Deploy, and of course experienced DevOps consultancies to help their joint customers on their DevOps journey (hint, hint, cough, cough…)🙂

-TheOpsMgr

#DevOps – the Intern Perspective Part 1

Greg Sharpe is a second-year student studying Computer Science at Aberystwyth University and has just started his year-long internship at DevOpsGuys. This is the first in a series of posts about his DevOps journey.

When I first started my internship I knew basically nothing about web automation or website deployment in general but, after being here only a few weeks, I have already begun deploying websites for several different projects in AWS.

I can honestly say that, without DevOpsGuys, there is no way I would have learnt not only the skills needed for this fast-paced career but also about the new, up-and-coming technologies involved in deploying these sites.

It’s a great opportunity for anyone to gain valuable experience, as I was taught everything I needed to know straight away and given the support needed to do so. Not only is it a great place to work, but the people here are always happy to help with whatever I’m working on, and I’m really looking forward to continuing to work with DevOpsGuys as I’m made to feel important and a big part of the team.

“…it was not until working at DevOpsGuys that I really understood how and why it was so important to use version control software”.

Some of the many technologies I’ve been working with are Terraform, Ansible and Git. Although I had used Git before, it was not until working at DevOpsGuys that I really understood how and why it was so important to use version control software, and after only a few weeks I can say I’ve become more and more confident with Git and can use it on a daily basis without problems.

As for Ansible and Terraform, I had no experience working with these tools prior to joining the team at DevOpsGuys and was a little daunted by the thought of using them. But, after some training sessions with some of the senior engineers, I soon started to understand how both these technologies worked and how to implement them.

I am now onto my third week at DevOpsGuys and have already started deploying the infrastructure for the new www.devopsguys.com site. I have found this not only a great way to gain knowledge about how Terraform and Ansible work but also an enjoyable one, mainly because everyone is so helpful and always available to answer any questions I have.

Within my first week, I researched a lot of different options within AWS EC2 and started by manually creating services, from Nginx to Jenkins servers.

I found that building these services manually gave me a great insight into how they work, and then, through the use of Ansible and Terraform, I was able to create the same services in minutes. At first this meant writing a lot of code, but when you only have to type in one command to create a complete website, I found that unbelievable.

“…At first this meant writing a lot of code but when you only have to type in one command to create a complete website, I found that unbelievable.”
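
Greg’s actual Terraform and Ansible code isn’t reproduced here, but as a rough, hypothetical illustration of the difference between clicking around a console and scripting your infrastructure, here’s a minimal Python/boto3 sketch that launches a single EC2 instance (the AMI ID, region and tags are placeholders, not values from the real project):

```python
# Minimal, illustrative boto3 sketch: launch one EC2 instance from a script
# instead of clicking through the AWS console. All values are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-00000000",    # placeholder AMI, not a real image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "nginx-demo"}],
    }],
)
print("Launched:", instances[0].id)
```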

I’m really looking forward to learning about all things DevOps and, with the opportunities that DevOpsGuys has to offer (training and talks), I’m sure I’ll be learning a lot more than I ever expected from an intern placement.

DevOpsGuys Launch Workshops & Training

DevOpsGuys, a leading provider of DevOps services for Enterprise clients, is pleased to announce the launch of its new DevOps Education platform, which provides DevOps, Continuous Delivery and Agile training courses and workshops, enabling organisations to build the knowledge they need to succeed with DevOps transformation.

As part of DevOpsGuys’ ongoing commitment to impart knowledge and expertise through the company’s training programmes, and to support organisations in developing internal capabilities themselves, DevOpsGuys has built a series of public and onsite training solutions where people of all levels can experience DevOps.

“Enterprise IT has a critical need to identify, select, learn and implement new platforms.  Being a digital enterprise is so critical to the future of every company that knowledge must be built and managed internally”, said James Betteley, Head of DevOps Education at DevOpsGuys.

DevOpsGuys courses are delivered through a combination of interactive workshops and practical, engineering-led classes, allowing people to experience DevOps processes and practices, get hands-on with Automation, or understand how to design software to facilitate Continuous Delivery.

“We’ve been listening to what people in our industry have been asking us for and are excited to launch our DevOps from the Trenches workshop”, announced Betteley. “This workshop is based on our DevOps experiences working with dozens of leading organisations across Europe – from small start-ups to large enterprises.”

The course can be booked online using EventBrite at http://devopsguys.eventbrite.com

DevOpsGuys offer a range of DevOps and Agile training courses and workshops, which can be found online at https://www.devopsguys.com/devops_training

About DevOpsGuys

DevOpsGuys deliver DevOps-as-a-Foundation to underpin digital transformation, achieving greater customer enablement and unlocking new revenue streams. Our solutions make enterprise organisations agile, scalable and responsive to rapidly changing business requirements.

Our platform is built on a foundation of education, and DevOpsGuys offer a series of DevOps, Continuous Delivery and Agile training courses and workshops, enabling you to build the knowledge you need to succeed.

Our courses are delivered through a combination of interactive workshops and practical, engineering-led classes, where people of all levels can experience DevOps, get hands-on with Automation, or understand how to design software to facilitate Continuous Delivery.

What does #DevOps mean to the roles of Change & Release managers?

One of our team raised this question in our internal #DevOps Slack channel this week and it sparked off an interesting discussion that we thought was worth sharing with a wider audience.

Firstly, let’s start with one of my favourite definitions of DevOps:

“DevOps is just ITIL with 90% of stuff moved to ‘Standard Change’ because we automated the crap out of it” – TheOpsMgr

Now that’s a bit tongue-in-cheek, obviously, as the scope of DevOps in a CALMS-model world is probably wider than just that, but it’s not a bad way to start explaining it to someone from a long-term ITIL background.

They (should) know that a Standard Change is:

“Standard Changes are pre-approved changes that are considered relatively low risk, are performed frequently, and follow a documented (and Change Management approved) process.
Think standard, as in, ‘done according to the approved, standard processes’.”
http://itsmtransition.com/2014/03/name-difference-standard-normal-changes-itil/

If we break that down into its three main parts we get:

  • “relatively low risk” – DevOps reduces the risks via automation, test-driven development (of application AND infrastructure code), rapid detection of issues via enhanced monitoring, and robust rollback.
  • “are performed frequently” – this is a key tenet of DevOps… if it’s painful, do it more often, learn to do it better and stop trying to hide the pain away!
  • “follow a documented (and Change Management approved) process” – DevOps is about building robust digital supply chains. This is your highly automated, end-to-end process for software development, testing, deployment and support, and as part of that we are building in the necessary checks and balances required for compliance with change management processes. It’s just that instead of having reams of documentation, the automated workflow *IS* the documented process (a minimal sketch of the idea follows this list).
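
As a toy illustration of “the automated workflow IS the documented process” (a hypothetical sketch, not a real pipeline definition), here’s a deployment gate that both enforces the agreed checks and writes an audit record as it runs, so the evidence of compliance is produced by the workflow itself:

```python
# Illustrative deployment gate: enforce the approved checks and record an
# audit trail as a side effect - the workflow documents itself.
import datetime
import json

def deploy_gate(release, checks, audit_log="audit.log"):
    """Run each named check, record the outcome, and approve only if all pass."""
    results = {name: check(release) for name, check in checks.items()}
    entry = {
        "release": release,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "results": results,
        "approved": all(results.values()),
    }
    with open(audit_log, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry["approved"]

# Hypothetical checks - in reality these would call out to your CI, test and
# change-management tooling.
checks = {
    "unit_tests_passed":   lambda release: True,
    "regression_suite_ok": lambda release: True,
    "change_ticket_open":  lambda release: True,
}

if deploy_gate("release-1.2.3", checks):
    print("Standard Change criteria met - deploying release-1.2.3")
```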

This starts to point us in the direction of how the role of the Change & Release Manager will change over time.

In the same way that the role of IT Operations will become more like Process Engineering in a factory, the role of Change & Release Management will start to focus more on the digital supply chain and software production line rather than trying to inspect every single package that flows along that pipeline.

In this way they become more like quality control engineers in that their job becomes to work with the process engineers to design the pipeline in such a way that what comes out the other end meets the quality, risk and compliance criteria to be considered a “Standard Change”.

They also need to move towards a more sampling-based, audit-style approach for ensuring that people aren’t skipping steps and gaming the system.

For example, if the approved process says that all changes must pass automated regression testing, they might periodically pick one release and review that the regression suite was actually testing meaningful cases and hadn’t been replaced with a “return true;” because no-one could be bothered to keep it up to date.

Similarly, if the process mandates “separation of duties”, they could check to see that the person who initiates a change (via the pipeline) isn’t also in the role required to approve the Jira ticket that Jenkins checks to see whether that release can be deployed to Live.
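
A sampling-based audit of that rule can be surprisingly simple. The sketch below is hypothetical (the field names are invented, and in practice you’d pull this data from your pipeline and ticketing APIs), but it shows the shape of the check: flag any release where the initiator and the approver are the same person.

```python
# Hypothetical separation-of-duties audit: flag releases where the person who
# triggered the deployment also approved the change ticket.
def audit_separation_of_duties(releases):
    return [r["id"] for r in releases if r["initiated_by"] == r["approved_by"]]

# Made-up sample data standing in for pipeline and ticket metadata.
sample = [
    {"id": "rel-101", "initiated_by": "alice", "approved_by": "bob"},
    {"id": "rel-102", "initiated_by": "carol", "approved_by": "carol"},  # violation
]
print(audit_separation_of_duties(sample))  # ['rel-102']
```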

The overall goal here is to move towards a “High Trust Culture” but, keeping in mind the mantra of “trust, but verify”, they need to work with DevOps to ensure the appropriate checks and balances are in place.

That, and never, ever invite me to a CAB meeting, ever again.🙂

-TheOpsMgr

#DevOps and automating the repayment of technical debt

One of the common things we find in Enterprise organisations looking to move to a DevOps model is high levels of technical debt.

To be more accurate, they are caught in the “vicious cycle of technical debt” to the point that trying to ship anything in a rapid, agile way is nearly impossible. It’s the Greek Debt Crisis level of technical debt.

In many cases, layers and layers of process and management have been added to the software development lifecycle to try to fix the symptoms of the problem (low-quality releases, bugs in production, unstable environments, poor performance etc) but they are just Band-Aids over the underlying issues.

[Image: the vicious cycle of technical debt – http://technical-debt.org/cycle.png]

So how do we get out of this death spiral before the organisation can’t compete anymore and a disruptive innovator comes along and eats their lunch?

Well, one trend we see is people using DevOps automation to create the breathing room they need to break out of this vicious cycle and build a virtuous circle instead.

If we can automate some of the routine, error-prone and time-intensive tasks, we can leverage that productivity gain and invest the time we’ve freed up into repaying technical debt.

As we pay back technical debt we get a higher-quality, more stable and more agile application; we can then re-invest in more automation to start the next cycle of improvement.

We know that this works because we’ve seen it and done it with clients BUT it comes with a caveat (or two).

Firstly, you need to get commitment from the Product Owner (and the wider business) that the productivity gains will be spent on paying down technical debt and not on an endless cycle of feature bloat (which was probably one of the causes of the problem in the first place).

I’m not going to wave a magic wand and say that this is easy – it’s not – but if you can find the right analogy (“Technical debt is like walking through quicksand” or “Technical debt is like trying to run a marathon with an 80lb backpack on” or “Technical debt is what start-ups don’t have…”) then you might have a chance.

Secondly, DevOps is more than just A for Automation – it’s Culture-Automation-Lean-Metrics-Sharing (CALMS) – so ideally you’d be doing more than just “automating some stuff” and would start looking at becoming “product-centric”, coaching your Product Owner to understand operational requirements and moving away from the Finance-driven, project-centric model.

I’d love to hear some stories from our blog readers about how you’ve paid down technical debt and what your “tech debt repayment model” looks like, so please leave a comment, or a link to your own blog posts, below.

-TheOpsMgr

What does the future of IT Operations look like (in a #DevOps world)?

What does the future of IT Operations look like? As more businesses rely on virtualisation, containers, cloud, Infrastructure-as-a-Service and microservices, is there still a role for IT Operations? How do these teams need to change to continue to deliver value when supporting Agile Operations techniques?

Is there still a role for IT Operations? Absolutely 100% (we believe that so much we started a company to offer application-centric cloud operations!).

We blogged about this back in 2013, when we said that “DevOps Does Not Equal ‘Developers Managing Production’”. We said then:

“Operations is a discipline, with its own patterns & practices, methodologies, models, tools, technology etc. Just because modern cloud hosting makes it easier to deploy servers without having to know one end of a SCSI cable from another doesn’t mean you “know” how to do Operations (just like my knowledge of SQL is enough to find out the information I need to monitor and manage the environment but a long way from what’s required to develop a complex, high-performing website).” – @TheOpsMgr

This still holds true today.

That said, the role of Operations is changing – Ops has to become more “Application-Centric” and understand the applications that are running on the platforms they provide. It’s not enough anymore to take a narrow view that says “my servers are OK, it’s not my fault the application doesn’t work”. Well, it might not be your “fault” but you share the responsibility for making sure the application is available for your customers. Stop passing the buck!

Operations people almost certainly need to learn to code, since we are heading towards a code-driven, API-enabled world. If you can’t code (or at least have solid scripting skills) you risk being left behind.

More importantly, the Operations Engineer/Developer of the future will be filling a role more akin to that of a “process engineer” in a physical factory or logistical supply chain.

A process engineer designs a process and production line that transforms raw materials into a finished product.

The Operations Engineer/Developer of the future will be building Digital Supply Chains and Digital Production Lines.

These Digital Supply Chains will transform raw materials (source code) via continuous integration, test automation, packaging, release automation, infrastructure-as-code etc into applications running in cloud-hosted environments.

The rate of changes flowing along the Digital Supply Chain will far exceed “old school” Change and Release methodologies – you can’t have a weekly CAB (Change Advisory Board) meeting if you’re doing multiple deployments per day (or every 11.6 seconds à la Amazon).

So, just as a physical production line includes statistical sampling, automated testing etc., so will the Digital Supply Chain of the future. We already do this with TDD/BDD and automated testing with tools like Selenium, but it will become the Operations Engineer/Developer’s job to ensure that the digital production line delivers release packages of sufficient quality to ensure the stability of the application (and the organisation’s revenue that depends on it!).
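
What might such a quality gate look like? Here’s a deliberately simplified, hypothetical sketch – the metric names and thresholds are invented for illustration – where a release package is only promoted along the production line if its quality metrics are within the agreed limits:

```python
# Illustrative quality gate on the digital production line: only promote a
# release package if its (hypothetical) quality metrics meet the thresholds.
THRESHOLDS = {
    "min_test_pass_rate": 0.99,  # at least 99% of automated tests must pass
    "max_error_rate": 0.01,      # no more than 1% errors in smoke tests
}

def quality_gate(metrics):
    return (metrics["test_pass_rate"] >= THRESHOLDS["min_test_pass_rate"]
            and metrics["error_rate"] <= THRESHOLDS["max_error_rate"])

release_metrics = {"test_pass_rate": 0.995, "error_rate": 0.004}  # sample numbers
print("Promote release" if quality_gate(release_metrics) else "Reject release")
```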

Modern supply chains are complex and have many interdependencies on 3rd parties, particularly if you’re operating a “Just-In-Time” (JIT) model. Modern software applications have the ultimate in JIT dependencies because of their integrations with 3rd-party SaaS APIs like payment gateways, recommendation engines, authentication gateways, cloud providers etc. Modern Operations Engineers will need to design a digital supply chain that can cope with failures in these interdependencies, or at least ensure that they select the right 3rd-party partners who can offer the levels of performance and availability their applications need.
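
One common way of coping with a flaky third-party dependency is to retry with exponential backoff (a circuit breaker is the other usual pattern). The sketch below is generic and hypothetical – it isn’t code for any particular payment or authentication gateway:

```python
# Generic retry-with-exponential-backoff wrapper for a call to a 3rd-party API.
import random
import time

def call_with_retries(fn, attempts=4, base_delay=0.5):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries - surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Usage (hypothetical): wrap an unreliable payment-gateway call.
# result = call_with_retries(lambda: payment_gateway.charge(order_id))
```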

In summary, will the Operations Engineer/Developer of the future be “just managing (virtual) servers”? No, almost certainly not.

What they will be doing is designing and building complex digital supply chains with complex interdependencies both internally and externally to the organisation, digital supply chains designed to meet the needs of applications that are designed to meet the needs of their customers, safely, securely and cost-effectively.

The Q&A above is part of material prepared as our contribution to a CA ebook on “Agile Operations”. We wrote our thoughts on 6 questions, of which 4 will be used in the ebook, scheduled to come out in August 2015. You can read the earlier Q&A here – https://blog.devopsguys.com/2015/06/23/what-is-important-for-an-it-ops-to-team-more-effectively-with-preproduction-teams-devops/

What is important for IT Ops to team more effectively with preproduction teams? #DevOps

“DevOps can present IT Operations teams with new ‘customers’ in development and test. What traditional or new tools and technologies are most likely to be important for IT Ops to team more effectively with preproduction teams? What information does IT Ops need to pass right to left and which tools are most likely to aid in that?”

The short answer is “A whiteboard marker, a pad of Post-It notes and a couple of pizzas”🙂

That answer is a bit tongue-in-cheek, but there is a serious side to it; whilst new tools can be an important part of DevOps (particularly for Automation), you can get started on changing your Culture and improving your Sharing with very simple tools, i.e. the aforementioned whiteboard marker, Post-It notes and pizza.

Start to break down the silos by getting key people in a room with some blank walls and whiteboards and start sharing information, mapping out your value stream and trying to find out, collaboratively, where the bottlenecks in your existing processes are. Once you’ve identified your key constraints then fire up Google and start searching for the tools to solve your problems (or visit a site like DevOpsBookmarks).

DevOpsGuys, like most organisations, have our own “Opinionated Stack” – we like the Atlassian Toolset for managing our Agile workflow, TeamCity or Jenkins as our CI tool of choice, Ansible as our configuration management tool for Linux, Powershell DSC for Windows, AppDynamics as our APM tool, Redgate for our Database Lifecycle Management (DLM) and so on. We partner with many of these companies now because we’ve “dogfooded” the products internally and with our customers and they’ve worked well for our use cases. We always “try before we buy” and we “try before we partner” too because, as they say, “your mileage may vary” (YMMV).

This comes back to fostering a culture of experimentation – give something a try and see what works for you. We started off using Atlassian HipChat as our chat tool and we really liked it. Then we tried Slack and we liked that one more, so we switched. YMMV.

One additional point worth mentioning – the premise of the question is flawed!

They aren’t customers they’re colleagues.

There isn’t a silo of “Us” (IT Ops=supplier) versus “Them” (Everyone Else=customer).

We are supposed to be breaking down these silos to create cross-functional, multi-disciplinary, product-based teams. Development, Test, IT Security, Networks shouldn’t be silos any more – they are people in our team, sitting over the desk from us, attending our daily standups, eating our pizza🙂

The Q&A above is part of material prepared as our contribution to a CA ebook on “Agile Operations”. We wrote our thoughts on 6 questions, of which 4 will be used in the ebook, scheduled to come out in August 2015. We’ll post the remaining 2 questions with our answers onto the blog over the next 2 weeks.