The first two-day Agile conference to hit Cardiff


The DevOpsGuys team are excited about Cardiff’s very first two-day Agile conference:

Agile Cymru, in association with Adaptagility, is a jam-packed two-day conference with presentations from leading experts in the agile space. The conference will take place in the heart of Cardiff, at the home of the Welsh Arts, the Millennium Centre, on July 7 and 8, 2015.

As the first of its kind to grace the capital of Wales, Agile Cymru is all set to inspire and educate those in the Welsh community looking for a more collaborative, flexible and responsive way of working. The event format will consist of interactive and storytelling workshops, giving attendees first-hand advice and techniques from industry experts and practitioners.

Agile Cymru presents a unique opportunity to help teams gain actionable insights to significantly improve the way they operate, as well as help develop and nurture the agile community in Wales.

Conference host James Scrimshire says: “I’m really excited about AgileCymru; it’s the first large Agile conference held in Wales, and we’ve got an incredible lineup and a stunning venue. Agile is now becoming more prevalent in Wales and I can’t wait to help bring exciting ideas and talent here to help adopters learn and improve.”

Early bird tickets are now on sale, with a limited time offer saving attendees a massive £100 off the standard price.

If you’re interested in getting involved and securing your seat at Agile Cymru this year, register your interest here.

For further information on what’s involved and the speakers lined up, please visit http://www.agilecymru.uk

info@agilecymru.uk

DSC and Octopus Deploy integration

When we first started looking at DSC for production use, we were using the same process and modules that came from Steven Murawski and Stack Exchange.  There are several blog posts talking about this tooling, but here’s a high level overview of what our pipeline looked like originally:

  • TeamCity is our Continuous Integration server; it monitors our DSC source control repository and kicks off the build process.
  • Stack Exchange / PowerShell.org DSC tooling modules were used to take the data from source control and run tests, then produce configuration documents and resource module zip files.
  • Octopus Deploy was used to transfer these artifacts out to one or more DSC pull servers.
  • DSC pull servers delivered the bits down to clients.

This worked, but what was missing was the ability to define environments (dev/test/prod, etc.) and to version the configurations and modules in such a way that they could be promoted through those environments in our pipeline.

To address this, we’re trying out something new.  We’re cutting out the middle-man of a DSC pull server, and having Octopus Deploy deliver our DSC configurations and resources directly to the endpoints.  As far as DSC’s Local Configuration Manager is concerned, we’ve switched to a Push model.  Really, though, that depends on how the Octopus Deploy Tentacle is running (which can be set up for either push or pull, just like the LCM.)
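For reference, here’s a minimal sketch of the kind of meta-configuration that puts the LCM into Push mode; the node name and settings are illustrative rather than taken from our actual configuration:

```powershell
# Minimal sketch (illustrative only): a DSC configuration that sets the
# Local Configuration Manager to Push mode, so configuration documents are
# delivered by Octopus Deploy rather than pulled from a DSC pull server.
configuration PushModeLcm
{
    Node 'webserver01'
    {
        LocalConfigurationManager
        {
            RefreshMode        = 'Push'
            ConfigurationMode  = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded = $false
        }
    }
}

# Compile the meta-MOF and apply it to the target node.
PushModeLcm -OutputPath '.\LcmMof'
Set-DscLocalConfigurationManager -Path '.\LcmMof' -Verbose
```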

With this setup,  we can promote versions of DSC configurations through our environments using the same mechanism that’s already in place for our application and website code.  We can also leverage Octopus Deploy’s variable-scoping functionality to control our DSC configurations; these Octopus Deploy variables are essentially taking the place of the entire DscConfiguration module in the new pipeline:

[Screenshot: Octopus Deploy project variables scoped per environment]
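To give a feel for how those variables replace the old DscConfiguration data module, here’s a rough sketch of an Octopus Deploy PowerShell script step that feeds scoped Octopus variables straight into a DSC configuration; the variable names (“WebSite.Port”, “WebSite.AppPool”) and the configuration itself are made up for illustration:

```powershell
# Sketch only: an Octopus Deploy script step that compiles and applies a DSC
# configuration using scoped Octopus variables in place of a separate
# DscConfiguration data module.

configuration WebServer
{
    param([string]$NodeName, [int]$Port, [string]$AppPoolName)

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $NodeName
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        # ...further resources would consume $Port and $AppPoolName here...
    }
}

# $OctopusParameters is populated by the Tentacle; the values are scoped per
# environment (dev/test/prod), so the same step promotes through the pipeline.
$params = @{
    NodeName    = $env:COMPUTERNAME
    Port        = $OctopusParameters['WebSite.Port']
    AppPoolName = $OctopusParameters['WebSite.AppPool']
    OutputPath  = '.\Mof'
}
WebServer @params

Start-DscConfiguration -Path '.\Mof' -Wait -Verbose -Force
```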

Now, Octopus Deploy’s dashboard shows us exactly which configuration version is deployed to each environment:

[Screenshot: Octopus Deploy dashboard showing the configuration version deployed to each environment]

We’re still experimenting with this approach, but we’re getting pretty positive results so far. A nice perk for our customers is that they have fewer new tools to learn when we’re first setting things up; they don’t have to grok both Octopus Deploy’s variables and environments concepts and a separate solution doing the same thing for DSC.

Pipeline 2015

 


A couple of weeks ago, we sent a crack squad of DevOps ninjas (plus marketing unicorn) down to London to attend Pipeline Conf, a one-day not-for-profit ‘unconference’ event focused on Continuous Delivery. Two of our team (James Betteley and Matthew Macdonald-Wallace) were speaking at the conference, and it gave the rest of us a chance to see what other people were doing in this space and chat with them about the problems they were facing. It also gave us the chance to give people our snazzy new free DevOpsGuys t-shirts!

Keep CALMS

Linda Rising delivered an excellent keynote about “Fearless Change” and how to encourage colleagues to adopt new ideas and technologies (and dare I mention it, new “cultures”) in your organisation, whilst dispelling a number of myths about how we communicate with each other, including that “facts and reason” matter far less than how enthusiastic you are about a project. Linda also pointed out that sometimes the best thing to do is stay quiet and let someone who disagrees with you argue themselves around to your point of view, although she was quick to point out that this doesn’t always work!

First up after Linda’s keynote was James Betteley, who delivered an experience report on doing Continuous Delivery with Legacy code. It was a well-attended session with lots of great questions and feedback from the audience. The main points to take away from this talk were:

– Tackle legacy thinking before tackling legacy code

– Don’t worry if your testing pyramid looks more like a testing rectangle

– Automate all of the things

The video for this session is now available online, so you can watch James do his thing as if you were actually there (just remember not to ask questions at the end, coz he won’t hear you).

Whilst James delivered his presentation on Continuous Delivery with Legacy Code, a couple of us crept away to watch @pr0bablyfine and @benjiweber discuss “Testing in Production”. It turns out they had very similar ideas to our own about Monitoring Driven Development, and they showed that by running your functional tests against your production environment you can get results you can trust when it comes to application performance. This led to a dangerously productive post-talk conversation in the corridor about how closely our views were aligned, and some discussion about promoting Monitoring Driven Development in future, so watch this space!
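As a flavour of what that looks like in practice, here’s a small, purely illustrative Pester-style check run against a production URL; the endpoint and thresholds are made up for the example:

```powershell
# Illustrative sketch only: a Pester test that exercises a (hypothetical)
# production endpoint, so the same functional check can double as a monitor.
Describe 'Production health' {

    It 'serves the home page within an acceptable time' {
        $timer    = [System.Diagnostics.Stopwatch]::StartNew()
        $response = Invoke-WebRequest -Uri 'https://www.example.com/' -UseBasicParsing
        $timer.Stop()

        $response.StatusCode             | Should Be 200
        $timer.Elapsed.TotalMilliseconds | Should BeLessThan 2000
    }
}
```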

Matthew (@proffalken) then delivered his talk on “Pipelines for Systems Administrators”. This was also a well-attended session that generated some interesting questions. His talk focused on the advantages of having a deployment pipeline for testing changes to server configuration, using your existing monitoring toolset.

The event was a great chance to meet some like-minded professionals and to share ideas, tips, tools and knowledge across the industry. Can’t wait for Pipeline 2016!

5 Definitions of #DevOps

In my talk at QCon last week I was lucky enough to be the first speaker on the DevOps track, so I took the opportunity to poll my fellow speakers for THEIR definition of DevOps, and the results were fascinating.

What I ended up with was 5 different definitions of DevOps:

[Slides: five definitions of DevOps, from Dave, Matthew, Anna, Amy and Steve]

So what are the key takeaway messages from these DevOps definitions?

Firstly – there is no single, definitive version of what DevOps “is” as you can see from the responses above!

DevOps clearly means different things to different people, and we (as the DevOps community) have to ask ourselves: is this a good or a bad thing? Good in that it’s promoting a diversity of ideas that contribute to the DevOps movement, but “bad” in that it causes confusion amongst people trying to learn more about DevOps.

Secondly – there is a clear message around communication, collaboration and culture being a key part of the DevOps message, which is excellent. #NoMoreSilos! :-)

Thirdly – sadly the name “DevOps” might be seen as being exclusionary in itself and not embracing critical parts of the SDLC community – specifically in Amy’s case the Test community. Amy expands on this in her blog post “DevOps with Testers”.

A number of people would say that DevOps should be called “DevTestOps” or “BizDevOps”, but at QCon Dave Farley was arguing that the correct term should be “Continuous Delivery”, as DevOps is just a subset of CD. We discussed this earlier in our post on Continuous Obsolescence.

  • So what do DevOps, DevTestOps or BizDevOps mean to you?
  • Has the term DevOps had its day, and should we just extend the definition of Continuous Delivery to embrace the full application lifecycle?
  • Do we need to rename our company? :-)

Your thoughts in the comments, please!

-TheOpsMgr

#DevOps = #ContinuousDelivery + #ContinuousObsolescence?

During my talk at QCon last week we started an interesting debate about whether DevOps is a subset of Continuous Delivery or vice versa.

We were lucky enough to have Dave Farley in the audience, and it was clear from his definition of DevOps versus Continuous Delivery that he felt CD was the superset and DevOps a component of that set… but he admitted that if he were writing the Continuous Delivery book again in 2015 he would change the definition of “finished” to embrace the full product lifecycle from inception to retirement.

This aligns nicely with his co-author Jez Humble’s thoughts on being “Product-Centric” that we discussed in an earlier blog post.

“Finished” should mean “no longer delivering value to the client and hence retired from Production”.

[Slides: Dave Farley’s definition of DevOps, and Dave on Continuous Delivery]

That was when we coined a new term, live on stage (as captured by Chris Massey’s tweet below): “Continuous Obsolescence”.

[Tweet from Chris Massey]

So what is #ContinuousObsolescence?

Continuous Obsolescence is the process of continuously grooming your product/feature portfolio and asking yourself “is this product/feature still delivering value to the customer?” because if it isn’t then it’s “muda” (waste), it’s costing you money and should be retired.


Every product/feature that’s not delivering value costs you by:

  • Wasted resources – every build, every release, you’re compiling and releasing code that doesn’t need to be shipped, and consuming server resources once deployed.
  • Overhead of Regression testing – it still needs to be tested every release
  • Unnecessary complexity – every product/feature you don’t need just adds complexity to your environment which complicates both the software development process and the production support process. Complex systems are harder to change, harder to debug and harder to support. If it’s not needed, simplify things and get rid of it!

If you’re doing DevOps and you’re a believer in the CALMS model then you should be practising “Continuous Obsolescence”… in fact I’d argue that DevOps could even be defined as “#DevOps = #ContinuousDelivery + #ContinuousObsolescence” – what do you think?

Send us your comments or join the debate on Twitter @DevOpsguys

-TheOpsMgr

Why aren’t more women in tech jobs?


Last month we asked some questions about gender quotas in the workplace and whether or not they are the right way to encourage more women into the IT profession.

Our poll indicated that 86% of our blog readers think that gender quotas are NOT the most effective way to close the IT gender gap, as one comment on our post stated:

“To me, quotas seem like the well-intentioned, but ill-conceived path to diversity by avoiding cultural change and avoiding the problem of biases and institutions that create monoculture.” – Jyee

But what is the answer? Or perhaps we need to reconsider the question; instead of exploring how to encourage more women to embark on an IT-based career, maybe we first need to find out why they don’t already consider it an option.

  • In a traditionally male-dominated industry, fully equipped with its own negative stereotypes, it’s difficult for young women embarking on their careers to find role models in IT: ‘you can’t be what you can’t see’. Women often report hostility or a lack of opportunity in the workplace, finding themselves overlooked in favour of male counterparts:

The top two reasons why women leave are the hostile macho cultures — the hard hat culture of engineering, the geek culture of technology or the lab culture of science … and extreme work pressures – Laura Sherbin, a director at the Center for Work-Life Policy

  • It isn’t obvious what sort of tech roles are available until you start in the industry. Tech knowledge is a major part of almost all professional roles in contemporary society – it’s not just coding. The sector is developing rapidly, but a huge amount of this is not visible externally – girls in school may never consider that they could be in tech jobs creating robots, designing games or running start-ups, because this side of the industry is rarely touched upon as a career choice in schools. If young women in secondary education were aware of the potential of developing tech skills, they might be more inclined to engage with tech-related subjects at school.
  • Women in tech have not had the best time of it. In the public eye, women can see things like GamerGate and the ruthless persecution that can follow women who become actively involved in tech or web-based activities. Although this is far from the norm, it’s what is predominantly shown in the media.

Clearly a culture shift is required, rather than short-term measures. Actively encouraging school-aged girls to take up an interest in technology is a way of opening up routes into the industry early on. Eliminate the notion of ‘gendered subjects’ and highlight the opportunities that a tech role brings both to individuals and to the future: it’s an ever-developing, ever-changing, exciting industry to be a part of, and that’s something that will inspire boys and girls alike.

Technology pervades almost every element of contemporary culture, and at least a basic knowledge of technology is becoming standard for the majority of professional roles. We can use this to destroy the trope that tech jobs are done by geeky men in a basement with no social skills and a penchant for online gaming.

But what are industries currently doing to tackle the problem of gender imbalance?

Companies like Google have introduced training schemes which aim to fight cultural biases and make the workplace a more positive and neutral environment. Participants take part in word-association games and are often surprised at how often women are habitually associated with less technical roles. Hackathon are running a series of women-only events in India, Microsoft are running ‘DigiGirlz’ days all over the world, and the Welsh Government’s WAVE scheme aims to help women improve their career prospects through training, networking opportunities and career development.

But what can smaller businesses do to help?

Companies need to research the biases that prevent women from getting ahead and then devise ‘interrupts’. Instead of single training sessions companies need to make systematic changes. – Joan C Williams.

This, to us, sounds like a DevOps approach. Like optimising your cloud-computing potential, gender equality in the tech industry is not something you can ‘do’ once; it’s an ongoing series of cultural and attitude changes that will deliver the best results for companies and employees. And remember: change takes time.

We’d love to hear the strategies companies in the UK are undertaking to address this gender imbalance in the workplace. If you have any thoughts, suggestions or news you’d like to share, please get in touch.

 

How Do Databases Fit Into DevOps?

 


Towards the end of 2014, we started getting to know the folks over at Redgate, and learning about what they’re doing around Database Lifecycle Management. It’s prompted some interesting conversations.

Redgate have been spending a lot of their time thinking about continuous delivery for databases. One thing we’ve discussed recently is how it’s not uncommon to see tensions mount when databases are added to the mix of a Continuous Delivery process. Particularly relational databases, and especially transactional relational databases. As Redgate thinks about the opinions among the DevOps community on how to manage and deploy databases, it’s prompted a few interesting questions – for example:

 

Why do the tensions around databases seem less pronounced in the DevOps community?

 

It’s About Perspective

Now, I’m not 100% convinced that there are fewer tensions – I suspect they may just be different tensions. That said, there are plenty of good explanations – both cultural and technical – for why the CD & DevOps communities perceive databases differently. Perhaps some of it comes down to what each community is respectively focused on.

 

 

DBDevOps?

Naturally, Redgate are coming to this from their established position in the RDBMS space (predominantly SQL Server, but with a twist of Oracle and MySQL as well), and so the second question that comes to light is:

 

How do DBAs fit into a DevOps world?

If we look at how the roles and responsibilities break down, it looks like DBAs are simply Ops specialists. Does that mean that database developers and DBAs should be blended in a single DBDevOps team? Having two flavours of DevOps team running side-by-side on the same overall application feels like a poor compromise, but are databases special enough to warrant it?

Alternatively, does this mean that the DBA dissolves into just another Ops facet of the overall team? Given the amount of specialist training and knowledge we so often see invested in DBAs, it’s hard to imagine how a team could be high-performing without some degree of specialisation when it comes to managing data. This could be the straw that breaks the Full-Stack Engineer’s back, or it could be that you can actually get a very long way before you have to dip into the skills of a true specialist. I’m sure that, like so many questions, the answer is “It depends”.

Either way, the fact that we’re even talking about Database Lifecycle Management (and uttering sentences which include both “database” and “DevOps”) suggests that software engineering as a whole is maturing at the pace of Moore’s Law, and starting to embrace the ideas that were obvious to W. Deming. Not only are we tackling these questions thoughtfully, but we’re also being more aware of the practices that have emerged in software engineering. As a result of that maturity, we’re also starting to see tools emerging which are actually fit to be used in a DevOps environment. In the case of databases, part of that is about treating them as stateful artifacts, and I particularly like Redgate’s SQL Lighthouse in this regard – a free little tool currently in beta for monitoring when database states drift.
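To illustrate the “stateful artifact” point, a drift check can be as simple as snapshotting the schema and comparing it with a stored baseline. This is only a sketch of the idea, not how SQL Lighthouse works; it assumes the SqlServer/SQLPS module’s Invoke-Sqlcmd is available, and the server, database and baseline path are made up:

```powershell
# Rough sketch: hash the current schema objects of a database and compare the
# hash against a stored baseline to spot drift between deployments.
$current = Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'MyApp' -Query @"
SELECT  o.name AS ObjectName, m.definition
FROM    sys.sql_modules m
JOIN    sys.objects o ON o.object_id = m.object_id
ORDER BY o.name
"@

# Serialise the result set and take a SHA256 hash of it.
$hash = [System.BitConverter]::ToString(
    [System.Security.Cryptography.SHA256]::Create().ComputeHash(
        [System.Text.Encoding]::UTF8.GetBytes(($current | Out-String))))

$baselineFile = '.\MyApp.schema.hash'
if (Test-Path $baselineFile) {
    if ($hash -ne (Get-Content $baselineFile)) {
        Write-Warning 'Schema drift detected against the stored baseline.'
    }
} else {
    $hash | Set-Content $baselineFile   # first run: record the baseline
}
```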

So Many Questions

The sharp-eyed among you will have noticed that I’ve suggested several possible ideas, but avoided stating which one is the Right One™. That’s mostly because a) I don’t know yet, and b) it really does depend. And c) it’s hard to put together a solid case in a relatively short blog post!

We’ve seen a few different approaches to the problem, but what kinds of practices are you forming around your databases (and how are your DBAs being involved)?

And given that databases are tricky and stateful, what kinds of tools and processes are you using to monitor and deploy them? Open source libraries sanctified by Etsy, Netflix et al, something home-grown (and quite possibly a bit of a snowflake), or something off-the-shelf?
