Tag Archives: delivery pipeline


Upgrading to Octopus 2.0

On 20th December 2013, Paul Stovell and the Octopus Team released Octopus Deploy 2.0. We’ve had a chance to play with the product for some time, and it’s a great next version.

Here are five great reasons to upgrade:

  1. Configurable Dashboards
  2. Sensitive variables
  3. Polling Tentacles (Pull or Push)
  4. IIS website and application pool configuration
  5. Rolling deployments

So let’s take a look at how the upgrade process works.

Step 1 – Backup your existing database

Things to check before you upgrade:

  1. Your administrator username and password (especially if you are not using Active Directory).
  2. That you don’t have any projects, environments, or machines with duplicated names (duplicates are no longer allowed in Octopus 2.0, and the migration wizard will report an error if it finds any).
  3. Your IIS configuration. Are you using ARR or rewrite rules of any kind?
  4. Your SMTP server settings (these are not kept during the upgrade).

The first step is to ensure that you have a recent database backup that you can restore in case anything goes wrong.

1 - Database Backup

Confirm the location of your backup file. You’ll need this file at a later stage in the upgrade process (and if anything goes wrong).

2 - Database Backup Location

Step 2 – Install Octopus 2.0

Download the latest installer MSI from the Octopus Website at http://octopusdeploy.com/downloads

Next, follow the installer instructions and the Octopus Manager process. This process is really straightforward, and if you’ve installed Octopus before you’ll be familiar with most of it. There’s a great video on how to install Octopus at http://docs.octopusdeploy.com/display/OD/Installing+Octopus

Step 3 – Import Previous Octopus Database

Once you’ve installed Octopus 2.0, you’ll most likely want to import your old v1.6 database with all your existing configuration. To do this, select the “Import from Octopus 1.6” option, choose the database backup from step 1, and follow the instructions.

3 - Import Octopus Database

Step 4 – Post Upgrade Steps (server)

When you have completed the installation it’s essential to take a copy of the master key for backups. Instructions on how to back up the key can be found at http://docs.octopusdeploy.com/display/OD/Security+and+encryption

Once you have the server installed there are a couple of items you’ll need to configure again.

  1. SMTP Server Settings are not kept, so these will need to be re-entered.

Step 5 – Upgrading Tentacles

If you are upgrading from version 1.6, note that the Octopus 2.0 server can no longer communicate with Tentacle 1.6. So in addition to upgrading Octopus, you’ll also need to upgrade any Tentacles manually. There are two ways to achieve this:

1. Download and install the new Octopus Deploy Tentacle MSI on each target server.

2. Use a PowerShell script to download the latest Tentacle MSI, install it, import the X.509 certificate used for Tentacle 1.6, and configure it in listening mode. The PowerShell script is as follows:

function Upgrade-Tentacle
{
  Write-Output "Beginning Tentacle installation"
  Write-Output "Downloading Octopus Tentacle MSI..."
  $downloader = New-Object System.Net.WebClient
  $downloader.DownloadFile("http://download.octopusdeploy.com/octopus/Octopus.Tentacle.2.0.5.933.msi", "Tentacle.msi")

  Write-Output "Installing MSI"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i Tentacle.msi /quiet" -Wait -Passthru).ExitCode
  Write-Output "Tentacle MSI installer returned exit code $msiExitCode"
  if ($msiExitCode -ne 0) {
    throw "Installation aborted"
  }

  Write-Output "Configuring and registering Tentacle"

  cd "${env:ProgramFiles(x86)}\Octopus Tentacle 2.0\Agent"

  # Create and configure the 2.0 instance, re-using the X.509 certificate
  # from the 1.6 Tentacle (read from the registry)
  & .\tentacle.exe create-instance --instance "Tentacle" --config "${env:SystemDrive}\Octopus\Tentacle\Tentacle.config" --console
  & .\tentacle.exe configure --instance "Tentacle" --home "${env:SystemDrive}\Octopus" --console
  & .\tentacle.exe configure --instance "Tentacle" --app "${env:SystemDrive}\Octopus\Applications" --console
  & .\tentacle.exe configure --instance "Tentacle" --port "10933" --console
  & .\tentacle.exe import-certificate --instance "Tentacle" --from-registry --console

  Write-Output "Stopping the 1.0 Tentacle"
  Stop-Service "Tentacle"
  Write-Output "Starting the 2.0 Tentacle"
  & .\tentacle.exe service --instance "Tentacle" --install --start --console

  Write-Output "Tentacle commands complete"
}

Upgrade-Tentacle

Step 6 – Upgrade TeamCity Integration

The JetBrains TeamCity plugin has also been released and requires an upgrade. To do this:

  1. Download the plugin from the Octopus Website http://octopusdeploy.com/downloads.
  2. Shutdown the TeamCity server.
  3. Copy the zip archive (Octopus.TeamCity.zip) with the plugin into the <TeamCity Data Directory>/plugins directory.
  4. Start the TeamCity server: the plugin files will be unpacked and processed automatically. The plugin will be available in the Plugins List in the Administration area.

Please note: the API keys for users will have changed and will need to be re-entered into the Octopus Deploy runner configuration within the build step.

Summary

We think that overall the Octopus Team have done a great job making the upgrade process easy. There are still a few little issues to iron out, but overall the process is robust and simple. It’s a shame that automatic upgrade of the Tentacles couldn’t have been supported, as this may cause pain for customers with a large server install base, but PowerShell automation makes for a great workaround.

As we learn more about Octopus 2.0, we’ll continue to let you know what we find. Good luck with upgrading — it’s definitely worth it!

DevOpsGuys use Octopus Deploy to deliver automated deployment solutions – ensuring frequent, low-risk software releases into multiple environments. Our engineers have deployed thousands of software releases using Octopus Deploy and we can help you do the same. Contact us today – we’d be more than happy to help.


DevOps and the Product Owner

In a previous post we talked a lot about the “Product-centric” approach to DevOps but what does this mean for the role of the Agile “Product Owner”?

So what is the traditional role of the Product Owner? Agile author Mike Cohn from MountainGoat Software defines it thus:

“The Scrum product owner is typically a project’s key stakeholder. Part of the product owner responsibilities is to have a vision of what he or she wishes to build, and convey that vision to the scrum team. This is key to successfully starting any agile software development project. The agile product owner does this in part through the product backlog, which is a prioritized features list for the product.

The product owner is commonly a lead user of the system or someone from marketing, product management or anyone with a solid understanding of users, the market place, the competition and of future trends for the domain or type of system being developed” – Mike Cohn

The definition above is very “project-centric” – the Product Owner’s role appears to be tied to the existence and duration of the project and their focus is on the delivery of “features”.

DevOps, conversely, asks us (in the “First Way of DevOps”) to use “Systems Thinking” and focus on the bigger picture (not just “feature-itis”) and the “Product-centric” approach says we need to focus on the entire lifecycle of the product, not just the delivery of a project/feature/phase.

Whilst decomposing the “big picture” into “features” is something we completely agree with, as features should be the “unit of work” for your Scrum teams or “Agile Software Development Factory”, it needs to be within the context of the Product Lifecycle (and the “feature roadmap”).

So the key shift here then is to start talking about the “Product Lifecycle Owner”, not just the Product Owner, and ensure that Systems Thinking is a critical skill for that role.

The second big shift with DevOps is that “Non-Functional Requirements” proposed by Operations as being critical to the manageability and stability of the product across its full lifecycle “from inception to retirement” must be seen as equally important as the functional requirements proposed by the traditional Product Owner role.

In fact, we’d like to ban the term “Non-Functional Requirements” (NFR’s) completely, as the name itself seems to carry an inherent “negativity” that we feel contributes to the lack of importance placed on NFR’s in many organisations.

We propose the term “Operational Requirements” (OR’s) as we feel that this conveys the correct “product lifecycle-centric” message about why these requirements are in the specification – “This is what we need to run and operate this product in Production across the product’s lifecycle in order to maximise the product’s likelihood of meeting the business objectives set for it”.


For the slightly more pessimistic or combative amongst you the “OR” in Operational Requirements can stand for “OR this doesn’t get deployed into Production…” .

The unresolved question is do we need an “Operational Product Owner” or does the role of the traditional, business-focussed Product Owner extend to encompass the operational requirements?

You could argue that the “Operational Product Owner” already partly exists as the “Service Delivery Manager” (SDM) within the ITIL framework but SDM’s rarely get involved in the software development lifecycle as they are focussed on the “delivery” part at the end of the SDLC. Their role could be extended to include driving Operational Requirements into the SDLC as part of the continual service improvement (CSI) process however.

That said, having two Product Owners might be problematic and confusing from the Agile development team’s perspective, so it would probably be preferable if the traditional business Product Owner was also responsible for the operational requirements as well as the functional requirements. This may require the Product Owner to have a significantly deeper understanding of technology and operations than previously – otherwise trying to understand why “loosely-coupled session state management” is important to “horizontal scalability” might result in some blank faces!

So in summary a “DevOps Product Owner” needs to:

  • Embrace “Systems Thinking” and focus on the “Product Lifecycle”, not just projects or features
  • Understand the “Operational Requirements” (and just say “No to NFR’s”!)
  • Ensure that the “OR’s” are seen as equally important as the “Functional Requirements” in the Product roadmap, and champion their implementation

In future posts we’ll examine the impact of DevOps on other key roles in the SDLC & Operations. We’d love to get your opinions in the comments section below!

-TheOpsMgr

image source: CC – CannedTuna via Flickr – http://www.flickr.com/photos/cannedtuna/7348317140/sizes/m/

Continuous Integration & Delivery – Christmas Reading Essentials

  • Continuous Integration: Improving Software Quality and Reducing Risk by Paul M. Duvall, Steve Matyas and Andrew Glover
  • Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley
  • Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory
  • Continuous Delivery and DevOps: A Quickstart Guide by Paul Swartout
  • Test Driven Development: By Example by Kent Beck
  • Jenkins Continuous Integration Cookbook by Alan Berg
  • How Google Tests Software by James A. Whittaker, Jason Arbon and Jeff Carollo
  • Jenkins: The Definitive Guide by John Ferguson Smart
  • Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce
  • Succeeding with Agile: Software Development Using Scrum by Mike Cohn
  • User Stories Applied: For Agile Software Development by Mike Cohn
  • Refactoring to Patterns by Joshua Kerievsky
  • Essential Scrum: A Practical Guide to the Most Popular Agile Process by Kenneth S. Rubin
  • Service Design Patterns: Fundamental Design Solutions for SOAP/WSDL and RESTful Web Services by Robert Daigneau
  • Web Operations: Keeping the Data On Time by John Allspaw and Jesse Robbins

DevOps does not equal “Developers managing Production”

I’ve had a few conversations lately, mainly with smaller start-ups or development houses, who tell me “yes, we work in a DevOps model”.

What they really mean is “We pretty much have no Operations capability at all, and we rely on the Developers to build, deploy and manage all of the environments from Development to Test to Production. Mostly by hand. Badly”.

As someone from a predominantly Operations background I find this quite frustrating!

Operations is a discipline, with its own patterns & practices, methodologies, models, tools, technology etc. Just because modern cloud hosting makes it easier to deploy servers without having to know one end of a SCSI cable from the other doesn’t mean you “know” how to do Operations (just as my knowledge of SQL is enough to find the information I need to monitor and manage the environment, but a long way from what’s required to develop a complex, high-performing website).

DevOps means Development and Operations working together collaboratively to bring the Operations requirements for stability, reliability and performance into the Development practices, whilst at the same time bringing Development into the management of the Production environment (e.g. by putting developers on-call, or by leveraging their development skills to help automate key processes).

It doesn’t mean a return to the laissez-faire “anything goes” model where developers have unfettered access to the Production environment 24x7x365 and can change things as and when they like.

Change control was invented for a reason, and whilst change control has become its own “cottage industry”, involving ever more bureaucratic layers of “management by form filling”, the basic discipline remains sound: think about what you want to change, automate it if you can, test it, understand what to do if it screws up (a roll-back plan), document the change, make sure everyone knows when, where and how you are making the change, and make sure the business owner approves.

When I took over the Operations of a high-volume UK website about 8 years ago, I spent the first three weekends fighting fires and troubleshooting Production issues.

My first action after that baptism of fire was to revoke Production access for all developers (over howls of protest). Availability and stability immediately went up. Deafening silence from Development – invitations to beers from the Business owners.

The next step was to hire a Build Manager to take over the build and deployment automation, and a Release Manager to coordinate with the Business what was going into each release and when. End result: 99.98% availability, with more releases being deployed within business hours without impacting users, and a lower TCO. The Business was much happier, and so was the Development Manager, as he was losing far fewer developer-hours to fire-fighting Production issues, and hence overall development velocity improved considerably. Win-win.

Was that a DevOps anti-pattern? Did I create more silos? Probably… but in a fire-fight a battlefield commander doesn’t sit everyone down for a sharing circle on how they are going to address the mutual challenge of killing the other guy before he kills you. Sometimes a command & control model is the right one for the challenge you face (like getting some suppressing fire on the target while you radio in for air support or artillery!).

That said, once we had developed a measure of stability we did move partway to a more DevOps pattern – we had developers on-call 24×7 as 3rd line support, we virtualised our environment(s) and gave Developers more control over them, and we increased our use of automation.

Organisationally we remained siloed, however – we were incentivised in different ways (Operations emphasising availability, Development emphasising feature delivery), we remained in an essentially waterfall delivery model, and Ops vs. Dev was a constant struggle for manpower and resources. All the usual problems that the DevOps movement is trying to address.

In summary, what I am trying to get at is please don’t devalue the “DevOps” concept by saying you do DevOps when you don’t.

Unless you currently do both Development AND Operations separately, and do them well, AND you’re now trying to synthesise a better, more agile, more cloud-oriented way of working that takes the best part of BOTH disciplines… you aren’t doing DevOps!

– TheOpsMgr

Maturing the Continuous Delivery Pipeline

The Maturity Model is a useful assessment tool for understanding your organization’s level of Continuous Delivery adoption. Many organizations today have achieved what is needed to move from Level -1 (Regressive) to Level 0 (Repeatable), which is a significant accomplishment, and as a reader of this blog post, you’re either about to start your journey of improvement or already underway.

Continuous Delivery Maturity Model

The Maturity Model

The advice for organizations wanting to adopt Continuous Delivery is ever more abundant, but for organizations that started adoption some time ago, the guidance on how to mature the process is still sparse. In this article, we explore one continuous improvement methodology that may help your organization mature its Continuous Delivery process.

Humble and Farley outline maturity Level 0 (Repeatable) as one having a process which is documented and partly automated.1 For this to be true, an organization must have first classified its software delivery maturity, identified areas for improvement, implemented some change and measured the effect. As Humble observes:

The deployment pipeline is the key pattern that enables continuous delivery.2

Humble also identifies that Deming’s Cycle (Plan-Do-Check-Act) is a good process to apply to initial adoption.1 The process, according to Deming, should then be repeated so that further improvements can be planned and implemented, with the advantage that data and experience from previous cycles are available. This process of continuous improvement is the first step to maturing the Continuous Delivery process.

Continuous Delivery and Kaizen

Continuous Delivery is a set of practices and principles aimed at building, testing and releasing software faster and more frequently.3 One key philosophy at the heart of Continuous Delivery is Kaizen. The Japanese word Kaizen means continuous improvement. By improving standardized activities and processes, Kaizen aims to eliminate waste. Within Kaizen are three major principles:

  1. 5S
  2. Standardization
  3. The elimination of waste; known as muda.

The 5S principle characterizes a continuous practice for creating and maintaining an organized and high-performance delivery environment. Since 5S focuses on waste reduction, it appears prudent to explore the 5S methodology as an approach to optimizing the Continuous Delivery pipeline.

What is 5S?

5S is a lean engineering principle focused on waste reduction and removal. The methodology is recursive and continuous, and its primary goal is to make problems visible so that they can be addressed.4

There are five primary 5S phases, derived from five Japanese words that translate to: sort (seiri), straighten (seiton), shine (seiso), standardize (seiketsu) and sustain (shitsuke).5

The 5S principle is more commonly applied to workplace organization or housekeeping. Since the delivery pipeline is a key component of the process, it is therefore a fundamental place where work takes place. The pipeline exhibits elements common to any physical work environment, including:

  1. Tools, process, paperwork.
  2. Storage areas for artifacts, such as files.
  3. A requirement that components must be, physically or virtually, located in the correct location, and that they are connected by process.
  4. Tools and process need maintenance and cleaning, e.g. retention policies.
  5. Systems and procedures can be standardized, and benefit from standardization.

When applied to the delivery pipeline, the five primary 5S phases can be defined as:

  • Phase 1 – sort (seiri)
    • Analysis of processes and tools used in the delivery pipeline. Tools and processes not adding value should be removed.
  • Phase 2 – straighten (seiton)
    • Organization of processes and tools to promote pipeline efficiency.
  • Phase 3 – shine (seiso)
    • Inspection and pro-active/preventative maintenance to ensure clean operation throughout the delivery pipeline.
  • Phase 4 – standardize (seiketsu)
    • Standardization of working practice and operating procedure to ensure consistent delivery throughout pipeline.
  • Phase 5 – sustain (shitsuke)
    • Commitment to maintain standards and to practice the first 4S. Once the 4S’s have been established, they become the new way to operate.

Implementing 5s in the Delivery Pipeline

Experience has shown that there are several key factors to consider in the adoption of 5S:

  1. Begin with small changes. Remember that 5S is a continuous and recursive process.
  2. Involvement is crucial.6 Encompass everyone involved in the deployment pipeline.
  3. Create champions – find leaders who love and live the philosophy and encourage them to promote it.

5S implementation involves working through each phase (or S) in a methodical way. This is commonly achieved through Kaizen events: focused activities designed to quickly identify and remove wasteful process from the value stream, and one of the prevalent approaches to continuous improvement.

Phase 1 – sort (seiri)

The first step of 5S is to sort. The purpose is to identify what is not needed in the delivery pipeline and remove it.

Additional process and practice can easily creep into the delivery mechanism, especially in the early phases of Continuous Delivery adoption, when you are experimenting.

Removal can either mean total elimination of a tool or process, because it is redundant, or flagging (known as “Red Tagging”) of a tool or process that needs to be evaluated before it is disposed of. The review of red-tagged tools and process should evaluate their importance and determine if they are truly required.

Suggested Steps

1. Evaluate all tools and process in the delivery pipeline and determine what is mandatory.

2. Ascertain what is not needed and assess its elimination status, e.g. immediate removal or Red Tag.

3. Remove unnecessary or wasteful tools and process.
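The sort review above can be sketched as a simple inventory pass. This is only an illustration, not a prescribed tool: the tool names, the 90-day threshold and the `adds_value` flag are all invented for the example, and a real review would be a team exercise rather than a script.

```python
# A minimal sketch of a "sort" (seiri) review over a hand-maintained
# inventory of pipeline tools. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class PipelineTool:
    name: str
    adds_value: bool         # agreed in the review: does it serve the pipeline?
    last_used_days_ago: int

def classify(tool: PipelineTool) -> str:
    """Decide a tool's fate: keep it, remove it, or red-tag it for review."""
    if tool.adds_value:
        return "keep"
    if tool.last_used_days_ago > 90:
        return "remove"      # clearly redundant: eliminate immediately
    return "red-tag"         # uncertain: flag for evaluation before disposal

inventory = [
    PipelineTool("build-server", adds_value=True, last_used_days_ago=0),
    PipelineTool("legacy-ftp-deploy", adds_value=False, last_used_days_ago=200),
    PipelineTool("manual-smoke-test-sheet", adds_value=False, last_used_days_ago=14),
]

for tool in inventory:
    print(f"{tool.name}: {classify(tool)}")
```

The value of even a toy model like this is that the classification criteria are explicit and agreed, rather than living in one engineer's head.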

Phase 2 – straighten (seiton)

The second step requires items in the delivery pipeline to be set in order. The pipeline must be arranged so that maximum efficiency is achieved for delivery.

To achieve seiton, the key components of the automated pipeline should be analysed, including the following:

  • Automated build process.
  • Automated unit, acceptance tests including code analysis.
  • Automated deployment process.

It is suggested that for the pipeline to be truly efficient, the most commonly used items should be the easiest and quickest to locate, such as:

  • Build status reports and metrics including KPI’s such as Mean-time-to-release (MTTR)
  • Automated test coverage, error and defect reports

In this step, actions can also be taken to visually label elements of the pipeline so that demarcation of responsibility is clear. This may mean categorizing tools, process and target locations (e.g. servers) into groups such as production and pre-production, or development and test.

Suggested Steps

1. Analyse the value stream including tools and process.

2. Ensure the necessary information is available quickly to those who need it.

3. Visually distinguish or categorize key pipeline components.
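One concrete way to make a KPI such as mean time-to-release “easy and quick to locate” is to compute it directly from release history and publish it on a dashboard. The sketch below shows the calculation; the release dates are invented for illustration, and in practice they would come from your deployment tool’s history.

```python
# A hedged sketch of surfacing a pipeline KPI: mean time-to-release,
# computed as the average gap between consecutive release dates.
from datetime import datetime

# Illustrative release history (in practice, pulled from the deployment tool)
releases = [
    datetime(2013, 11, 4),
    datetime(2013, 11, 18),
    datetime(2013, 12, 2),
    datetime(2013, 12, 9),
]

def mean_days_between_releases(dates):
    """Average number of days between consecutive releases."""
    dates = sorted(dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

print(f"Mean time-to-release: {mean_days_between_releases(releases):.1f} days")
```

Tracking this number over successive Kaizen cycles gives you the “data and experience from previous cycles” that Deming’s improvement loop depends on.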

Phase 3 – shine (seiso) 

Once the waste has been removed and only what is required has been kept and ordered, the next phase inspects the delivery pipeline to ensure that its operation is as clean as possible.

During pipeline execution large amounts of data can be produced. If this information is not dealt with efficiently it can become a waste product, detracting from the value of the delivery process.

Inspection of this data is vital. Data generated in the pipeline is likely to include instrumentation, metrics and logging information, as well as build artifacts and report data.

In the context of the delivery pipeline, cleaning can be defined as archiving, roll-up or removal of delivery pipeline data. Housekeeping routines should be evaluated to ensure that data retention rules are applied and that waste, in the form of build artifacts, metric and report data and instrumentation data, is correctly archived or removed as required.

Implementation of this phase requires assignment of cleaning responsibilities and scheduling of cleaning process.

Suggested Steps

1. Inspect data to understand what should be rolled-up, archived or deleted.

2. Ensure that responsibilities for data management are clearly assigned within the team.

3. Create scheduled, automated cleaning processes.
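As a sketch of what such a scheduled, automated cleaning process might look like, the snippet below applies a simple retention rule to a directory of build artifacts. The flat directory layout and 30-day window are assumptions for illustration; a real pipeline would archive rather than simply delete where retention rules require it.

```python
# A minimal sketch of an automated cleaning pass over build artifacts.
# Assumes artifacts live in a flat directory; the retention window is illustrative.
import os
import time

RETENTION_DAYS = 30

def clean_artifacts(directory: str, retention_days: int = RETENTION_DAYS) -> list:
    """Delete files older than the retention window; return the names removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        # Only touch plain files whose last-modified time is past the cutoff
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run from a scheduler (cron, a CI job, or a Windows scheduled task), a routine like this keeps the “assign responsibilities and schedule the cleaning” step honest: the policy is written down, versioned, and applied the same way every time.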

Phase 4 – standardize (seiketsu)

The primary focus of step four is to ensure that working practice, training and operating procedures remain consistent. Standardization allows for continuous improvement, and each of the first 3S’s should be regulated.

A critical element of seiketsu is to ensure that the process is reviewed on a regular basis or cycle. This ensures that best practices are maintained and areas where standards have slipped are quickly identified.

A settled and regular process or practice becomes one that is hard to give up. Therefore, if standardization is implemented correctly, it can support culture change.

Suggested Steps

1. Ensure that the team have clear responsibilities for the tasks that need to be completed.

2. Create ownership and pride in the process.

3. Ensure champions provide guidance and support changes to the 5S process.

Phase 5 – sustain (shitsuke)

The purpose of the final phase is to ensure that the 5S process is not a one-time event. To achieve this, the focus of the fifth phase centers on culture. To sustain 5S, organizations must engage in support, commitment and gainful communication, with the aim of keeping the team devoted to continuous improvement.

There are a number of elements which influence cultural support of 5S. Which elements take precedence varies within every organization and is heavily dependent on its existing culture. Key points are:

  1. Communication – aims and objectives need to be clear to all team members.
  2. Education – 5S methodology, concepts and practices need to be communicated.
  3. Recognition – team members need to feel that their efforts are recognized.

It is interesting to note that the literal translation of shitsuke is discipline. The final phase of 5s can also be one of the hardest to implement.

Suggested Steps

1. Establish an open channel for discussion and feedback, to facilitate continual learning.

2. Adopt the habit of auditing regularly and reward the best teams.

3. Ensure that all problems are addressed quickly and that root cause analysis is completed to discover the cause.

Summary

The adoption of Kaizen events to implement 5S within the Continuous Delivery pipeline can provide significant support in maturing the model, providing a continuous improvement process within a well-defined and standardized structure.

Lean manufacturing has used the 5S practice as a common method for lean implementation for years. Nearly 70% of lean manufacturing companies use this approach,7 but the use of 5S as an adoption method for lean in software development appears less common, as it has not been widely applied to development practices.

Lean software development principles promise to make companies more competitive by delivering software faster, increasing quality, and reducing cost, leading to greater customer satisfaction – all of which align closely with the key principles of Continuous Delivery.

– TheDevMgr

References

1. Humble, J. Farley D. (2010). Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Indiana: Addison Wesley. 419.

2.  Humble, J. Farley D. Continuous Delivery: Anatomy of the Deployment Pipeline.  [September 7, 2010.]  InformIT. http://www.informit.com/articles/article.aspx?p=1621865

3. McGarr, Mike. Continuous Delivery. SlideShare.net [June 17, 2011.] http://www.slideshare.net/jmcgarr/continuous-delivery-8341276

4. Liker, Jeffrey K. The Toyota Way. New York City: McGraw Hill, 2004.

5. MLG UK. 5S – The Housekeeping Approach Within Lean. MLG UK. [Cited: February 5th 2013] http://www.mlg.uk.com/html/5s.htm

6. Maready, Brian. Transparency = speed . Lean Lessons. [May 27, 2010] http://leanbuilt.blogspot.com/2010/05/transparency-speed.html.

7. Compdata Surveys. Lean Manufacturing and Safety Help Manufacturers Survive Tough Times.  Compdata Surveys. [December 13, 2010.] http://www.compdatasurveys.com/2010/12/13/lean-manufacturing-and-safety-help-manufacturers-survive-tough-times/