Applications use session state. Whether session state should be used at all is a hot topic of debate, but plenty of applications use it – fact.
Recently, I was involved in troubleshooting a performance issue for a customer. As it turned out, their application was hammering session state into a Microsoft SQL Server database. The web application was taking some fairly meaty load – hundreds of thousands of page requests per hour spread across 15 or so web servers. When I dug into the detail with the development team, it became clear that they’d used SQL Server as a session state provider because – and I quote – “We had nowhere else to put it”.
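For context, the SQL Server approach is usually wired up in web.config. A minimal sketch looks something like this (the server name and connection details below are illustrative, not the customer’s actual configuration):

```xml
<configuration>
  <system.web>
    <!-- Session state persisted to SQL Server; the ASPState database must be
         provisioned first (e.g. via aspnet_regsql.exe) -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=SQLSERVER01;Integrated Security=True"
                  timeout="20" />
  </system.web>
</configuration>
```

Every session read and write then becomes a round-trip to that database – which is exactly why it was feeling the load.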
In Microsoft ASP.NET applications this is something I’ve seen quite often, but this time it got me thinking: storing session state in SQL Server seems like using a sledgehammer to crack a nut, so I decided to take a look at some alternatives.
I wanted to look beyond the typical Session State Server solution, and NoSQL databases seemed the obvious route. Some are well suited to session state (some are not), and it seemed logical that they’d make a great alternative. But which would be the best fit?
To start off, I set some ground rules. I wanted the following:
1. The NoSQL database should be supported by a Session State provider that was already available – I didn’t want to build one myself.
2. Performance is paramount – I wanted to pick the best performing provider.
After some quick searches on NuGet and Google I discovered Session State Providers for Couchbase, Redis, Memcached and RavenDB. That was easy; four options found pretty quickly. However, when I started trying to work out which provider performed best, I just couldn’t get a clear answer.
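For reference, each of these providers plugs into ASP.NET the same way: a custom provider registered in web.config. A sketch for the Redis case is below – the exact type, assembly and connection attributes will vary depending on which NuGet package you install, so treat the names here as assumptions:

```xml
<system.web>
  <!-- Swap the type and attributes to match the provider package in use -->
  <sessionState mode="Custom" customProvider="RedisSessionStateProvider">
    <providers>
      <add name="RedisSessionStateProvider"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="localhost"
           port="6379" />
    </providers>
  </sessionState>
</system.web>
```

Because they all sit behind the same provider abstraction, swapping one for another during testing is just a config change – handy for a load-test comparison.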
So which is the best performing session state provider for ASP.NET?
To answer this, I needed some hard evidence, but there’s not much out there apart from conjecture, so I decided to load test each provider and find out.
In this series of blog posts I thought it would be useful to share more than just the results of my tests, so I’ll walk you through setting up each of the Session State Providers and their prerequisites. However, since some of you won’t care about that, in this first post we’ll cut straight to the chase and look at the results.
We are big fans of OctopusDeploy and have been using it since its beta release in early 2012. If you’re not familiar with OctopusDeploy, here’s the background:
1. It’s an automated release management tool for Microsoft .NET, comprising a centralised server and agents, known as Tentacles, which are installed on each target server.
2. It uses the NuGet package format, an open source package manager for the Microsoft .NET Framework.
3. Tentacle agents listen on TCP port 10933 by default, and requests are encrypted using a pair of X.509 certificates.
The following diagram shows the release management components:
Release Management On-premise or Off-premise
One of the highlights of using OctopusDeploy is its ability to deploy software to servers located in any environment. The highlight reel goes a little something like this:
Whether your servers live in the cloud, in a data centre, or under your desk, Octopus can securely upload, install and configure your applications.
Just install a tiny service agent called Tentacle on your servers, and then use our easy installation wizard to set up a secure connection between the Octopus and Tentacle based on public/private key encryption technology. Your server is ready for deployments!
We’ve installed OctopusDeploy many times and have used some of the following installation scenarios to deploy to on-premise and off-premise data centres.
Scenario 1: On-Premise Installation with On-Premise Deploy
One of the more common ways of installing Octopus is with the NuGet Server and Octopus Server located on your own network, alongside Continuous Integration servers or source control systems.
In this installation, OctopusDeploy will pull NuGet packages from a NuGet Server (such as MyGet, JetBrains TeamCity, a NuGet Feed Server or just a local file path) and push the deployment out to target servers located on a local network.
Since all servers are located on the same network, this configuration is straightforward and requires little more than Windows Firewall configuration: TCP traffic over port 10933 (presuming the default port) must be allowed between the OctopusDeploy server and the target servers.
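As a sketch, that firewall rule can be added on each target server from an elevated command prompt (the rule name is arbitrary, and you should adjust the port if you’ve changed the Tentacle default):

```shell
netsh advfirewall firewall add rule name="Octopus Tentacle" dir=in action=allow protocol=TCP localport=10933
```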
Scenario 2: On-Premise Configuration with Off-Premise Deploy
This installation presumes that your target servers are not within your own network, e.g. they are hosted in an off-premise data centre or a cloud environment. Configuration is the same as Scenario 1, and Windows Firewall will still need to be configured.
However, in this scenario it is likely that additional layers of security will be in place, e.g. a hardware firewall. Network teams will need to ensure that TCP traffic on the OctopusDeploy port (10933 by default) is allowed into each server.
However, this deployment scenario has some complications. Each target server needs to be publicly addressable. For load-balanced server farms this adds a level of complexity, as firewalls and content switches/load balancers will need to be configured to make each target server individually and publicly available to an external caller.
More commonly, servers located in the off-premise data centre might not be publicly addressable, and with good reason: web servers in a DMZ can receive traffic originating from public networks, whereas application and database servers (and firewalls) are configured only to receive traffic generated within the off-premise data centre.
If you do decide to expose servers publicly to allow Octopus to deploy from an on-premise to an off-premise location, you can take some solace from the fact that Paul Stovell from OctopusDeploy has ensured deployments are secured, with requests encrypted using a pair of X.509 certificates.
Scenario 3: Off-Premise Installation with Off-Premise Deploy
Finally, you might want to consider a scenario where the Octopus Server is hosted within the off-premise data centre that software will be deployed into.
The NuGet server can remain on-premise in this scenario. Since Octopus will generate outbound HTTP (TCP:80) traffic to a known destination, in our experience most security teams are comfortable with this approach.
Advantages over Scenario 2
1. Octopus and target servers are now located within the same network, or are on trusted networks.
2. Servers do not have to be made publicly addressable; non-public servers can still be deployed to.
3. The load-balancing challenges of Scenario 2 are overcome; Octopus can now individually address all servers.
4. Deployment time can be reduced, as NuGet packages are transferred once from the NuGet source and then deployed across local or trusted networks.
Disadvantages over Scenario 2
1. Additional server infrastructure is required in the data centre to host the OctopusDeploy Server.
We hope that’s given you a flavour of the installation scenarios available with Octopus Deploy! More information on OctopusDeploy can be found at http://octopusdeploy.com
DevOpsGuys use Octopus Deploy to deliver automated deployment solutions – ensuring frequent, low-risk software releases into multiple environments. Our engineers have deployed thousands of software releases using Octopus Deploy and we can help you do the same. Contact us today – we’d be more than happy to help.