If you’re developing software of any kind, you inevitably need to test it somewhere before you deploy to production. “I’ll just spin up a VM,” you say. This might work initially in small shops – but even if you don’t have onerous red tape, you’re eventually going to run out of resources (host machines, RAM or disk space). So you’ll need someone who can shut down or decommission unused test machines. And what about performance testing? What if you wanted to test your application with 100 machines for just one day? You’d have to buy a lot of tin that would sit idle most of the time, just so that you have some capacity when you need it. So what other options do you have?
The Cloud. I hear you sighing from here: “The Cloud again. All I hear nowadays is Cloud this and Cloud that.” That’s true, but nonetheless the Cloud does offer some solutions. Most Cloud platforms have pay-as-you-go models, allowing you to spin up environments, use them for short periods, and throw them away again, paying only for the time you used. However, you’re still going to need someone to manage the test lab – ensuring that some well-meaning tester doesn’t create 1000 machines instead of 100. Or that she turns them off before going home so you don’t get charged for idling machines!
The Old Way
Before DevOps became the cool kid on the block, there were typically three teams involved in testing:
- Ops – responsible for provisioning test labs and (hopefully) matching them to production as closely as possible
- Testers – responsible for doing the actual testing in the test labs
- Developers – writing code and deploying to the test labs
Let’s consider a typical cycle. A developer – say Tony – makes a change. He commits the change, and hopefully he has at least some sort of automated build in place to produce a deployable package. He then contacts Ops to request a machine for testing. Sherry in Ops checks current capacity, and realizes that there’s not enough space (or RAM or some other constraint). So she requisitions new hardware. A couple of (days|weeks|months) later, the test environment is ready. Sherry contacts Frank on the test team – he then has to badger Tony into deploying the build. A couple of (hours|days|weeks) later, the build is finally ready for testing. Frank’s team finds a bug – Tony’s team picks it up, fixes it and the cycle starts again. Eventually the testing is done – and the new hardware idles.
Out with the Old and In with the New
In the Brave New World of DevOps, the lines between these teams are blurring and even disappearing. “Configuration as Code” and Test Automation are changing the landscape. Couple these practices with automated resource deployment, and you have a recipe for much faster cycles. As an added bonus, if your lab is in the Cloud, you can quickly get rid of unused resources, saving costs as well.
Instead of the Ops team manually installing and configuring “golden templates”, we can now express environments as “configuration as code” scripts. A good example is PowerShell DSC: a declarative script that can check an environment (and “make it so” if required) allows developers and Ops to work together on what environments should look like – with added bonuses like drift detection. Chef is another example. On Azure, you can create new resources from a declarative script using Azure Resource Manager (ARM) templates. If you then add test automation to the automated resource creation and deployment, you move beyond Continuous Integration to Continuous Deployment and Continuous Testing. More automation means much faster (and less error-prone) cycles – which is good for business. It also gets you to self-service: with so much automation, anyone can create a new lab.
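To make the idea concrete, here is a minimal sketch of what a PowerShell DSC configuration might look like. The configuration, node and folder names are illustrative assumptions, not part of any real lab – the point is the declarative “describe the desired state” style:

```powershell
# Minimal PowerShell DSC sketch – configuration, node and path names
# are illustrative assumptions.
Configuration WebServerConfig {
    # Target node name is an assumption; replace with your test machine.
    Node "TestWeb01" {
        # Declare desired state: the IIS role must be present.
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
        # Ensure the deployment folder exists.
        File DeployFolder {
            Ensure          = "Present"
            Type            = "Directory"
            DestinationPath = "C:\inetpub\myapp"
        }
    }
}

# Compiling the configuration produces a MOF file that the Local
# Configuration Manager applies – and can re-apply to correct drift.
WebServerConfig -OutputPath "C:\DscOutput"
```

Because the script states *what* the machine should look like rather than *how* to build it, the same file doubles as documentation and as a drift-detection baseline.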
Aside: Containers
Before we consider some of the benefits and pitfalls of self-service Dev/Test labs, we should take a brief look at container technology, since it is relevant to the DevOps discussion. Containers are the new Cloud – everyone is talking about them, especially Docker. Containers are the next evolution in virtualization technology, allowing greater density of resource utilization in less space. They are isolated “machines” – complete with their own storage, networking, registries and so on – but they’re much smaller than VMs and also portable (meaning I can create a container here, run it over there, and it will be exactly the same). Add declarative scripting for creating containers, and you have a great new option in the DevOps world.
The same principles that apply to VMs apply somewhat to containers – they’re just another way you can host your applications. The deployment mechanism and management may differ, but conceptually they’re still resources that require configuration and management.
Azure Dev/Test Labs
One of the dangers of the self-service model, though, is managing the spend. Invariably teams will create more machines than they need, and then forget to decommission unused machines – and all the cost savings go out the window. That’s where Azure Dev/Test Labs comes in. Think of Dev/Test Labs as a way to create a sandbox. The team can play as much as they want to in the sandbox – but never outside. And once they’ve used all the sand, they have to wait till next month to get more. Currently, Dev/Test Labs is still in preview – but it brings together all the benefits of Cloud labs plus some automated management.
When you create a Lab, you specify the monthly budget that the Lab can utilize – say $500. That way, you’re controlling cost while giving the team self-service within the Lab. Labs also have the following features (among others):
- Total VM quotas
- User VM quotas
- Allowable VM sizes – you can restrict the size options allowed when users create new VMs
- Templates – you can restrict what templates are used when creating new VMs. You’d also add your “golden templates” here so that new VMs have the configuration that you want.
- Artifacts – these are “script-based packages” that you may want to be installed in the VMs (for example, Office or Debugger Tools). When you create templates you specify which artifacts should be included – making self-service a snap
- RBAC (role-based access control) – you can specify who can use or create VMs and who can change security
- Auto-shutdown policies (idle-time or scheduled) – machines automatically shut themselves down
Team members “claim” VMs that are created, or create new VMs using the defined templates. Quotas ensure that users think about their resource usage – getting rid of unused resources before creating new ones. Dashboards and alerts keep you informed of your usage.
Dev/Test Labs are also going to be integrated into Visual Studio Online’s Automated Build Engine (and soon on-premises TFS Build Engine too) via Build Tasks that you can use during Builds or Releases.
Roadmap
What does this all mean for you and your team? You may want to re-evaluate how you’re currently managing your Dev/Test infrastructure – and start moving to the Cloud (ahem, Azure). Start investing in learning automation scripting such as Azure Resource Manager (ARM) templates and PowerShell DSC. Investing in these technologies will position you well for the next wave of innovation in the Dev/Test space. You may also want to start investigating what pay-as-you-go Dev/Test Labs will end up costing you – I’m guessing that in most cases it’s significantly lower than the cost of on-premises (under-utilized) tin.
Oh, and while you wait for Azure Dev/Test Labs, make sure you turn off your VMs before you leave!
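If you’d rather not rely on memory, a scheduled script can do the switching-off for you. Here’s a hedged sketch using the Azure PowerShell cmdlets of the day (the resource group name is an illustrative assumption):

```powershell
# Sketch: deallocate every VM in a Dev/Test resource group at end of day.
# The resource group name is illustrative; assumes the AzureRM PowerShell
# module is installed and you've already signed in (Login-AzureRmAccount).
$resourceGroup = "DevTestLab-RG"

Get-AzureRmVM -ResourceGroupName $resourceGroup | ForEach-Object {
    # -Force skips the confirmation prompt. Deallocated VMs stop accruing
    # compute charges (storage is still billed).
    Stop-AzureRmVM -ResourceGroupName $resourceGroup -Name $_.Name -Force
}
```

Run it from a nightly scheduled job (or an Azure Automation runbook) and idling machines stop eating the budget even when someone forgets.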
http://blog.nwcadence.com/evolution-modern-devtest-labs-devops-world/