It has recently come to my attention that many people don’t use virtual machines for development, instead polluting their system with various dependencies and making life harder for themselves. Unfortunately, even the people who do use VMs for development often perform provisioning and updates by hand, thus ending up with slightly different configurations for development, testing and production, which often leads to uncaught bugs on production.
In this post, I will not only attempt to detail some best practices I’ve learned, but I will also list provisioning and deployment configurations that will make this a one-command process.
The easiest way to do repeatable deployments is to create scripts which will handle everything for you. By the end of this post, you will be able to get from a new installation to a complete Django/postgres/gunicorn/redis stack running (and accepting users) with one command.
Starting off, the most important thing to remember is that you should never make any changes to any of the machines by hand. Any change you perform must be automatic and repeatable. If you make a change on the development VM, you’d better be damn sure it’s documented somewhere and will run on every other environment, including other people’s VMs, staging, and production.
The way to do this is with a deployment framework and scripts. You can use Puppet/Chef/SaltStack/CFEngine/whatever you like (but don't use Fabric! Fabric is great for some things, but it's not a deployment tool). My tool of choice is Ansible; it's simple to learn and extend, and it does the job quickly and without hassle.
Step 0: Directory structure.
Here’s the basic directory/file structure I use for my projects:
.
├── deployment/
│   ├── ansible
│   ├── deploy.yml
│   ├── files/
│   │   ├── conf/
│   │   │   └── nginx.conf
│   │   ├── init/
│   │   │   └── gunicorn.conf
│   │   └── ssl/
│   │       ├── myproject.csr
│   │       ├── myproject.key.encrypted
│   │       └── myproject.pem
│   ├── handlers.yml
│   ├── hosts
│   ├── key
│   ├── known_hosts
│   ├── provision.yml
│   ├── vars.yml
│   └── webapp_settings/
│       ├── local_settings.local.py
│       ├── local_settings.production.py
│       └── local_settings.staging.py
├── djangoproj/
└── requirements.txt
You will notice the deployment directory, containing the various Ansible scripts, configuration, SSL certificates, init scripts, etc. These will all be explained later.
Step 1: The hosts.
Before doing anything else, we need to declare the hosts file, which specifies what will run where. The hosts file in the deployment directory has the following contents:
[remote:children]
production
staging

[servers:children]
production
staging
local

[production]
www.myproject.com nickname=production vm=0 branch=master

[staging]
staging.myproject.com nickname=staging vm=0 branch=develop

[local]
local.myproject.com nickname=local vm=1 branch=develop
As you can see, I define three hosts: local is the VM I use for development (that's what vm=1 means), and it tracks the develop git branch. staging is the staging server; it's not a VM (vm=0), and it also tracks the develop branch. production is the live server; it isn't a VM either, and it tracks the master branch.
The various sections are just so I can refer to the various machines more easily. I can deploy to remote, which includes production and staging; to local, which is just the local VM; or to servers, which is everything.
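For example, these group names can be passed straight to ansible-playbook. The playbook and inventory filenames below come from the directory listing above, but the exact invocations are my sketch:

```
# Provision every machine in the "servers" group:
ansible-playbook -i deployment/hosts deployment/provision.yml --limit servers

# Deploy only to staging, using --limit to pick a group from the hosts file:
ansible-playbook -i deployment/hosts deployment/deploy.yml --limit staging
```

The --limit flag restricts a run to any host or group defined in the inventory, so the same playbook serves every environment.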
Step 2: Setting up a VM.
For my VMs, I use VirtualBox. It's free and it's great; if you aren't using it already, what are you waiting for? Go get it and start setting up VMs.
To develop, we will need to set up our VM to share a code directory with the host computer, so we can easily see the changes we make without needing to commit or do anything else. I add a shared directory (read/write, because that’s sometimes necessary for migrations) to the VM, and I give it the project’s name, as we will need it later on in the scripts.
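With VirtualBox this can be done from the command line. The VM name, host path, and guest mount point below are placeholders for your own setup:

```
# On the host: attach a read/write shared folder named after the project.
VBoxManage sharedfolder add "myproject-vm" --name myproject \
    --hostpath /path/to/myproject --automount

# In the guest: mount it (requires the VirtualBox Guest Additions).
sudo mkdir -p /srv/myproject
sudo mount -t vboxsf myproject /srv/myproject
```

Naming the share after the project matters because the scripts later look the directory up by that name.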
Step 3: The variables.
This is the vars.yml file. It's pretty straightforward: it includes some names and the system packages, Python packages, and init files you would like to install.
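A minimal sketch of what such a file might look like (the variable names and package lists here are illustrative, not prescriptive):

```yaml
# deployment/vars.yml -- an illustrative sketch; choose your own keys.
project_name: myproject
project_root: /srv/myproject

# System packages installed via the OS package manager.
system_packages:
  - postgresql
  - redis-server
  - nginx

# Python packages installed via pip (usually pinned in requirements.txt).
python_packages:
  - Django
  - gunicorn
  - psycopg2

# Init scripts copied from deployment/files/init/.
init_files:
  - gunicorn
```

Keeping these in one place means the provisioning and deployment playbooks can both include it, so the same package list applies everywhere.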