Deployments (and Rollbacks) with AWS and Python
Creating a deploy process for AWS in Python over the last couple of weeks presented some interesting frustrations. What follows are my observations on some of the fundamental differences between local builds and cloud deploys, and how those differences shape the design of each process.
The build process that I developed looked roughly like this:
- Take an image of the live, production instance
- Launch a new instance from that image
- Deploy new code
- Dump and restore the live database to new database
- Run migrations on the new database
- Run smoketests
- Update DNS record with IP address of new instance
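The first two steps can be sketched in Python. This is a minimal illustration, not the actual deploy code; it assumes a boto3-style EC2 client is passed in, and the function and image names are hypothetical:

```python
def launch_copy_of_live(ec2, live_instance_id):
    """Image the live instance, then boot a new instance from that image.

    `ec2` is assumed to be a boto3-style EC2 client; all names here
    are illustrative.
    """
    # Step 1: take an image of the live, production instance
    image = ec2.create_image(InstanceId=live_instance_id, Name='deploy-snapshot')
    image_id = image['ImageId']
    # Step 2: launch a new instance from that image
    reservation = ec2.run_instances(ImageId=image_id, MinCount=1, MaxCount=1)
    return image_id, reservation['Instances'][0]['InstanceId']
```

Passing the client in, rather than constructing it inside, keeps the step easy to exercise against a stub before pointing it at real infrastructure.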
This build process obviously has flaws, but it was a good beginning for the budding startup that I was contracted to do the work for. The rest of this post is written with the above process in mind, particularly the part about launching (and shutting down) new EC2 instances.
Naming Resources and Tracking State
One of the special characteristics of cloud deployments, as opposed to local builds, is that in a local build procedure you generally control the naming of artifacts (such as files and folders). On a cloud platform such as AWS, you often do not. For example, AWS reserves the responsibility of assigning unique IDs to EC2 instances.
This difference has consequences for how cleanups and rollbacks are handled during a failed build process.
If, during a local build procedure, a stage fails (maybe `libcurl` could not be found), the process for cleaning up is usually simple and invariable: `rm -rf build` or something similar. It is not usually necessary to track the names of files and folders produced during the build because, as a rule, they are predetermined by the build process. Traditional build tools excel when constrained by these kinds of rules.
Contrast this situation with an aborted cloud deploy. If, after launching a dozen new AWS instances, a subsequent stage fails, we are now in a position where we have to roll back (i.e. terminate) those dozen instances. Since we did not generate the unique identifiers for those instances, they must be tracked somewhere.
How to keep track of the unique identifiers?
AWS allows tagging of EC2 resources. Given this, a deploy process could launch new EC2 instances, always tagging them with a well-known tag and value. Rolling back after a failed deploy would then be a matter of finding the instance(s) with that tag and value and shutting it/them down.
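A sketch of the tag-based approach, again assuming a boto3-style client; the tag key and value (`deploy-status=pending`) are hypothetical:

```python
DEPLOY_TAG_KEY = 'deploy-status'   # hypothetical tag key
DEPLOY_TAG_VALUE = 'pending'       # hypothetical tag value

def deploy_tag_filter():
    """Build a describe_instances filter (boto3 shape) for the deploy tag."""
    return [{'Name': 'tag:' + DEPLOY_TAG_KEY, 'Values': [DEPLOY_TAG_VALUE]}]

def instances_to_roll_back(ec2):
    """Return the ID of every instance still carrying the deploy tag."""
    response = ec2.describe_instances(Filters=deploy_tag_filter())
    return [instance['InstanceId']
            for reservation in response['Reservations']
            for instance in reservation['Instances']]
```

Rollback then reduces to terminating whatever this function returns, with no locally tracked state.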
This allows the deploy process to forget about tracking resource IDs, and behave more like a local build process, by predetermining an attribute of the new resources.
A couple of concerns:
- AWS does not offer a way to enforce uniqueness of a key across resources
- Launching-and-tagging is a two-step over-the-network process
In other words, I am concerned about the ease and reliability of maintaining the correct tags and values on instances, and ensuring that the special tag is a "reserved" keyword across the organization.
Local filesystem tracking
This would look something like the following:
- Launch a bunch of standby instances -> write their IDs to a local file
- Run smoketests, uh-oh they failed -> shut down all instances listed in that file
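A minimal sketch of local filesystem tracking; the log path and function names are hypothetical:

```python
import json
import os

DEPLOY_LOG = 'deploy-instances.json'  # hypothetical local log path

def record_instances(instance_ids, path=DEPLOY_LOG):
    """Write the IDs of freshly launched instances to a local log file."""
    with open(path, 'w') as f:
        json.dump(instance_ids, f)

def instances_to_shut_down(path=DEPLOY_LOG):
    """Read back the IDs that a failed deploy must terminate."""
    if not os.path.exists(path):
        return []  # no aborted deploy to clean up
    with open(path) as f:
        return json.load(f)
```

The obvious weakness is that the log lives on one machine: a deploy started from a different workstation cannot see it.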
Track in central storage
Like filesystem tracking, except instead of storing deploy history locally, write to a remote storage system.
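A sketch of the remote variant, assuming an S3-style object store; the bucket layout and helper name are hypothetical:

```python
import json

def record_deploy(s3, bucket, deploy_id, instance_ids):
    """Persist a deploy's instance IDs to an S3-style store, so that any
    machine can find (and clean up) a previous deploy's resources.

    `s3` is assumed to be a boto3-style S3 client.
    """
    key = 'deploys/%s.json' % deploy_id
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(instance_ids))
    return key
```

Unlike the local file, this record survives the workstation that ran the deploy, at the cost of one more network dependency.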
Dirty Filesystems vs. Dirty Clouds
Another characteristic difference between local builds and deploys is that local builds produce limited artifacts, whereas cloud deploys can, depending on the design of the deploy process, create new ones endlessly.
Compiling a package locally produces artifacts in, typically, the package directory, `/tmp`, and whatever paths installation artifacts occupy. Re-running `./configure; make install` produces new artifacts, in a sense, but overwrites the previous ones, thus capping the total artifact count.
Because of the way AWS EC2 instances are created, on the other hand, a deploy process could easily proliferate instances unless care is taken to ensure that resources created during previous, aborted deploys are cleaned up.
(Why produce new instances instead of just deploying to the same set of instances over and over? In short, fresh instances start every deploy from a known, reproducible state; this is the idea behind immutable infrastructure.)
On a related note, we care more about resources produced during an aborted deploy than we do about a dirty filesystem produced by a failed local build, because EC2 instances, generally speaking, cost more money! Because of this, the time window for performing cleanup in the cloud may be shorter than for a local build.
Avoiding resource proliferation
A simple way to avoid resource proliferation is to forbid a deploy process from proceeding when a previous, aborted deploy has not been fully cleaned up. This is very different from the way local build processes work, where `./configure; make install` can be re-run with impunity, at no cost.
Preventing a deploy from running when there is "dirt" could look something like this:
- Check for the presence of a deploy log
- Abort the deploy if one is present
- Create a deploy log
- Write instance IDs to the deploy log as they are launched
- On abort, shut down the instances listed in the deploy log
- When the deploy finishes, archive/rotate the deploy log
- Delete the deploy log
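The gating steps above could be sketched like this; the exception name and log path are hypothetical:

```python
import os

class DirtyCloud(Exception):
    """A previous deploy left resources (and its log) behind."""

def start_deploy(log_path):
    """Refuse to start while an earlier deploy's log is still present."""
    if os.path.exists(log_path):
        raise DirtyCloud('clean up the previous deploy first: %s' % log_path)
    open(log_path, 'w').close()  # create an empty deploy log

def finish_deploy(log_path):
    """On success, remove the log so the next deploy may proceed."""
    os.remove(log_path)
```

The log doubles as a lock: its mere presence means the cloud may be dirty, and the next deploy is forced to deal with that before launching anything new.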
Cleaning up resources in a timely manner
In some cases, it may be OK to leave the cloud "dirty" for a while after a failed deploy. Resources can be cleaned up by the following deploy attempt. However, if the cost of leaving the cloud dirty for a long time is a concern, cleaning up during the deploy itself is an option, and looks something like this:
```python
@task
def deploy(live_instance_id):
    image_id = create_image(live_instance_id)
    try:
        new_instance = run_instance(image_id)
        ip_address = new_instance.ip_address
        try:
            scp(code, ip_address)
            ssh(ip_address, 'httpd restart')
            tests_pass = smoketest(ip_address)
            if not tests_pass:
                raise AbortDeploy()
            update_dns(ip_address)
        except AbortDeploy:
            # Roll back: the instance was launched but never went live
            terminate_instance(new_instance.id)
            raise
    except AbortDeploy:
        # Roll back: the image is no longer needed
        deregister_image(image_id)
        raise
```
Some cloud deploys can benefit from rules that differ from those of traditional, local build processes:
- Deploy processes keep track of newly created resource IDs, since those IDs are assigned by the cloud provider and are needed during cleanup/rollback
- Deploy processes do not run if there is a previous, uncleaned failed deploy
- Failed deploys clean up after themselves