I’ve only been in the Python community for about a year, so I’m still new, but I really do love coding in Python. Anyway, I decided to open source a library (‘WeatherAlerts’) I started writing a few weeks ago. I’ll post about it at some point in the future…. At first it was just a code repo on GitHub, but after a bit of time working with it I wanted to package it and put it on PyPI so it could be installed via easy_install or pip. With the state of change Python is in right now, figuring out how to write the installer, have it support both Python 2 and Python 3, and get the package available on PyPI wasn’t trivial. But now that it’s behind me, I have a much better understanding of Python package management.
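For the curious, the installer ends up being a small setup.py. The sketch below is a minimal, hypothetical version; only the names ‘WeatherAlerts’ and ‘weatheralerts’ come from this post, and the version number and classifiers are placeholders, not the real project’s metadata:

```python
# setup.py -- a minimal sketch of a Python 2 + 3 friendly installer.
# Version and metadata here are placeholders, not WeatherAlerts' real setup.py.
from setuptools import setup

setup(
    name='WeatherAlerts',
    version='0.1.0',                 # hypothetical version
    py_modules=['weatheralerts'],
    classifiers=[
        # advertise which interpreters the package supports
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3.1',
        'Programming Language :: Python :: 3.2',
    ],
)
```

With a file like this in place, `python setup.py sdist` builds the source distribution that gets uploaded to PyPI.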
I had a simple test script that I was manually running every few commits and before pushing any changes out to the public repo. It worked, but as a Linux Systems Engineer, I knew this wasn’t going to stay that way long term. A lot of what I do in my day job is systems automation: deploying servers, managing changes to large clusters, continuous integration and continuous delivery at the system level… So while my simple little Python project was perfectly fine as a manual build and deploy process, I couldn’t stand it.
So, I searched out a pythonic way to automate these tasks.
I quickly found ‘Tox‘, a tool for automating the creation of virtual environments (each running a specific Python version) and then executing the builds, installs, and acceptance tests. I can’t stress enough how much I like this tool.
[tox]
envlist = py26,py27,py31,py32
[testenv]
deps = nose
commands = nosetests weatheralerts
Take the above five lines of configuration; that’s the extent of my tox config file. When tox runs using this config file, it builds four virtual environments, one each for Python 2.6, 2.7, 3.1, and 3.2. It then installs my package into each of them, along with ‘nose’, the package I wrote my tests with. Once those are installed, it runs the tests in each environment.
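As an aside, a nose test is just a function whose name starts with “test”. Here’s a tiny, entirely hypothetical example of the kind of thing tox runs in each environment; the function name and data below are made up for illustration, not taken from the real WeatherAlerts suite:

```python
# test_example.py -- a minimal nose-style test (hypothetical names/data).

def normalize_location(name):
    # trivial stand-in for a function under test
    return name.strip().title()

def test_normalize_location():
    # nose collects any function whose name starts with "test"
    assert normalize_location("  marion county ") == "Marion County"
```

Running `nosetests` in the package directory discovers and runs every such function, in whichever interpreter tox has activated.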
This is still a huge step, but I still have to kick it off manually, and if I forget to before pushing code out, it goes out untested. So I needed something to automate the builds and testing on each commit: a continuous integration (CI) server of some sort.
After spending some time with Google, I had a long, long list of pythonic options to accomplish this. Unfortunately, they all sucked. I tested a few out, but none did what I wanted in a manner that left me with as much hair as I had before starting. I’m sure projects like Buildbot will do what I want, but I haven’t been happy with any Python option I’ve seen thus far.
So, I left things as they were for a few days. In my previously mentioned day job, a few of the servers in my realm (the realm of all things Linux) are build servers, used for continuous integration and continuous delivery. These build servers run the project formerly known as Hudson, a.k.a. Jenkins. Jenkins is not written in Python; in fact, it’s written in Java, about as far from pythonic as one can get. But working on this cluster on occasion, I’ve found one thing about Jenkins that I really like… it seems to have been written by an intelligent form of life. It just works. So after a few days of putting it off, still looking for a 100% Python option, I spun up a new VM on my rack at home and installed Jenkins. That took a total of 10 minutes (including VM spinup). I took a look through the Jenkins plugin list and installed the git, GitHub, and Python plugins, as well as a few others.
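If you want to try the same thing, the install on a Debian/Ubuntu VM boils down to a few commands. Treat the repository URL below as an assumption (it’s the jenkins-ci.org Debian repo as it existed around this time) and check the current Jenkins docs before copying it:

```shell
# Sketch of a Jenkins install on a Debian/Ubuntu VM.
# The repo URL is an assumption -- verify against the current Jenkins docs.
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
echo "deb http://pkg.jenkins-ci.org/debian binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update
sudo apt-get install -y jenkins   # starts Jenkins on port 8080 by default
```

Once the service is up, everything else (plugins, jobs) is done through the web UI on port 8080.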
I wanted to add a build job to watch my ‘dev’ branch on GitHub, build and test any commits, and email me if anything failed. It took a minute or two to plug in the details of the repo, branch, and build steps, and less than five minutes to work out a bug or two in my build steps; these were simple things like not having tox installed on the VM. No more than 10 minutes after starting to configure, I had the job working as desired. 20 minutes. In 20 minutes I installed the Jenkins server and had it watching my git repo. After any commit, it pulls down the changes, builds them, creates the four virtual environments, installs my Python package into them, and runs the tests; if there are any problems, I have an email waiting for me in my inbox. I’m adding a job now to watch the master branch for new release tags, which will build new releases and upload them to PyPI, taking the other manual task off the table. Some might say this is a bit of overkill for a small Python module that few people will ever want to use, but there is a lot to be said for an automated and repeatable process.
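For reference, the build steps in a job like this amount to a short shell script that Jenkins runs in the checked-out workspace. Something along these lines; this is a sketch of the idea, not my exact job config, and the release step uses the classic `setup.py sdist upload` workflow:

```shell
# Jenkins "Execute shell" build step for the dev-branch job (sketch).
# Jenkins has already cloned the branch into the current workspace.
pip install tox        # the bug I hit: tox wasn't on the VM yet
tox                    # build the four virtualenvs and run the nose tests

# And for the release job watching tagged commits on master:
python setup.py sdist upload   # build a source dist and push it to PyPI
```

If any command exits non-zero, Jenkins marks the build failed and the email notification fires.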
I’m sad that I couldn’t find a Python solution, and I’ll continue testing options as they come along, but you can’t go wrong with Jenkins.