…or Data-Driven Tests
Often I find myself wanting to run a unit test for part of my system with different data values. I want to verify that the system works correctly with several combinations of data – and not just have a single good-case test with one specific combination of parameters (which is often all people content themselves with).
I do not want to create a test that loops over the test data, exercises the code, and verifies the correct execution with some assertions… that would make for a single test case that would fail at the first data combination that doesn’t work. I want it to be run as one test case per test data value.
The typical naive approach to this is to write a method that runs the actual test and performs assertions on the results (say verify_xxx_works), and create some test_xxx_* methods that call that one with different data values. Boy, is that lame.
nose includes the concept of test functions as generators. If you write your test as a generator that spits out tuples of (callable, argument...), nose will call it and run one test case per yielded value, effectively multiplying your number of test cases. You will see OKs for the values that pass, and one failure per set of arguments that fails. Great!
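In nose's style, such a generator test can be sketched like this (the function names here are my own):

```python
def check_even(n, nn):
    # the actual test body, run once per yielded argument tuple
    assert nn == n * 2

def test_evens():
    # nose treats each yielded (callable, args...) tuple
    # as a separate test case
    for i in range(0, 5):
        yield check_even, i, i * 2
```

Running nose on this module reports five test cases, not one, and a bad data value fails only its own case.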
Oops, wait, not so great. If you read the docs carefully (I didn’t on my first try) you will find the small print:
Please note that method generators are not supported in unittest.TestCase subclasses
Meaning that if your tests are written using unittest.TestCase, you're on your own again.
Unhappy with not being able to run one TestCase method with different sets of data in a non-clumsy way, I've been playing around, and the result is a small library I've called "DDT" (which could stand for either Data-Driven or Data-Decorated Tests). It allows developers to create multiple test cases from a single test method, feeding it different data using a decorator.
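To illustrate the idea, here is a minimal, self-contained sketch of how a decorator can multiply test cases on a TestCase. The decorator names mirror the spirit of DDT, but this is my own simplified sketch, not the library's actual implementation:

```python
import unittest

def data(*values):
    # sketch: attach the data values to the decorated test method
    def wrapper(func):
        func._data = values
        return func
    return wrapper

def ddt(cls):
    # sketch: for each @data-decorated method, generate one
    # numbered test method per data value, then drop the original
    for name, func in list(cls.__dict__.items()):
        if callable(func) and hasattr(func, '_data'):
            for i, value in enumerate(func._data):
                def test(self, f=func, v=value):
                    return f(self, v)
                setattr(cls, '%s_%d' % (name, i + 1), test)
            delattr(cls, name)
    return cls

@ddt
class LargerThanTwoTests(unittest.TestCase):
    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(value > 2)
```

The test runner now sees four test methods (test_larger_than_two_1 through _4), so each data value passes or fails independently.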
I just recently changed jobs: my new development team is larger, it's a very dynamic environment with multiple branches being created in git repositories all the time, and people are creating Jenkins build jobs to run automated builds and tests for these branches on a daily basis.
At some point the manual copy-pasting of Jenkins jobs that was the general practice began to annoy me, and I decided to give Jenkins' HTTP API a try. The goal was to be able to script the creation of jobs from consistent, updateable descriptions. The result is the autojenkins package, written in Python, which allows us to query the status of Jenkins jobs, trigger manual builds, create new jobs, and clean up by removing old ones.
Autojenkins is published on PyPI, you install it simply with:
pip install autojenkins
Once installed, it provides an easy to use Python client for the Jenkins HTTP API. Here’s a taste of the API with some sample usage:
from autojenkins import Jenkins
j = Jenkins('http://jenkins.pe.local')

# trigger a manual build and check results
j.build('my-job')
# get only the result string (one of 'SUCCESS', 'UNSTABLE', 'FAILURE'):
result = j.last_result('my-job')['result']
# get the configuration file for a job:
config_xml = j.get_config_xml('my-job')

# Create a new job from a job named 'template', replacing variables
j.create_copy('my-new-job', 'template', repo='my-repo', branch='my-branch')
# check result and delete if successful:
result = j.last_result('my-new-job')['result']
if result == 'SUCCESS':
    j.delete('my-new-job')
According to web browsing statistics, only about 20% of web users have displays 1024 pixels wide or less, and that share is declining fast. It looks like 1366px will quickly become the most common width. And still so many web pages are designed around fixed-width layouts of 960px, leaving most of us with half-empty displays and vertically laid-out information. Haven't we realized that displays are very horizontal nowadays?
I believe designers should seriously start considering adapting to the real sizes of their users’ screens. Be creative. Think of web pages as something horizontal rather than vertical – don’t make us scroll down all the time while leaving lots of empty space at the sides. Think adaptive layouts. Think. Do not copy what is already old and doesn’t suit most of us.
Hopefully at some point we developers will all use media queries, and designers insisting on fixed widths (which I understand give them much more control) will provide at least a couple of alternative layouts: one looking good at 1024px, and another at 1366px or wider. Maybe things like this will help.
(For a time, this might be a little like when people supported IE6 and other browsers via loading alternative style sheets, but this time by using standard methods.)
Is anyone seriously rising to the challenge? Please let me know. I'd love to hear about high-traffic sites already using some approach to multi-resolution support.
This post is built on some assumptions.
First, I assume that you already know that writing unit tests is good for you. Well, to be honest, if you are not systematically writing tests for your code, you shouldn’t be calling yourself a software engineer anyway. No excuses.
Consequently, I also assume that your latest Django project includes its dose of unit testing. But do you have a clear idea of which parts of your Django site are not being tested? Are you taking action to improve that area? In other words, are you already obtaining and analysing coverage data for your project?
If so, lucky you. I didn’t, decided it was about time, and set out to the task.
I will try to demystify the process, since it takes very little effort and you can reap substantial benefits from it – provided, of course, that you take a look at the coverage reports on a regular basis and add tests for the uncovered methods… but you promise you will do that, won't you? Great!
We will start by generating the reports manually, and will then move on to automating them into Jenkins, our friendly build butler.
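As a preview of the manual route, with coverage.py the whole exercise boils down to a few commands (assuming your tests run through manage.py, as in a standard Django project):

```shell
pip install coverage

# run the test suite under coverage, measuring only your project's code
coverage run --source='.' ./manage.py test

# print a per-module summary, with missing line numbers, to the console
coverage report -m

# or generate a browsable HTML report in htmlcov/
coverage html
```

The HTML report is the one you will want to wire into Jenkins later.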
These are the steps remaining from our previous article. In order to complete our desired setup, we must configure Apache with mod_wsgi pointing to the new virtual environment.
But before we can even do that, we need to set up mod_wsgi, which in our case requires building and installing it from source.
mod_wsgi with our virtual environment
Detailed explanations for using virtual environments with mod_wsgi can be found here.
Just from reading over the docs a bit, and based on some prior experience (2+ years ago, though), I was expecting this to be the main pain area of the whole process. The docs say that mod_wsgi has to be compiled against the same version of Python your code will run under, which means I will have to build from source, since Ubuntu 9.04's version of mod_wsgi is linked against its included Python 2.6.
First, we must build and install mod_wsgi against the correct version of Python. The installation guide for mod_wsgi is very clear; you just need to follow it. Below are the distilled steps my installation needed.
We start by installing the Apache2 development libraries (which we will need in order to build mod_wsgi) and downloading mod_wsgi from its Subversion repository.
sudo apt-get install apache2-dev
svn co http://modwsgi.googlecode.com/svn/branches/mod_wsgi-2.X mod_wsgi-2.x
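From there, the build follows the usual configure/make routine. The --with-python path below is an assumption for illustration; point it at the Python interpreter your virtual environment actually uses:

```shell
cd mod_wsgi-2.x
# compile against the same Python your code will run under
# (path is hypothetical; substitute your virtualenv's interpreter)
./configure --with-python=/path/to/virtualenv/bin/python
make
sudo make install
```

This leaves mod_wsgi.so in Apache's modules directory, ready to be enabled in the Apache configuration.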