Pretenders – fake servers for testing

Finally, we released pretenders to the general public, hooray! This is a project I have been developing with my friend and now ex-colleague Alex Couper. It has been a very interesting piece of work, and I am really glad it is out for other test-minded people to enjoy.

Pretenders are fake servers for testing purposes. They are used to mock external servers your code interacts with (such as HTTP APIs or SMTP servers). Mocking is done at the protocol level, making these appropriate for integration test environments, and usable to test applications written in any language (not just Python, which is the language we used to write pretenders).

As a starter, here are the slides for the lightning talk I gave at PyconUK 2012:

Pretenders is an open source project. You will find the source code on GitHub, and the documentation on Read the Docs. As it is just fresh out of the oven, it has some rough edges, mostly around documentation. Feedback and contributions are welcome.

Example usage

In order to use pretenders in your tests you will have to start a main server we call the boss. The boss will spin up fakes (pretenders) on demand, and assign each one a free port from a configured range. The following examples assume a pretenders boss running on localhost at port 8000.

This is a taste of how you would write a test using pretenders to mock an external HTTP API your code depends on:

from pretenders.client.http import HTTPMock
# Assertion helper used further down (assuming nose.tools; use your preferred one)
from nose.tools import assert_equal

# Assume a running boss server at localhost:8000
# Initialise the mock client and clear all responses
mock = HTTPMock('localhost', 8000)

# For GET requests to /hello reply with a body of 'Hello'
mock.when('GET /hello').reply('Hello')

# For the next POST or PUT to /somewhere, simulate a BAD REQUEST status code
mock.when('(POST|PUT) /somewhere').reply(status=400)

# For the next request (any method, any URL) respond with some JSON data
mock.reply('{"temperature": 23}', headers={'Content-Type': 'application/json'})

# Point your app to the pretender's URL, and exercise it
set_service_url(mock.pretend_access_point)  # how you do this is app-specific

# ... run stuff

# Verify requests your code made
r = mock.get_request(0)
assert_equal(r.method, 'GET')
assert_equal(r.url, '/weather?city=barcelona')

Similarly, here is how you would test an application that sends e-mails, by mocking the SMTP server:

from pretenders.client.smtp import SMTPMock
# Assertion helpers used further down (again assuming nose.tools)
from nose.tools import assert_equals, assert_true

# Create a mock smtp service
smtp_mock = SMTPMock('localhost', 8000)

# Get the port number this pretender is faking on, and assign it as appropriate
# to the system under test (how you do this will again depend on your application)
set_smtp_host_and_port("localhost", smtp_mock.pretend_port)

# ...run functionality that should cause an email to be sent

# Check that an email was sent
email_message = smtp_mock.get_email(0)
assert_equals(email_message['Subject'], "Thank you for your order")
assert_equals(email_message['From'], "foo@bar.com")
assert_equals(email_message['To'], "customer@address.com")
assert_true("Your order will be with you" in email_message.content)

Multiplying Python Unit Test Cases with Different Sets of Data

…or Data-Driven Tests

Often I find myself wanting to run a unit test for some part of my system with different data values. I want to verify that the system works correctly with several combinations of data, and not just have a single good-case test with one specific combination of parameters (which is often all people are content with).

I do not want to create a test that loops over the test data, exercises the code, and verifies correct execution with some assertions… that would make for a single test case that fails at the first data combination that doesn’t work. I want it to run as one test case per data value.

The typical naive approach to this is to write a method that runs the actual test and performs assertions on the results (say verify_xxx_works), and create some test_xxx_* methods that call that one with different data values. Boy is that lame.

Fortunately, nose includes the concept of test functions as generators. If you write your test as a generator that spits out tuples of (callable, argument...), nose will call it and run one test case per yielded value, thus effectively multiplying your number of test cases. You will see OKs for values that pass, and one failure per each set of arguments that fails. Great!
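To make this concrete, here is a minimal sketch of such a test generator (the function under test and the data values are made up for illustration):

# nose runs one test case per yielded (callable, *args) tuple
def check_even(number):
    assert number % 2 == 0

def test_numbers_are_even():
    for number in (2, 4, 6, 7):  # 7 will show up as its own failing test case
        yield check_even, number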

Oops, wait, not so great. If you read the docs carefully (I didn’t on my first try) you will find the small print: “Please note that method generators are not supported in unittest.TestCase subclasses”.

Meaning that if your tests are written using unittest.TestCase you’re on your own again.

Unhappy with the situation of not being able to run one TestCase method with different sets of data in a non-clumsy way, I’ve been playing around and the result is a small library that I’ve called “DDT” (which could either stand for Data-Driven or Data-Decorated Tests). It allows developers to create multiple test cases from a single test method, feeding it different data using a decorator.
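To give a flavour of the idea, usage looks something along these lines: a class decorator marks the TestCase, and a data decorator feeds values to the test method (toy test, names as in the released library):

import unittest
from ddt import ddt, data

@ddt
class PositiveValuesTest(unittest.TestCase):

    # Each value passed to @data becomes its own test case
    @data(1, 10, 100)
    def test_value_is_positive(self, value):
        self.assertGreater(value, 0)

Running this produces three test cases instead of one, each passing or failing independently.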
Continue reading

Publishing Django Test Coverage Reports in Jenkins

This post is built on some assumptions.

First, I assume that you already know that writing unit tests is good for you. Well, to be honest, if you are not systematically writing tests for your code, you shouldn’t be calling yourself a software engineer anyway. No excuses.

In consequence I also assume that your latest Django project includes its dose of unit testing. But do you have a clear idea of which parts of your Django site are not being tested? Are you taking action to improve on that area? In other words, are you already obtaining and analysing coverage data for your project?

If so, lucky you. I didn’t, decided it was about time, and got down to the task.

I will try to demystify the process, since it takes very little effort and you can reap substantial benefits from it – provided, of course, that you take a look at the coverage reports on a regular basis, and add tests for the uncovered methods… but you promise you will do that, won’t you? Great!

We will start by generating the reports manually, and will then move on to automating them into Jenkins, our friendly build butler.
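As a taster of the manual part, here is a rough sketch of one way to drive it from Python, wrapping Django’s test runner with the coverage.py API (the project and settings names are placeholders, and the exact invocation will depend on your Django version and setup):

import os
import coverage
import django
from django.conf import settings
from django.test.utils import get_runner

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder

# Only measure our own code, not third-party packages
cov = coverage.Coverage(source=["myproject"])
cov.start()

django.setup()
TestRunner = get_runner(settings)
failures = TestRunner().run_tests(["myproject"])

cov.stop()
cov.save()
cov.report()                            # plain-text summary on the console
cov.xml_report(outfile="coverage.xml")  # Cobertura-style XML that Jenkins plugins understand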
Continue reading