Sprinting with pytest in Freiburg

On my way to the venue

Last week was the pytest development sprint, held in the beautiful town of Freiburg, Germany. I had been really looking forward to it, and as it came immediately after the Mozilla all-hands in London I was still buzzing with excitement when I started my journey to Freiburg.

On the first morning I wasn’t sure how to get to our sprint venue via public transport, but it didn’t seem far to walk from my hotel. It was a lovely sunny morning, and I arrived just in time for the introductions. Having been a pytest user for over five years I was already familiar with Holger, Ronny, and a few others, but this was the first time I had met them in person. We then spent some time planning out our first day and coming up with things to work on throughout the week. My first activity was pairing up to work on pytest issues.

Breakfast at Cafe Schmidt

For my first task I paired with Daniel to work on an issue he had recently encountered, and which I had also needed to work around in recent versions of pytest. It turned out to be quite a complex issue related to determining the root directory, which is used for relative references to test files as well as a number of other things. The fix seemed simple at first: we just needed to exclude arguments that are not paths when determining the root directory. However, there were a number of edge cases that needed resolving. The patch to fix this has not yet landed, but I’m feeling confident that it will be merged soon. When it does, I think we’ll be able to close at least three related issues!

Ducks in the town centre

Next, I worked with Bruno on moving a bunch of my plugins to the pytest-dev GitHub organisation. This allows any of the pytest core team to merge fixes to my plugins and means I’m not a blocker for any important bug fixes. I still intend to support the plugins, but it feels good to have a larger team looking after them if needed. The plugins I moved are pytest-selenium, pytest-html, pytest-variables, and pytest-base-url. Later in the week we also moved pytest-repeat with the approval of its author, who is happy for someone else to maintain the project.

If you’ve never used pytest, you might expect to be able to simply run the pytest command to run your tests. Unfortunately, this isn’t the case. The reason is that the tool used to be part of a collection of other tools, all sharing the common prefix py., so you’d run your tests using the py.test command. I’m pleased to say that I worked with Oliver during the sprint to introduce pytest as the recommended command line entry point. Don’t worry – the old entry point will still work when 3.0 is released, but we should be able to skip a bunch of confusion for new users!
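
In other words, once 3.0 is out both of the following should run the same tests (a quick illustration; the old spelling remains available):

```
# historical entry point, still supported
py.test tests/

# recommended entry point from pytest 3.0 onwards
pytest tests/
```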

Break day hike

On Thursday we took a break, riding a cable car up a nearby peak and hiking around for a few hours. I also finally got an opportunity to order a slice of Schwarzwälder Kirschtorte (Black Forest gateau), which is named after the area and was a favourite of mine growing up. The break was needed after my brain had been working overtime processing the various talks, demonstrations, and discussions. We still talked a lot about the project, but being out in the beautiful scenery watching paragliders gracefully circling made for a great day.

Hefeweizen!

When we returned to our sprint venue on Friday I headed straight into a bug triage with Tom, which ended up mostly focusing on one particular issue. The issue relates to hiding information that at first glance looks redundant in the case of a failure, but on closer inspection there are actually many examples where this extra line in the explanation can be very useful.

Unfortunately I had to leave on Saturday morning, which meant I missed out on the final day of the sprint. I have to say that I can’t wait to attend the next one as I had so much fun getting to know everyone, learning some handy phrases in a number of different languages, and commiserating/laughing together in the wake of Brexit! I’m already missing my fellow sprinters!

Python testing sprint 2016

In June, the pytest developer community are gathering in Freiburg, Germany for a development sprint. This is being funded via an Indiegogo campaign, which needs your help to reach its goal! I am excited to say that I will be attending, which means that after over five years of using pytest, I’ll finally get to meet some of the core contributors.

I first learned about pytest when I joined Mozilla in late 2010. Much of the browser-based automation at that time was using either Selenium IDE or Python’s unittest. There was a need to simplify much of the Python code and to standardise across the various suites. One important requirement was the generation of JUnit XML reports (considered essential for reporting results in Jenkins) without compromising the ability to run tests in parallel. Initially we looked into nose, but it had an issue with this exact requirement. Fortunately, pytest didn’t have a problem with this – JUnit XML was supported in core and was compatible with the pytest-xdist plugin for running tests in parallel.
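
For illustration, here’s how those two requirements combine on the command line with pytest-xdist installed (a sketch using today’s command name and an arbitrary worker count):

```
# write a JUnit XML report while distributing tests across four worker processes
pytest --junitxml=results.xml -n 4 tests/
```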

Ever since the decision to use pytest was made, I have not seen a compelling reason to switch away. I’ve worked on various projects, some with overly complex suites based on unittest, and I’ve always been grateful when I’ve been able to return to pytest. The active development of pytest has meant we’ve never had to worry about the project becoming unsupported. I’ve always found the core contributors to be extremely friendly and helpful on IRC (#pylib on irc.freenode.net) whenever I’ve needed help, and more recently I’ve also been following the pytest-dev mailing list.

I’ve recently written about the various plugins that we’ve released, which have allowed us to considerably reduce the amount of duplication between our automation suites. This is even more critical as the Web QA team shifts responsibility and ownership of some of their suites to the developers. It means we can continue to enhance the plugins and benefit all of our users at once, and our users are not limited to teams at Mozilla. The pytest user base is large, and that means our plugins are discovered and used by many. I always love hearing from users, especially when they submit their own enhancements to our plugins!

There are a few features I particularly like in pytest. Highest on the list is probably fixtures, which can really simplify setup and teardown, whilst keeping the codebase very clean. I also like being able to mark tests and use this to influence the collection of tests. One I find myself using a lot is a ‘smoke’ or ‘sanity’ marker, which collects a subset of the tests for when you can’t afford to run the entire suite.
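
As a minimal sketch of both features (the fixture and marker names here are made up for illustration):

```python
import pytest

@pytest.fixture
def database():
    # setup: runs before each test that requests this fixture
    db = {"connected": True}
    yield db
    # teardown: runs after the test, even if it failed
    db["connected"] = False

@pytest.mark.smoke  # register the marker in pytest.ini to avoid warnings on newer pytest
def test_connection(database):
    assert database["connected"]

# run only the smoke subset with: pytest -m smoke
```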

During the sprint in June, I’d like to spend some time improving our plugins. In particular I hope to learn better ways to write tests for plugins. I’m not sure how much I’ll be able to help with the core pytest development, but I do have my own wishlist for improvements. This includes the following:

Maybe I’ll even be able to work on one of these, or any of the open issues on pytest with guidance from the experts in the room.

Selenium tests with pytest

When you think of Mozilla you most likely first associate it with Firefox or our mission to build a better internet. You may not think we have many websites of our own, beyond perhaps the one where you can download our products. It’s only when you start listing them that you realise how many we actually have: addons repository, product support, app marketplace, build results, crash statistics, community directory, contributor tasks, technical documentation, and that’s just a few! Each of these has a suite of automated functional tests that simulate a user interacting with their browser. For most of these we’re using Python and the pytest harness. Our framework has evolved over time, and this year there have been a few exciting changes.

Over four years ago we developed and released a plugin for pytest that removed a lot of duplicate code from across our suites. This plugin did several things: it handled starting a Selenium browser, passing credentials for tests to use, and generating an HTML report. As it didn’t just do one job, it was rather difficult to name. In the end we picked pytest-mozwebqa because the only thing it was really specific to was the needs of the Web QA team at Mozilla. It really took us to a new level of consistency and quality across all of our web automation projects.

Enhanced HTML report generated by pytest-html

This year, when I officially joined the Web QA team, I started working on breaking the plugin up into smaller plugins, each with a single purpose. The first to be released was the HTML report generation (pytest-html), which generates a single-file report as an alternative to the existing JUnit report or console output. The plugin was written such that the report can be enhanced by other plugins, which ultimately allows us to include screenshots and other useful things in the report.
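
As a sketch of that extension point, a project’s conftest.py (or another plugin) can append extra content to a test’s entry in the report; the attribute names have shifted slightly between pytest-html versions, so treat the details as indicative:

```python
# conftest.py -- append extra content to failing tests in the pytest-html report
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if pytest_html and report.when == "call" and report.failed:
        # attach a link and an arbitrary HTML snippet to the report row
        extra.append(pytest_html.extras.url("https://example.com/build/123"))
        extra.append(pytest_html.extras.html("<div>additional debug output</div>"))
    report.extra = extra
```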

Next up was the variables injection (pytest-variables). This was needed primarily because we have tests that require an existing user account in the application under test. We couldn’t simply hard-code these credentials into our tests, because our tests are open source, and if we exposed these credentials someone may be able to use them and adversely affect our test results. With this plugin we are able to store our credentials in a private JSON file that can be simply referenced from the command line.
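
For example (the file name, keys, and values here are purely illustrative), a test reads the injected values through the variables fixture provided by the plugin:

```python
# run with: pytest --variables credentials.json
# where credentials.json might look like:
#   {"credentials": {"default": {"username": "user", "password": "secret"}}}

def test_login(variables):
    # 'variables' is the fixture injected by pytest-variables
    user = variables["credentials"]["default"]
    assert user["username"] and user["password"]
```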

The final plugin was for browser provisioning (pytest-selenium). This started as a fork of the original plugin because much of the code already existed. There were a number of improvements, such as providing direct access to the Selenium object in tests, and avoiding setting a default implicit wait. In addition to supporting Sauce Labs, we also added support for BrowserStack and TestingBot.
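
A minimal example of that direct access (the site and driver here are arbitrary):

```python
# run with e.g.: pytest --driver Firefox
def test_homepage_title(selenium):
    # 'selenium' is the WebDriver instance provided by pytest-selenium
    selenium.get("https://www.example.com")
    assert "Example" in selenium.title
```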

Now that pytest-selenium has been released, we have started to migrate our own projects away from pytest-mozwebqa. The migration is relatively painless, but does involve changes to tests. If you’re a user of pytest-mozwebqa you can check out a few examples of the migration. There will no longer be any releases of pytest-mozwebqa and I will soon be marking this project as deprecated.

The most rewarding consequence of breaking up the plugins is that we’ve already seen individual contributors adopting and submitting patches. If you’re using any of these plugins let us know – I always love hearing how and where our tools are used!

pytest-mozwebqa 1.1 released

It’s been a long time coming, but pytest-mozwebqa 1.1 has finally been released! The main feature of this new version is the ability to specify a proxy server for the browsers it launches. It will also use this in conjunction with the upcoming pytest-browsermob-proxy (to record and report network traffic) and pytest-zap (to spider and scan for known security vulnerabilities) plugins. Check out the complete changelog for 1.1.

Announcing pytest-mozwebqa 1.0

Finally I can announce that I have released version 1.0 of the pytest plugin used by Mozilla’s Web QA team! It’s been in use for several months now, but I’ve paid off some long standing technical debt, and now consider it stable!

I should say that although this plugin was primarily developed for Mozilla’s Web QA team, anyone that wants to write test automation for websites in Python can take advantage of it. There’s very little that is specific to Mozilla, and this can easily be overridden on the command line, or you could simply fork the project and create your own version of the plugin. Anyway, as I haven’t previously announced the plugin, it’s probably a good idea for me to explain what it actually does…

Selenium integration
The primary feature of the plugin is its ability to launch and interact with a web browser. It does this by integrating with either the RC or WebDriver APIs provided by the Selenium browser automation framework. The browser is launched ahead of every test, unless the test is specifically decorated to indicate that it does not require a browser:
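
As a rough sketch (the marker name is from memory and may differ between plugin versions), such a test looks like this:

```python
import pytest

@pytest.mark.skip_selenium  # tell the plugin not to launch a browser for this test
def test_no_browser_needed():
    # exercises something that doesn't involve the browser at all
    assert "pytest-mozwebqa".startswith("pytest")
```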

The plugin works with a local WebDriver instance, with a remote server (RC or WebDriver), and with Selenium Grid (also RC or WebDriver).

Sauce Labs integration

You can also use this plugin to run your tests in the cloud using your Sauce Labs account. The integration allows you to specify a build identifier and tags, which help when filtering the Sauce Labs jobs. To enable Sauce Labs integration, you simply need to specify a credentials file on the command line, which is a YAML file in the following format:
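
If memory serves, the file simply contains your Sauce Labs username and API key, along these lines (the key names are illustrative, so check the plugin’s README for the exact format):

```yaml
# saucelabs.yaml
username: your-sauce-username
api-key: your-sauce-api-key
```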

When tests are run using Sauce Labs, there will also be additional items in the HTML report, such as the video of the test and a link to the job.

If you don’t have a Sauce Labs account already, you can sign up for one here. Sauce Labs is used by Mozilla whenever we need to run on browser/platform combinations that our own Selenium Grid doesn’t support, or whenever we need to boost the number of test runs, such as before a big deployment.

Credentials
The plugin allows you to store your application’s credentials in a YAML file specified on the command line. This is an important feature for Mozilla, where the credentials files are stored in a private repository. Anyone wanting to contribute or run the tests themselves simply has to create an account and a YAML file.
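
A credentials file might look something like this (the structure and key names are illustrative rather than the plugin’s exact schema):

```yaml
# credentials.yaml
default:
  username: testuser
  email: testuser@example.com
  password: secret
```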

Fail fast
Have you ever been frustrated when you’ve kicked off your suite of several hundred tests, only for every single one of them to launch a browser despite the application under test being unavailable? It’s happened to me enough times that I added an initial check to ensure the base URL of the application is responding with a 200 OK. This saves so much time.
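
Conceptually, the check amounts to something like the following (a simplified sketch; in the plugin the URL comes from the command line rather than a constant):

```python
# conftest.py -- sketch of a pre-flight check on the application under test
import pytest
import requests

BASE_URL = "https://example.com"  # illustrative; normally passed on the command line

def pytest_sessionstart(session):
    response = requests.get(BASE_URL)
    if response.status_code != 200:
        pytest.exit("Base URL returned %s, aborting before any browsers launch" % response.status_code)
```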

Protect sensitive sites
I’ll leave the debate about whether you should run your tests against sensitive environments such as production to someone else, but if you do decide to do this, the plugin gives you a little bit of extra protection. For a start, all tests are considered destructive and therefore will not run by default. You can explicitly mark tests as non-destructive. Having an opt-in system is more maintenance (I know it’s a pain), but much lower risk. I’d rather accidentally not be running a non-destructive test against production than accidentally run a destructive one, and I have felt this pain before!
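
Marking a read-only test as safe is a one-line decorator (a sketch; the mozwebqa fixture shown is the plugin’s test setup object, from memory):

```python
import pytest

@pytest.mark.nondestructive  # opt this read-only test in so it runs by default
def test_view_homepage(mozwebqa):
    # only reads the page; it doesn't modify any data in the application under test
    mozwebqa.selenium.get(mozwebqa.base_url)
```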

Of course, for some environments, you will want to run your destructive tests, and you can do so by specifying the --destructive command line option.

There’s also a final safety net just in case you try running destructive tests against a sensitive environment. This skips any destructive tests that are run against a URL that matches a regular expression. For Mozilla, this defaults to mozilla\.(com|org) and any skipped tests will give a suitable reason in your reports.

HTML report

Digging through console logs or JUnit reports can be a little frustrating when investigating failures, so the plugin provides a nicely formatted HTML report. This shows the options used when running the tests, a short summary of the results, and then lists each test along with timings and additional resources where appropriate.

I call the additional resources the test’s “death rattle” as it’s only captured for failing tests (it’s not particularly useful for passing tests, and consumes unnecessary resources). For tests that use Selenium this should at least include a screenshot, the HTML, and the current URL when the failure occurred. If you’re running in Sauce Labs then you should also see the video of the test and a link to the test job.

For full details and documentation of the plugin, take a look over the project’s README file on GitHub. If you find any bugs or have any feature requests, please raise them in GitHub issues.

Case Conductor pytest plugin proposal

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and want to share my thoughts on how a plugin could help our WebQA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with. It’s important to reference the test case, but I see this as the bare minimum.

Current method
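
For illustration (the test name and ID are made up), the current style looks something like this:

```python
# current style: the Litmus test case ID lives in the docstring
# (some tests even include the full URL to the test case)
def test_change_username(mozwebqa):
    """Litmus 12345."""
    ...
```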

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method
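
And what I’d propose instead, sketched with a hypothetical marker name (nothing here exists yet; it’s what I’d like the plugin to support):

```python
import pytest

@pytest.mark.caseconductor(12345)
def test_change_username(mozwebqa):
    ...

@pytest.mark.caseconductor([12345, 67890])  # a test covering multiple test cases
def test_change_password(mozwebqa):
    ...
```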

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed or failed depending on the result. In addition to the existing reports, we can then also offer a link to the Case Conductor report for the relevant test run.

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by WebQA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.

Command options

I would propose several command line options in order to cover the functionality mentioned above. In the form of output from --help, here are my suggestions:
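
Purely as an illustration of the kind of interface I have in mind (these option names are invented, not an existing implementation):

```
caseconductor:
  --cc-url=URL          base URL of the Case Conductor instance
  --cc-product=PRODUCT  product the automated results belong to
  --cc-cycle=CYCLE      test cycle to report against
  --cc-run=RUN          existing test run to update (otherwise create one per session)
  --cc-suites=SUITES    comma-separated list of test suites to include
  --cc-coverage         report automated coverage of the selected test cases
```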

Some of these would be mandatory but could fail with useful messages if omitted. For example, if the product was not provided then a list of available products could be returned. The same could be done for test cycles and test runs.