pytest-mozwebqa 1.1 released

It’s been a long time coming, but pytest-mozwebqa 1.1 has finally been released! The main feature of this new version is the ability to specify a proxy server for the browsers launched. It will also use this in conjunction with upcoming plugins pytest-browsermob-proxy (to record and report network traffic) and pytest-zap (to spider and scan for known security vulnerabilities). Check out the complete changelog for 1.1.

Announcing pytest-mozwebqa 1.0

Finally I can announce that I have released version 1.0 of the pytest plugin used by Mozilla’s Web QA team! It’s been in use for several months now, but I’ve paid off some long standing technical debt, and now consider it stable!

I should say that although this plugin was primarily developed for Mozilla’s Web QA team, anyone that wants to write test automation for websites in Python can take advantage of it. There’s very little that is specific to Mozilla, and this can easily be overridden on the command line, or you could simply fork the project and create your own version of the plugin. Anyway, as I haven’t previously announced the plugin, it’s probably a good idea for me to explain what it actually does…

Selenium integration
The primary feature of the plugin is the ability to launch and interact with a web browser. It does this by integrating with either the RC or WebDriver APIs provided by the Selenium browser automation framework. The browser is launched ahead of every test, unless a test is specifically decorated to indicate that it does not require a browser:
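A minimal sketch of what that looks like, using the `skip_selenium` mark (treat the mark name as my recollection rather than gospel; check the README for the current spelling):

```python
import pytest

# Tests carrying this mark run without a browser being launched.
@pytest.mark.skip_selenium
def test_api_returns_expected_status():
    # A pure HTTP/API-level check; no Selenium session is needed.
    assert True
```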

The plugin works with a local WebDriver instance, with a remote server (RC or WebDriver), and with Selenium Grid (also RC or WebDriver).

Sauce Labs integration

You can also use this plugin to run your tests in the cloud using your Sauce Labs account. The integration allows you to specify a build identifier and tags, which help when filtering the Sauce Labs jobs. To enable Sauce Labs integration, you simply need to specify a credentials file on the command line, which is a YAML file in the following format:
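The file itself is tiny; as I recall it looks like the following (treat the exact key names as an assumption and check the README):

```yaml
# sauce_labs.yaml -- hypothetical filename; passed on the command line
username: your_sauce_labs_username
api-key: your_sauce_labs_api_key
```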

When tests are run using Sauce Labs, there will also be additional items in the HTML report, such as the video of the test and a link to the job.

If you don’t have a Sauce Labs account already, you can sign up for one here. Sauce Labs is used by Mozilla whenever we need to run on browser/platform combinations that our own Selenium Grid doesn’t support, or whenever we need to boost the number of test runs, such as before a big deployment.

Credentials
The plugin allows you to store your application’s credentials in a YAML file specified on the command line. This is an important feature for Mozilla, where the credentials files are stored in a private repository. Anyone wanting to contribute or run the tests themselves simply has to create an account and a YAML file.
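An illustrative example of such a file (the structure and key names here are hypothetical and depend on what your tests expect):

```yaml
# credentials.yaml -- hypothetical example
default:
  email: testuser@example.com
  password: not_a_real_password
```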

Fail fast
Have you ever been frustrated when you’ve kicked off your suite of several hundred tests, just for every single one of them to launch a browser despite the application under test being unavailable? It’s happened to me enough times that I added an initial check to ensure the base URL of the application is responding with a 200 OK. This saves so much time.
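The check itself amounts to one HTTP request before any tests run; here is a rough standard-library sketch of the idea (not the plugin’s actual code):

```python
from urllib.error import URLError
from urllib.request import urlopen

def base_url_responding(base_url, timeout=10):
    """Return True if the application under test answers with HTTP 200."""
    try:
        return urlopen(base_url, timeout=timeout).getcode() == 200
    except URLError:
        return False

# If this returns False, fail the whole session immediately rather than
# launching a browser for every single test.
```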

Protect sensitive sites
I’ll leave the debate on whether you should be running your tests against sensitive environments such as production for someone else, but if you do decide to do this, the plugin gives you a little bit of extra protection. For a start, all tests are considered destructive and therefore will not run by default. You can explicitly mark tests as non-destructive. Having an opt-in system is more maintenance (I know it’s a pain), but much lower risk. I’d rather accidentally not be running a non-destructive test against production than accidentally run a destructive one, and I have felt this pain before!
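Marking a test as safe is a one-line decoration; a sketch using the `nondestructive` mark described above:

```python
import pytest

# Only tests carrying this mark run by default; everything else is
# treated as destructive and skipped unless --destructive is passed.
@pytest.mark.nondestructive
def test_home_page_loads():
    # Read-only check; nothing on the server is modified.
    assert True
```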

Of course, for some environments, you will want to run your destructive tests, and you can do so by specifying the --destructive command line option.

There’s also a final safety net just in case you try running destructive tests against a sensitive environment. This skips any destructive tests that are run against a URL that matches a regular expression. For Mozilla, this defaults to mozilla\.(com|org) and any skipped tests will give a suitable reason in your reports.
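To illustrate how that default pattern behaves (allizom.org is Mozilla’s staging domain; the URLs below are just examples):

```python
import re

# The default sensitive-URL pattern mentioned above.
SENSITIVE_URL = re.compile(r"mozilla\.(com|org)")

# Production matches, so destructive tests would be skipped:
assert SENSITIVE_URL.search("https://www.mozilla.org/en-US/")

# A staging domain does not match, so destructive tests may run:
assert not SENSITIVE_URL.search("https://addons.allizom.org/")
```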

HTML report

Digging through console logs or JUnit reports can be a little frustrating when investigating failures, so the plugin provides a nicely formatted HTML report. This shows the options used when running the tests, a short summary of the results, and then lists each test along with timings and additional resources where appropriate.

I call the additional resources the test’s “death rattle” as it’s only captured for failing tests (it’s not particularly useful for passing tests, and consumes unnecessary resources). For tests that use Selenium this should at least include a screenshot, the HTML, and the current URL when the failure occurred. If you’re running in Sauce Labs then you should also see the video of the test and a link to the test job.

For full details and documentation of the plugin, take a look over the project’s README file on GitHub. If you find any bugs or have any feature requests, please raise them in GitHub issues.

Case Conductor pytest plugin proposal

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and want to share my thoughts on how a plugin could help our Web QA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with. It’s important to reference the test case, but I see this as the bare minimum.

Current method
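Something like this (the ID and URL are invented for illustration; in practice the docstring style varies from test to test):

```python
class TestTemplates:
    def test_create_template(self):
        """Litmus 12345 -- some tests instead embed the full URL, e.g.
        https://litmus.mozilla.org/show_test.cgi?id=12345
        """
        assert True
```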

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method
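A sketch of the proposed mark (the name `caseconductor` and the call style are my suggestion, not an existing API):

```python
import pytest

# A single linked test case:
@pytest.mark.caseconductor(1234)
def test_single_linked_case():
    assert True

# The same mark could accept several IDs:
@pytest.mark.caseconductor(1234, 5678)
def test_multiple_linked_cases():
    assert True
```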

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed/failed depending on the result. In addition to the existing reports, we can then also offer a link to the Case Conductor report for the relevant test run.

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by WebQA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.

Command options

I would propose several command line options in order to cover the functionality mentioned above. In the form of output from --help, here are my suggestions:
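One possible shape for that output (every option name below is a hypothetical suggestion for a plugin that doesn’t exist yet, not a documented flag):

```
caseconductor:
  --caseconductor       submit results to Case Conductor
  --cc-product=PRODUCT  product name in Case Conductor
  --cc-cycle=CYCLE      test cycle to report against
  --cc-run=RUN          existing test run to report against
  --cc-suites=SUITES    comma separated list of test suite names
```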

Some of these would be mandatory but could fail with useful messages if omitted. For example, if the product was not provided then a list of available products could be returned. The same could be done for test cycles and test runs.