Q3/2011 in review

In the hope that I might inspire others to do the same, I’ve created a few screencasts showing some of the cool things I worked on in the last quarter. I’ve tried to keep them all short, and they’re all available in HD so no need to squint to see details.

  • pytest plugin for WebQA
  • Endurance tests daily results
  • System graphics details in endurance reports
  • Running the Mozmill tests in Jenkins
  • Running the Selenium IDE Mozmill tests in Bamboo

Adding Mozmill tests to the Selenium IDE build system

Back in April I blogged about the Mozmill tests I’d written to test Selenium IDE. I followed up in June with a blog post covering how to run these tests. The natural progression is to add these tests into the existing Selenium IDE build environment, which is run using Atlassian’s continuous integration server, Bamboo.

At Mozilla, we want to run the Mozmill tests for the released versions of add-ons against the latest builds of Firefox. This lets us detect any regressions in Firefox that could cause issues for the add-on authors, and it gives those authors an early warning of any compatibility issue with an upcoming release of Firefox. We currently run these tests on a schedule, but will soon be looking to move to a continuous integration solution ourselves. You can see the results of these daily tests on our dashboard.

The add-on authors (the Selenium project in this case) are typically more interested in knowing whether the latest build of their add-on functions in the current version of Firefox, giving them the confidence to release bug fixes and new features without regressions. As the Selenium project already uses continuous integration, adding a Mozmill test stage to its build is a move towards achieving the same within Mozilla, and it immediately benefits the add-on author.

The Selenium IDE project plan in Bamboo has three stages: Build, Test, and Package. I’m going to focus on the Test stage in this blog post, as the other stages are very specific to Selenium. If you’re reading this and you develop a Firefox add-on then you should be able to apply the following to your project without too much tweaking.

Prerequisites

There are just two prerequisites for running these tests:

  • Python: The automation scripts are written in Python, so this is required.
  • Mercurial: You must have the Python modules for Mercurial installed, as this is the source code management tool used by the automation scripts and tests.

Tasks

There are several tasks to complete for the Test stage. Below I list these tasks, with an explanation and configuration steps for each.

Clone mozmill-tests

Because we want to run the tests against a specific build of the add-on rather than the latest release, we need to clone the mozmill-tests repository and override the addon.ini file with the location of our target add-on. This is simply a Mercurial task, as follows:

hg clone http://hg.mozilla.org/qa/mozmill-tests

Note that if you have Mercurial set up as an executable in your continuous integration server then you may not need the ‘hg’ part of this command, as it will be substituted based on the agent running the command.

Switch branch

By default the mozmill-tests repository will be set to the default branch. This is paired with the mozilla-central branch of Firefox, and is therefore relevant only when running tests against the very latest nightly builds of Firefox. As we’re interested in testing against the latest release, we switch to the mozilla-release branch.

hg checkout mozilla-release

You will need to make sure this command is run within the directory of the cloned mozmill-tests repository, which by default will be simply mozmill-tests.

Create addon.ini

The addon.ini file tells the script where to download/install the add-on from. If you didn’t create your own, the add-on would likely be downloaded from addons.mozilla.org or the add-on author’s preferred download location for the latest release. The file is essentially an ini file, with download locations for linux, mac, and win. If your continuous integration server provides a link to the latest artifacts then you can use this, or you can use some form of artifact sharing such as Bamboo provides. The following commands create a suitable addon.ini for Selenium IDE.

echo "[download]" > tests/addons/ide@seleniumhq.org/addon.ini
echo "linux=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini
echo "mac=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini
echo "win=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini

If you’re using Jenkins, then the Copy Artifact Plugin could be useful for sharing artifacts between builds.

Commit addon.ini

It’s necessary to commit the replacement addon.ini file so that it is included when the local repository is cloned during the testrun.

hg commit -m 'Specify latest build of addon.'

Note: It really doesn’t matter what you put for the commit message here as this commit is not preserved between builds.

Download latest Firefox release

Rather than having to make sure your build agent always has the latest version of Firefox installed, there’s a handy script that can download it for you. This is a Python script, and therefore needs to be set up to use the Python executable.

./download.py --directory=latest-release --platform=mac --type=release --version=latest

Substitute the platform value with whichever platform your build agent is running on. The latest release of Firefox will be downloaded to the latest-release directory. Note that the version value of ‘latest’ relies on a symlink on Mozilla’s FTP server that points to the directory of the latest released version number.

Run Mozmill tests

You can now run the script that executes the Mozmill tests.

./testrun_addons.py --junit=results.xml --logfile=results.log --repository=mozmill-tests --target-addons=ide@seleniumhq.org --with-untrusted latest-release

Here’s an explanation of the command line options:

  • The junit command line option determines where the results will be stored. The JUnit report format is supported by many continuous integration servers, and often provides some nice reporting and visualizations of the results. A counter is appended to the destination filename for each file created; for example, results.xml becomes results_0.xml.
  • The logfile option is optional; however, the log can be a useful build artifact if you have failures.
  • As we’re using a locally modified repository, we need to specify its location using the repository command line option. Our earlier clone task used the default directory name, mozmill-tests.
  • We need to set the target add-on using the target-addons option. This must match the directory beneath tests/addons, which in the case of Selenium IDE is ide@seleniumhq.org.
  • The with-untrusted flag is needed if the add-on is not hosted at addons.mozilla.org, as any add-on hosted by Mozilla is implicitly trusted. Selenium IDE is not currently hosted by Mozilla, so we include this flag.
  • Finally, the path to the Firefox binary is needed. It’s possible to simply point to a directory that contains a downloaded copy of Firefox, so we just use latest-release as that’s where our download task was told to put Firefox.

Parse test results

As mentioned above, a lot of continuous integration servers support results in JUnit report format, so your final task may be to specify the location of these files. If you used the example given above, you can specify them using the pattern results*.xml.

The Bamboo instance for Selenium is publicly viewable, so you can see the results of recent builds for Selenium IDE, and the report for the latest build. You can also see the results on the Mozmill archive dashboard. The Selenium IDE project is built whenever a change is committed to the core or dependent code sections of the repository, and it can also be triggered manually.

Unfortunately the Selenium build hardware is experiencing stability issues at the time of writing, meaning that there is not always a suitable build agent available for the Mozmill tests.

Known issues

Currently there is an issue with the Mozmill automation script: it exits with a success code even when tests have failed. Fortunately, the continuous integration servers that I’ve been working with determine build success based on the JUnit reports; if this wasn’t the case, builds with failing tests would be incorrectly reported as successful. We have bug 626712 on file for this issue, and it will hopefully be resolved soon.
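In the meantime, if your continuous integration server doesn’t determine build success from JUnit reports, a small wrapper along the following lines could fail the build explicitly. This is just a sketch; it greps for a non-zero failures attribute in the standard JUnit report format:

#!/bin/sh
# Run the testrun as before, then inspect the JUnit results ourselves,
# because the script currently exits with success even on test failures.
./testrun_addons.py --junit=results.xml --logfile=results.log --repository=mozmill-tests --target-addons=ide@seleniumhq.org --with-untrusted latest-release

# Fail the build if any results file reports test failures.
if grep -q 'failures="[1-9]' results*.xml; then
  echo "Mozmill tests reported failures" >&2
  exit 1
fi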

Running the Selenium IDE Mozmill tests

A short while ago I posted about the Mozmill tests I’ve created for Selenium IDE; however, I didn’t cover how you can run these tests yourself. Currently I run them manually as needed, to ensure that the nightly Firefox builds have not regressed or introduced changes in any areas that the add-on depends on. We ultimately intend this to be a scheduled job.

I have also added a job on the Selenium continuous integration server that runs the tests against a released version of Firefox. In the future this will test the latest build of Selenium IDE, and will run every time the add-on is built.

In order to run the Selenium IDE tests you will need Mercurial and Mozmill installed, which you can do simply by running pip install mercurial mozmill. Once you have these you can clone the mozmill-automation repository, using the following command:

hg clone http://hg.mozilla.org/qa/mozmill-automation

Then from the repository directory run the following:

./testrun_addons.py --report=http://mozmill-crowd.brasstacks.mozilla.com/db/ --target-addons=ide@seleniumhq.org --with-untrusted /Applications/Firefox.app

Reports will be sent to our dashboard, as specified by the --report parameter, where they are available to view.

The --target-addons parameter specifies that we only want to run the Selenium IDE tests, and not all of the add-on tests we have. The --with-untrusted parameter is required because Selenium IDE is not listed on addons.mozilla.org and is therefore ‘untrusted’.

The final parameter is the Firefox application you want to run the tests against. The tests can currently be run against Nightly (7.0), Aurora (6.0), and Beta (5.0), as well as the current releases (4.0, 3.6, and 3.5).

I’ve also recorded a short screencast demonstrating how to run the tests.

With the recent release of Selenium IDE 1.0.11, I was able to push some new tests. These check a few more commands, and bring the total number of tests up to 40. If you’re interested in helping out and you have any questions, you can either get in touch with me directly, ask in the #selenium IRC channel on Freenode, or post a message to the selenium-developers Google group.

Testing Selenium IDE with Mozmill

Mozmill tests can be written for any Gecko-based application, and can therefore be used to test Firefox extensions (add-ons). Since October I have been working on a new suite of tests for the Selenium IDE extension in the hope that we will be able to discover any regressions in new versions of either the add-on or Firefox itself. Another reason for creating the suite is to demonstrate the ease with which such tests can be written, and to encourage add-on developers to create test suites themselves.

Once tests have been created they can be checked into the Mozmill Tests repository. We will soon be running these on a daily basis against nightly Firefox builds and making reports available on our public dashboard.

The Selenium IDE tests currently comprise three major parts:

  1. The shared module (selenium.js) abstracts the tests from the location of elements, provides centralised methods for common tasks, and exposes properties based on the UI.
  2. The checks helper module (checks.js) provides methods for common assertions to avoid duplication across tests.
  3. The tests themselves.

There are currently just 20 tests, which basically execute Selenium commands and check that they pass or fail as expected. Below is a guide to constructing one of these tests as if you weren’t using the shared module or checks helper module. The test verifies that the assertText command executes and passes as expected.
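The snippets that follow each step are rough sketches against the Mozmill 1.5 API rather than the exact code from the suite; the element locators, the Selenium IDE window lookup, and the getElement() signature are all assumptions. Together they would form the body of a single test function, preceded by the usual Mozmill boilerplate to obtain a browser controller:

// Standard Mozmill setup: keep a controller for the browser window.
var setupModule = function (module) {
  controller = mozmill.getBrowserController();
}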

First, we open Selenium IDE using the Tools menu item:
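// Click the menu item that Selenium IDE adds to the Tools menu
// (the element ID here is an assumption).
controller.click(new elementslib.ID(controller.window.document, "menu_seleniumIDE"));

// Wait for the Selenium IDE window to appear, then create a controller
// for it (the window type string and lookup are also assumptions).
controller.waitFor(function () {
  return mozmill.utils.getWindowByType("global:selenium-ide") != null;
}, "Selenium IDE window has opened");
var seleniumController = new mozmill.controller.MozMillController(
  mozmill.utils.getWindowByType("global:selenium-ide"));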

Then we clear the current contents of the Base URL field by selecting all of the text in it and hitting the delete key, and then type in our test data. You will notice here a reference to the getElement method, which allows us to gather all locators in a single location for less duplication, and much simpler test maintenance:
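// Select all of the text in the Base URL field, delete it, then type in
// the test data; the URL is illustrative, and getElement() is the shared
// module's locator helper (its signature here is an assumption).
var baseURL = getElement({type: "baseURL"});
seleniumController.keypress(baseURL, "a", {accelKey: true});
seleniumController.keypress(baseURL, "VK_DELETE", {});
seleniumController.type(baseURL, "http://example.com/");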

Now we add three new commands to our Selenium test case by selecting the next available row and typing into the various fields:
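// A small helper to add a command by clicking the next available row and
// typing into the Command, Target, and Value fields (locators assumed).
function addCommand(command, target, value) {
  seleniumController.click(getElement({type: "commandRow", index: "next"}));
  seleniumController.type(getElement({type: "commandField"}), command);
  seleniumController.type(getElement({type: "targetField"}), target);
  seleniumController.type(getElement({type: "valueField"}), value);
}

// The commands themselves are illustrative; the middle one exercises
// assertText, which is what this test verifies.
addCommand("open", "/test.html", "");
addCommand("assertText", "id=heading", "Expected heading text");
addCommand("echo", "finished", "");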

With our commands in place, we click the toolbar button to execute the test and wait for the test to complete:
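// Click the 'play test case' toolbar button, then wait for the run to
// finish; detecting completion via the run count is an assumption.
seleniumController.click(getElement({type: "playButton"}));
seleniumController.waitFor(function () {
  return getElement({type: "suiteProgressRuns"}).getNode().value !== "0";
}, "Test case has finished running");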

Now that the test has run, we check that the suite progress indicator has the ‘success’ class:
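// The suite progress indicator should now have the 'success' class.
var indicator = getElement({type: "suiteProgressIndicator"});
seleniumController.assert(function () {
  return indicator.getNode().className === "success";
}, "Suite progress indicator shows success");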

We also check that the run counts are correct. The total number of tests run should be 1, and the number of failed tests should be 0:
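// One test should have been run, and none should have failed.
seleniumController.assert(function () {
  return getElement({type: "suiteProgressRuns"}).getNode().value === "1";
}, "One test has been run");
seleniumController.assert(function () {
  return getElement({type: "suiteProgressFailures"}).getNode().value === "0";
}, "No tests have failed");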

Now we check that there are no errors in the log console:
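// There should be no error entries in the log console (the lookup for
// log entries is an assumption).
seleniumController.assert(function () {
  return seleniumController.window.document.querySelectorAll("#logView .error").length === 0;
}, "No errors appear in the log console");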

Because the command we are testing should pass, we also check that the final command was executed:
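// The final command should have been executed; here we assume executed
// rows are styled with a 'done' class.
var finalCommand = getElement({type: "commandRow", index: "last"});
seleniumController.assert(function () {
  return finalCommand.getNode().className.indexOf("done") !== -1;
}, "The final command has been executed");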

Finally, we close Selenium IDE:
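// Close the Selenium IDE window.
seleniumController.window.close();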

As many of these steps will be shared across several tests, you can see that there would be a lot of duplicated code. This is the reason we abstract the useful user interface interactions into the shared module, and the useful checks into a helper module. A nice side-effect of this is that the test becomes much more readable. Below is the same test as above, but calling out to the additional modules:
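(Once again this is only a sketch: the module loading shown is the Mozmill 1.5 style, and the selenium and checks helpers are illustrative stand-ins for the real selenium.js and checks.js APIs.)

var RELATIVE_ROOT = '../../shared-modules';
var MODULE_REQUIRES = ['SeleniumAPI', 'ChecksAPI']; // module names assumed

var setupModule = function (module) {
  controller = mozmill.getBrowserController();
}

var testAssertTextPasses = function () {
  // The shared module handles opening the IDE and locating its elements.
  selenium.open(controller);
  selenium.baseURL = "http://example.com/";
  selenium.addCommand("open", "/test.html");
  selenium.addCommand("assertText", "id=heading", "Expected heading text");
  selenium.addCommand("echo", "finished");

  // Run the test case, then hand the common assertions off to checks.js.
  selenium.runTestCase();
  checks.testCasePassed(selenium);

  selenium.close();
}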

As mentioned, there are currently only 20 automated tests for Selenium IDE, and we need more! If you’re interested in helping out and you have any questions, you can either get in touch with me directly, ask in the #selenium IRC channel on Freenode, or post a message to the selenium-developers Google group.

Automated Firefox tests with add-ons installed

Mozmill has a feature that allows the tester to install an add-on during the test run. Until recently this was only used by one of our automated testruns, which specifically tested the installed add-on rather than Firefox itself.

With the recent development on the endurance tests project, it has been necessary to take more advantage of this feature. A bug was reported where, if Adblock Plus – our most popular add-on – is installed, memory usage increases rapidly when navigating web pages, and none of the memory is released. To start investigating this I created a very basic (and specific) test for the site mentioned in the bug report, and simply hacked something together based on the existing add-ons testrun. A short time later the need to run the endurance tests with multiple add-ons installed came up, so I hacked some more to get that in place too. Rather than keep these hacks around, it made sense to allow testers to specify add-ons to be installed during any of our testruns, so I started work on the necessary patches.

As a result, testers can now run any of our automation scripts with one or more add-ons installed, simply by specifying the addons command line parameter. To install multiple add-ons, repeat the parameter. The argument can be either a path on your machine or a web/FTP location; in the latter case the add-on will be downloaded to a temporary location before the testrun and removed at the end. The latest version of Mozmill (1.5.2) also now disables the compatibility check, meaning that we can run tests with add-ons that are not marked as compatible with the version of Firefox in use.

An example of running the endurance tests with two add-ons installed:

./testrun_endurance.py --addons=https://addons.mozilla.org/firefox/downloads/latest/748/addon-748-latest.xpi --addons=/Users/dave/Downloads/noscript.xpi --delay=1000 --iterations=10 /Applications/Firefox.app/