Test day + Meetup = Success!

Yesterday WebQA held a combined test day and meetup event in San Francisco. The idea was to bring everyone together to hack on our QA projects alongside us, and in return we would be able to share our own knowledge and experiences, and demonstrate how we test.

The team had managed to pull together over 50 tasks for the evening, ranging from simple beginner-level tasks to a few more advanced ones, using a variety of technologies and projects. As people arrived we encouraged them to slap on a name tag and grab a task.

Once we had everyone in the room (I think there were 18 of us in total) we gave a brief introduction from the team over some beer and pizza, and then the fun really started! Of course it didn’t all go smoothly, and we quickly realised that the entire python.org domain was unavailable, which meant that nobody could download the required Python packages, such as Selenium and pytest! Fortunately, after a little scrambling, we found that we could use pip’s --use-mirrors command line argument to get unstuck. In fact, we’ve now added this to our test jobs running on Jenkins to hopefully make them a little more resilient to PyPI downtime, so in a way I’m glad we had the issues.
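
For anyone who hits the same problem, here’s a rough sketch of the workaround (the package names are just examples):

pip install --use-mirrors selenium pytest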

Not everyone was working on new tests; Zac, for instance, set up a mini grid of Android devices (tablets and phones) and gave demonstrations of running web tests on mobile. If you’ve not tried this yet then you should, as it’s great fun – and the future is mobile, right?

We also had some great feedback on our documentation, which will soon lead to some further improvements and simplifications. I was really impressed when one of the attendees told me there were some issues with the documentation on our wiki, and that they had simply logged in and fixed them themselves. That’s exactly the kind of involvement we’re encouraging, and it’s so great to see it happening.

By the end of the evening we had three pull requests submitted (all for different projects), and I’m almost certain that number will increase as some of the other tasks were near completion. We’ll now work on reviewing those pulls and merging them in, and look forward to any more that come our way!

Finally, I want to say a huge thanks to everyone who was able to attend for making it such a great event. We all hope you enjoyed it as much as we did, and look forward to seeing you at future events (and on #mozwebqa on irc.mozilla.org). Oh, and thanks to Kedo for the Ctrl+W keyboard shortcut for Terminal!

Case Conductor pytest plugin proposal

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and I want to share my thoughts on how a plugin could help our WebQA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with. It’s important to reference the test case, but I see this as the bare minimum.

Current method
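
As a rough illustration of the current approach (the test name and ID here are invented, not copied from a real test):

def test_search_returns_results():
    """Litmus 13601"""
    # some tests embed the full Litmus URL in the docstring instead
    pass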

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method
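
A minimal sketch of the proposed annotation, assuming the mark ends up being called caseconductor (the mark name and the IDs shown are purely illustrative):

import pytest

@pytest.mark.caseconductor(2743)
def test_search_returns_results():
    pass

@pytest.mark.caseconductor([2744, 2745])
def test_search_pagination():
    pass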

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed/failed depending on the result. In addition to the existing reports, we could then also offer a link to the Case Conductor report for the relevant test run.
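
To make the idea a little more concrete, here’s a minimal sketch of how such a plugin might hang together. Everything in it is hypothetical: the caseconductor mark, the --cc-* options, and the /api/results endpoint are placeholders rather than the real Case Conductor API, and it’s written against a current pytest API purely for illustration.

import pytest
import requests  # assumption: some HTTP client would be needed to talk to the API

_case_ids = {}  # test nodeid -> linked Case Conductor case IDs
_results = {}   # test nodeid -> 'passed' / 'failed'

def pytest_addoption(parser):
    group = parser.getgroup('caseconductor')
    group.addoption('--cc-url', help='base URL of the Case Conductor instance')
    group.addoption('--cc-product', help='product to report results against')

def pytest_collection_modifyitems(config, items):
    # remember which Case Conductor test cases each test is linked to
    for item in items:
        marker = item.get_closest_marker('caseconductor')
        if marker:
            ids = marker.args[0] if len(marker.args) == 1 else list(marker.args)
            _case_ids[item.nodeid] = ids if isinstance(ids, list) else [ids]

def pytest_runtest_logreport(report):
    # record the outcome of the test call phase only
    if report.when == 'call' and report.nodeid in _case_ids:
        _results[report.nodeid] = 'passed' if report.passed else 'failed'

def pytest_sessionfinish(session):
    url = session.config.getoption('--cc-url')
    if not url:
        return
    payload = {'product': session.config.getoption('--cc-product'),
               'results': [{'case_ids': _case_ids[nodeid], 'status': status}
                           for nodeid, status in _results.items()]}
    # hypothetical endpoint -- the real API is still in development
    requests.post(url + '/api/results', json=payload)

Collecting the linked case IDs at collection time keeps the reporting hooks small, and deferring the submission until the end of the session means a single request per test run.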

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by WebQA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.
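
As a sketch of the shape such a report might take (all numbers here are invented):

Case Conductor coverage – Product A, cycle 1.0
  total test cases:     150
  linked to automation:  90
  coverage:              60%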

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.

Command options

I would propose several command line options to cover the functionality mentioned above. In the form of output from --help, here are my suggestions:
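
Something along the following lines, for example – the option names here are only indicative, and the final names would follow the plugin’s conventions:

  --cc-url=URL          base URL of the Case Conductor instance
  --cc-product=PRODUCT  name of the product under test
  --cc-cycle=CYCLE      name of the test cycle
  --cc-run=RUN          name of the test run to create or update
  --cc-suites=SUITES    comma separated list of test suites
  --cc-coverage         report automated coverage instead of submitting results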

Some of these would be mandatory but could fail with useful messages if omitted. For example, if the product was not provided then a list of available products could be returned. The same could be done for test cycles and test runs.

Hooking Android up to the (Selenium) Grid

When I set myself a Q4 goal of getting a small suite of Mozilla’s WebQA tests running on Android I didn’t think it would be much work. The AndroidDriver has been around for some time, and from what I understood it was pretty mature – I had even run a couple of tests locally against it and they worked well. What I found, though, was a couple of important gotchas…

The first was that the port forwarding that’s necessary to run tests on an Android emulator or device binds to localhost. This isn’t a problem so long as the test commands originate from the same machine the emulator/device is attached to; however, it’s not very useful for Selenium Grid, where we want to run several emulators/devices remotely.

I was surprised that this issue had not come up before… I didn’t think I could possibly be the first person to want to hook up an AndroidDriver to Selenium Grid! After searching around for a solution I decided to leave things for a while to take care of other priorities. I was then pulled back into the problem when my fellow Mozillian Raymond Etornam came across the same issue and was looking for a solution.

Raymond and I approached Dounia Berrada, who heads up the AndroidDriver efforts at Google, and as a result of this discussion, Raymond raised an issue in the Selenium project. Dounia got to the bottom of the issue and found a very simple solution, using socat to listen on another port on all interfaces and forward traffic to the localhost bound port forward configured by ADB. The Selenium project’s wiki has been updated with the solution.
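
For reference, the fix boils down to something like the following (the port numbers are just an example): ADB forwards the device port to localhost only, and socat then exposes that forward on all interfaces via another port.

adb forward tcp:8080 tcp:8080
socat TCP-LISTEN:8081,fork,reuseaddr TCP:127.0.0.1:8080

The Grid node can then be registered with the machine’s external address and port 8081 rather than the localhost-only forward.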

The second issue I had was registering the AndroidDriver with Selenium Grid. This is simply a case of posting the correct JSON to the server, and is something that’s done for you when launching a Selenium server. The quick solution would have been to just start a Selenium server with the details for the Android node, but that felt like too much of a hack, and I didn’t really want a server running constantly when it would be doing nothing other than consuming resources.

I decided to write a very simple Python package that takes a few arguments, constructs the necessary JSON and posts it to the Selenium Grid. I’ve released this as FlynnID (there’s a reference to ‘The Grid’ if you look hard enough) and it’s available on PyPI, with the source code on GitHub.
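
To give an idea of what that registration involves, it’s a POST to the hub’s /grid/register endpoint with a JSON body roughly like the one below. The hosts, ports and exact field names are only a sketch – check FlynnID’s source or the Selenium Grid documentation for the real payload:

curl -H "Content-Type: application/json" -d '{
  "capabilities": [{"browserName": "android", "platform": "ANDROID", "maxInstances": 1}],
  "configuration": {
    "url": "http://192.168.1.5:8081/wd/hub",
    "host": "192.168.1.5",
    "port": 8081,
    "hubHost": "hub.example.com",
    "hubPort": 4444,
    "role": "node"
  }
}' http://hub.example.com:4444/grid/register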

A longer term solution will be to have the AndroidDriver register itself with a Selenium Grid when it’s started. If anyone is keen to look into this please get in touch!

So with these issues resolved, I was finally able to put together the first mobile test suite. I migrated the Input tests to WebDriver, did a lot of cleaning up, and split them into desktop/mobile tests. There’s only one mobile test there at the moment, but now that we have everything in place we can go full speed into test development! If you’re interested in helping us write more tests or improve our frameworks/infrastructure then the WebQA page on QMO is a great place to start.

Oh, I’ve also added a cute Android icon to Selenium Grid, which will be available from version 2.16.0.

Q3/2011 in review

In the hope that I might inspire others to do the same, I’ve created a few screencasts showing some of the cool things I worked on in the last quarter. I’ve tried to keep them all short, and they’re all available in HD so no need to squint to see details.

pytest plugin for WebQA

Endurance tests daily results

System graphics details in endurance reports

Running the Mozmill tests in Jenkins

Running the Selenium IDE Mozmill tests in Bamboo

Adding Mozmill tests to the Selenium IDE build system

Back in April I blogged about the Mozmill tests I’d written to test Selenium IDE. I followed up in June with a blog post covering how to run these tests. The natural progression is to add these tests into the existing Selenium IDE build environment, which is run using Atlassian’s continuous integration server, Bamboo.

At Mozilla, we want to run the Mozmill tests for the released versions of add-ons against the latest builds of Firefox. This is to determine any regressions in Firefox that will potentially cause issues for the add-on authors. It also gives the add-on authors an early warning if there is a potential compatibility issue with an upcoming release of Firefox. We currently run these tests on a schedule, but will soon be looking to move to a continuous integration solution ourselves. You can see the results of these daily tests on our dashboard.

The add-on authors (Selenium in this case) are typically more interested in knowing whether the latest build of their add-on works with the current version of Firefox, giving them confidence to release bug fixes and new features without regressions. As the Selenium project already uses continuous integration, adding a Mozmill test stage to it immediately benefits the add-on author and is a good step towards achieving the same within Mozilla.

The Selenium IDE project plan in Bamboo has three stages: Build, Test, and Package. I’m going to focus on the Test stage in this blog post, as the other stages are very specific to Selenium. If you’re reading this and you develop a Firefox add-on then you should be able to apply the following to your project without too much tweaking.


There are just two prerequisites for running these tests:

  • Python: The automation scripts are written in Python, so this is required.
  • Mercurial: You must have the Python modules for Mercurial installed, as this is the source code management tool that the automation scripts and tests are using.


There are several tasks to complete for the Test stage. Below I list these tasks, with an explanation and configuration steps for each.

Clone mozmill-tests

Because we want to run the tests against a specific build of the add-on rather than the latest release, we need to clone the mozmill-tests repository and override the addon.ini file with the location of our target add-on. This is simply a Mercurial task, as follows:

hg clone http://hg.mozilla.org/qa/mozmill-tests

Note that if you have Mercurial set up as an executable in your continuous integration server then you may not need the ‘hg’ part of this command, as it will be substituted based on the agent running the command.

Switch branch

By default the mozmill-tests repository will be set to the default branch. This is paired with the mozilla-central branch of Firefox, and is therefore relevant only when running tests against the very latest nightly builds of Firefox. As we’re interested in testing against the latest release, we switch to the mozilla-release branch.

hg checkout mozilla-release

You will need to make sure this command is run within the directory of the cloned mozmill-tests repository, which by default will be simply mozmill-tests.

Create addon.ini

The addon.ini file tells the script where to download/install the add-on from. If you don’t create your own, the add-on will likely be downloaded from addons.mozilla.org or the add-on author’s preferred download location for the latest release. The file is essentially an ini file, with download locations for linux, mac, and windows. If your continuous integration server provides a link to the latest artifacts then you can add this, or you can use some form of artifact sharing such as the one built into Bamboo. The following commands create a suitable addon.ini for Selenium IDE.

echo "[download]" > tests/addons/ide@seleniumhq.org/addon.ini
echo "linux=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini
echo "mac=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini
echo "win=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini

If you’re using Jenkins, then the Copy Artifact Plugin could be useful for sharing artifacts between builds.

Commit addon.ini

It’s necessary to commit the replacement addon.ini file so that it is included when the local repository is cloned during the test run.

hg commit -m 'Specify latest build of addon.'

Note: It really doesn’t matter what you put for the commit message here as this commit is not preserved between builds.

Download latest Firefox release

Rather than having to make sure your build agent always has the latest version of Firefox installed, there’s a handy script that can download this for you. This is a Python script, and therefore needs to be set up to use the Python executable.

./download.py --directory=latest-release --platform=mac --type=release --version=latest

Substitute the platform value with whatever platform your build agent is running on. The latest release of Firefox will be downloaded to the latest-release directory. Note that the version value of ‘latest’ relies on a symlink on Mozilla’s FTP server that points to the directory of the latest released version number.

Run Mozmill tests

You can now run the script that executes the Mozmill tests.

./testrun_addons.py --junit=results.xml --logfile=results.log --repository=mozmill-tests --target-addons=ide@seleniumhq.org --with-untrusted latest-release

Here’s an explanation of the command line options:

  • The junit command line option determines where the results will be stored. The JUnit report format is supported by many continuous integration servers, and often provides some nice reporting and visualisation of the results. The destination filename has a counter appended for each file created; for example, results.xml becomes results_0.xml.
  • Adding the logfile is optional, however this can be a useful build artifact if you have failures.
  • As we’re using a locally modified repository, we need to specify the location of this using the repository command line option. The default location will be mozmill-tests.
  • We need to set the target add-on using the target-addons option. This must match the directory beneath tests/addons, which in the case of Selenium IDE is ide@seleniumhq.org.
  • The with-untrusted flag is necessary if the add-on is not hosted at addons.mozilla.org, as any add-on hosted by Mozilla is implicitly trusted. Selenium IDE is not currently hosted by Mozilla, so the flag is needed here.
  • Finally, the path to the Firefox binary is needed. It’s possible to simply point to a directory that contains a downloaded copy of Firefox, so we just use latest-release as that’s where our download task was told to put Firefox.

Parse test results

As mentioned above, a lot of continuous integration servers support results in JUnit report format, so your final task may be to specify the location of these files. If you used the example given above, then you will specify these using results*.xml.

The Bamboo instance for Selenium is publicly viewable, so you can see the results of recent builds for Selenium IDE, and the report for the latest build. You can also see the results on the Mozmill archive dashboard. The Selenium IDE project is built whenever a change is committed to the core or dependent code sections of the repository. It can also be triggered manually.

Unfortunately the Selenium build hardware is experiencing stability issues at the time of writing this, meaning that there is not always a suitable build agent for the Mozmill tests.

Known issues

Currently there is an issue with the Mozmill automation script: it exits without an error code even when tests have failed. Fortunately, the continuous integration servers that I’ve been working with determine build success based on the JUnit reports. If this wasn’t the case then builds with failing tests would be incorrectly reported as successful. We have bug 626712 on file for this issue, and it will hopefully be resolved soon.

Running the ‘Mem Buster’ endurance test

I blogged a few weeks ago about how I was able to demonstrate improvements to the memory usage of Firefox using endurance tests. The test I was using was inspired by Stuart Parmenter’s Mem Buster test, and it has now been checked into the repository and is available for anyone to run.

At the moment it’s only possible to run the Mem Buster test from the command line (hopefully Mozmill Crowd support won’t be too far off). The first thing you’ll need is the mozmill-automation repository checked out. If you already have this then you’ll need to do a pull and update to make sure you have the latest changes.

hg clone http://hg.mozilla.org/qa/mozmill-automation

Then, from the mozmill-automation directory, run the following command:

./testrun_endurance.py --reserved=membuster --delay=3 --iterations=2 --entities=100 --report=http://mozmill-crowd.brasstacks.mozilla.com/db/ /Applications/Firefox.app

The reserved argument tells the script to run only the Mem Buster test and not the general endurance tests. The test opens a site for each entity, so by specifying 100 entities and 2 iterations it will open a total of 200 sites. The delay of 3 seconds is taken from the original Mem Buster test. I would recommend including the report argument as this shares your results and allows you to see a visualisation of memory usage during the test. The final argument is the location of the version of Firefox you want to run the tests against.

Below is a screencast demonstrating the Mem Buster endurance test on Windows 7:

If you’re interested in following the progress of the endurance tests project, check out the project page. For further help you can find the documentation here, post a comment to this entry, or ask a question in the QMO forums.

QA Automation Services Work Week 2011 – Day 1

QAASWW11 kicked off yesterday with a day of planning at IdeaSpace in Cambridge, UK. We had a meeting room for the day – kindly offered up by our new friends at Springboard – and plenty of instant whiteboard! As with the last work week I attended, it was organised UnCon style, which worked really well before. I will say that the first day is usually the most painful: the entire team wrote out their thoughts and needs for the week on post-it notes, which were gradually organised into groups and ultimately into sessions with agendas. Once this was finally done, the schedule for the week was set out. Although the schedule is incredibly flexible, it really helps to have this set of intentions outlined on the first day.

In the evening, the Springboard teams were kind enough to practice their investor pitches on us, and we saw 10 very promising ideas presented by an incredibly smart and enthusiastic bunch of people. I noticed a very strong lean towards mobile devices in the pitches, which really reflects the current state of the market and the direction things are heading.

After the pitches, everybody relaxed with well-deserved beer and pizza, and we had an opportunity to talk with the Springboard guys, who are in their last week.

So ends the first day. Tomorrow we will be working from a cottage, which unfortunately we have already discovered has a slow connection to the Internet. This won’t affect our work week sessions, but it will be an obstacle during the time we have scheduled to get on with our day-to-day work, as well as when communicating with our colleagues and community around the world! If you want to follow our activities for the week, we have created a Twitter hashtag of #mozautoqa.

Good luck to everyone at Springboard for the investor pitches on Friday, and thank you so much for inviting us to spend the day with you at IdeaSpace!

Endurance tests demonstrate Firefox’s memory usage improvements

Thanks to the amazing efforts of the MemShrink project, Firefox’s memory usage is seeing some great improvements. In particular, Firefox 7 will be much more efficient with memory than the current version. As endurance tests monitor resources such as memory, it makes sense for us to work together to ensure that we’re moving in the right direction, and that we don’t regress in any of these areas.

At this point there are only five endurance tests, and although these can be run with many hundreds of iterations in order to seek out memory leaks in the tested areas, they do nothing to simulate a user. It was suggested that we have a special endurance test similar to Stuart Parmenter’s Mem Buster test.

Creating an initial version of this new test did not take long. Instead of opening sites in new windows I open them in tabs, and the number of sites opened is controlled by iterations and micro-iterations. I also increased the number of sites so we’d be hitting the same ones less often, and based this new list on Alexa’s top sites. Once I had added handling for the modal dialogs that some sites caused to be displayed, I was able to get consistent results.

This test would appear to be similar to Talos tp5 in that it loads sites from Alexa’s index; however, we’re not measuring how long each site takes to load. Instead, we move on to the next site after a delay specified on the command line. I have kept the same delay as the original Mem Buster test, which is 3 seconds.

After running the Mem Buster Endurance Test five times across five versions of Firefox, I found the results to clearly reflect the MemShrink efforts. Although the memory consumption varies somewhat for each run, the general downward trend is unmistakable.

In the following charts you can see the improvement in memory usage between Firefox 4 & 5. These can be directly compared as the endurance tests were measuring the same metrics (allocated memory & mapped memory).

Charts showing allocated and mapped memory usage in Firefox 4 & 5

In Firefox 6 there were several improvements to memory reporting, and the endurance tests were updated to record new metrics (explicit memory & resident memory). You can see in the following charts that explicit memory usage in Firefox 7 is roughly half that of Firefox 6! It appears that this has increased in Firefox 8, which will require some further investigation. The resident memory has continued to decrease in each version.

Charts showing explicit and resident memory usage in Firefox 6, 7, & 8

You can follow the progress of the Mem Buster Endurance Test in Bugzilla. Full reports from the test runs used in this blog post can be found here.

Update: It appears that the explicit memory calculated for Firefox 7 on Mac was artificially low. This explains the slight increase in Firefox 8. If you’re interested you can read further details on Bugzilla.

Micro-iterations in Endurance Tests

Last week micro-iterations landed in Mozmill Endurance Tests. These allow tests to accumulate resources during an iteration. This was previously achieved by leaving the test snippet in a different state from how it started, allowing the iterations themselves to accumulate. The problem with this approach is that such accumulating tests follow a very different pattern from the other tests, which clean up before ending the iteration.

To solve this we decided to add a micro-iteration parameter and to use it to loop within an iteration. An example use for this is the new tab tests. Now, if you specify 5 iterations and 10 micro-iterations then these tests will open 10 new tabs, close them, and repeat that 5 times.
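
On the command line the micro-iteration count is supplied via the entities option of the endurance test run script (assuming the same script and options as the Mem Buster example elsewhere on this blog; the delay and Firefox path below are illustrative), so a run matching the example above would look something like:

./testrun_endurance.py --iterations=5 --entities=10 --delay=3 /Applications/Firefox.app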

The endurance tests documentation has been updated with details on writing and running tests with micro-iterations.

Endurance Tests in Firefox 6

One of the features of the upcoming Firefox 6 is an improvement to the handling and reporting of memory resources. As you can probably imagine, this is very applicable to the endurance tests project. As a result of the changes, running the endurance tests with the previews of Firefox 6 was failing to gather any metrics at all.

I’m pleased to announce that as of yesterday, the endurance tests now support Firefox 6! One of the main differences you will see is that we’re no longer gathering mapped/allocated memory, and are instead gathering explicit/resident, which we are expecting to provide much more useful results. You don’t need to do anything to get the latest changes, just run the tests as described here (using the command line) or here (using Mozmill Crowd).

If you’re interested, here are the relevant bugs:

  • Bug 633653 – Revamp about:memory
  • Bug 657327 – Merge the “mapped” and “heap used” trees, and make the tree flatter
  • Bug 656869 – No memory results on endurance testrun with Nightly 6.0a1
  • Bug 657508 – Update dashboard to display endurance tests results from Firefox 6.0