pytest-mozwebqa 1.1 released

It’s been a long time coming, but pytest-mozwebqa 1.1 has finally been released! The main feature of this new version is the ability to specify a proxy server for the launched browsers. This capability will also be used in conjunction with the upcoming plugins pytest-browsermob-proxy (to record and report network traffic) and pytest-zap (to spider and scan for known security vulnerabilities). Check out the complete changelog for 1.1.
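For those curious, a proxied test run from the command line might look something like the sketch below. Note that the --proxyhost and --proxyport option names are an assumption on my part, so check the plugin documentation or py.test --help for the exact flags:

    # option names --proxyhost/--proxyport are assumed; --baseurl is the usual pytest-mozwebqa option
    py.test --baseurl=http://example.com --proxyhost=localhost --proxyport=8080 tests/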

More realistic endurance test results

If you’re not already familiar with the Firefox endurance tests, these are Mozmill tests that repeat a small snippet of user interaction over and over again while gathering metrics. This allows us to detect if there’s a memory leak in a very localised area, or a memory regression within the areas tested. I’ve blogged about them a few times.

We’ve known for a while that the results we’ve been getting aren’t entirely realistic, because we only wait 0.1 seconds between each iteration. This doesn’t give Firefox any time to perform tasks such as garbage collection. Unfortunately we couldn’t just increase this delay, as that would cause other Mozmill tests to be queued behind the much longer running endurance tests.

So now that we have our new VMware ESX cluster in place (which has given us an awesome three VMs per platform), we’ve configured Jenkins to run the endurance tests on just one node per platform, allowing other Mozmill tests to continue on the remaining available nodes. We were then finally able to increase the delay to 5 seconds.

The results are as we had hoped: memory usage has dropped, the duration has increased, and the individual test run results have become a lot less erratic.

It should now be much easier for us to spot regressions, and hopefully we’ll have fewer false positives! If you’re interested in the latest endurance results, you can find them in our Mozmill Dashboard, along with the endurance charts.

Related bugs/issues:

  1. Bug 788531 – Revise default delay for endurance test to make scenarios more realistic
  2. Issue 173 – Have dedicated nodes for endurance tests
  3. Issue 201 – Revise default delay for all endurance jobs
  4. Issue 203 – Increase build timeout for endurance tests

Running Firefox OS UI Tests Without a Device

Note: This post has been revised.

It’s a little difficult to get your hands on a device that can run Firefox OS right now, but if you’re interested in running the UI tests, a device is not essential. This guide will show you how to run the tests against the nightly desktop client builds we provide.

Step 1: Download the latest desktop client

The Firefox OS desktop client lets you run Gaia (the UI for Firefox OS) and web apps in a Gecko-based environment somewhat similar to an actual device. The desktop client has certain limitations: it doesn’t emulate device hardware (camera, battery, etc.), it doesn’t support carrier-based operations such as sending/receiving messages or calls, and it relies on the network connection of the machine it’s running on.

You can download the latest build of the desktop client from this location, but make sure you download the appropriate file for your operating system. Unfortunately, due to bug 832469 the nightly desktop builds do not currently work on Windows, so you will need either Mac or Linux (a virtual machine is fine) to continue:

  • Mac: b2g-[VERSION].multi.mac64.dmg
  • Linux (32bit): b2g-[VERSION].multi.linux-i686.tar.bz2
  • Linux (64bit): b2g-[VERSION].multi.linux-x86_64.tar.bz2

Once downloaded, you will need to extract the contents to a local folder. For the purposes of the rest of this guide, I’ll refer to this location as $B2G_HOME.
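For example, on Linux (64-bit) the extraction might look something like the sketch below; the filename is just a placeholder for whatever you downloaded. On Mac you would instead open the .dmg and copy B2G.app into $B2G_HOME.

    # filename is a placeholder; substitute the actual version you downloaded
    mkdir -p $B2G_HOME
    tar -xjf b2g-[VERSION].multi.linux-x86_64.tar.bz2 -C $B2G_HOME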

Step 2: Enable Marionette

Marionette is a test framework built into Gecko that allows remote control of the application. The Gaia UI tests use Marionette to launch applications and simulate a user interacting with them. Marionette is enabled by default in the desktop client, but we still need to set a preference in the default profile before we can run the tests.

Add the following line to your gaia/profile/user.js file, which on Mac is located in $B2G_HOME/B2G.app/Contents/MacOS and on Linux in $B2G_HOME/b2g.
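If I remember the preference correctly, it forces Marionette to accept connections on a local port, and the line to add looks like this (double-check the current preference name in the Marionette documentation if the tests fail to connect):

    // assumed preference name; forces Marionette to listen for local connections
    user_pref('marionette.force-local', true);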

Step 3: Start Firefox OS

You can start Firefox OS by double clicking $B2G_HOME/B2G.app (Mac) or running $B2G_HOME/b2g/b2g (Linux). If everything went well, you should see the ‘powered by’ screen, shortly followed by the first launch app. Complete the configuration steps and optionally follow the tour, and you will be presented with the lock screen. Unlock it by dragging the bar up and clicking the padlock, and you will arrive at the home screen.
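If you prefer the terminal, the equivalent launch commands are along these lines:

    # Mac
    open $B2G_HOME/B2G.app

    # Linux
    $B2G_HOME/b2g/b2g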

Take a moment to familiarise yourself with Firefox OS. Launch a couple of applications, change some settings. You’ll soon discover the limitations of the simulator. Probably the most noticeable difference is that there are no home/power/volume buttons as there would be on a device. The most useful of these is the home button, which allows you to return to the home screen or switch between open apps. You should be able to use the home key on your keyboard as a substitute. Here are some more usage tips.

Step 4: Run the tests!

Now that you’ve got the simulator running, you can clone and run the automated UI tests against it. You will need git and Python installed (I recommend version 2.7), and I highly recommend using virtual environments.

First, clone the gaia-ui-tests repository using the following command line, where $WORKSPACE is your local workspace folder:
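Assuming the repository is still hosted under the mozilla organisation on GitHub, the clone should look something like this:

    # assumed repository location; adjust if the project has moved
    cd $WORKSPACE
    git clone https://github.com/mozilla/gaia-ui-tests.git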

If you’re using virtual environments, create a new environment and activate it. You will only need to create it once, but will need to activate it whenever you wish to run the tests:
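For example, with virtualenv (the path is just a suggestion):

    # create the environment (only needed once)
    virtualenv $WORKSPACE/gaia-ui-tests-env

    # activate it (needed in every new shell before running the tests)
    source $WORKSPACE/gaia-ui-tests-env/bin/activate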

Now you need to install the test harness (gaiatest) and all of its dependencies:
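From the root of your clone, something like the following should do it (I’m assuming the project ships a standard setup.py):

    # assumes a standard setup.py in the repository root
    cd $WORKSPACE/gaia-ui-tests
    python setup.py develop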

Once this is done, you will have everything you need to run the tests. Because we’re running against the desktop client, we must filter out all tests that are not appropriate. This list may grow, but it currently includes tests that use: antenna, bluetooth, carrier, camera, sdcard, and wifi. You will probably also want to exclude any tests that are expected to fail (xfail). To run the tests, use the following command:
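As a rough sketch, the command should look something like the one below. The --address value is the default Marionette host and port, but the exact --type filter syntax is an assumption on my part, so check gaiatest --help and the project README for the current options:

    # --type filter syntax is assumed; verify against gaiatest --help
    cd $WORKSPACE/gaia-ui-tests
    gaiatest --address=localhost:2828 \
             --type=b2g-antenna-bluetooth-carrier-camera-sdcard-wifi-xfail \
             gaiatest/tests/manifest.ini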

You should then start to see the tests running, with output similar to the following:

The first tests that run are unit tests for the gaiatest harness, so you won’t immediately see much happening in the simulator. You may encounter test failures, which we’re currently focusing on resolving. You may also encounter bug 844498, which has the nasty side effect of causing all remaining tests to fail. If this happens, just try running the suite again for now.

The video shows a full suite run against the simulator. Note that where tests time out I have either cropped the video or increased the speed. This is just to keep the video shorter.

Step 5: Contribute?

Now that you can run the tests, you’re in a great position to help us out! Our first focus is to get all the tests passing against the desktop build, but then we need to identify missing areas of coverage that are relevant to the simulator.

To contribute, you will need to set up a github account and then fork the main gaia-ui-tests repository. You will then need to update your local clone so it’s associated with your fork rather than the main one. You can do this with the following commands, replacing $USERNAME with your github username:
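One way to do this is to replace the origin remote with your fork (a sketch; you may prefer to keep the mozilla repository around as an upstream remote instead):

    # one possible remote layout; adjust to taste
    cd $WORKSPACE/gaia-ui-tests
    git remote rm origin
    git remote add origin git@github.com:$USERNAME/gaia-ui-tests.git
    git fetch origin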

You can now create a branch and make your changes. Once done, commit your changes and push them to your fork before submitting a pull request. I’m not going to cover these steps in detail here, as they’re fairly standard git practices and are covered in far better detail elsewhere. In fact, GitHub’s own help documentation is fantastic.

If you’re looking for a task, you should first check the desktop issues list on github. If there’s nothing available there, see if you can find an area that needs more coverage. Feel free to add an issue and a comment to say you’ll work on it.

You can also ask us for tasks! There are several mailing lists that you can sign up to: Automation Development, Web QA, and B2G QA. We’re also on IRC, and you can find us in #automation, #mozwebqa, and #appsqa all on irc.mozilla.org.

Further reading