Running Firefox OS UI Tests Without a Device

Note: This post has been revised.

It’s a little difficult to get your hands on a device that can run Firefox OS right now, but if you’re interested in running the UI tests a device is not essential. This guide will show you how to run the tests on the nightly desktop client builds we provide.

Step 1: Download the latest desktop client

The Firefox OS desktop client lets you run Gaia (the UI for Firefox OS) and web apps in a Gecko-based environment somewhat similar to an actual device. The desktop client has certain limitations: it doesn’t emulate device hardware (camera, battery, etc.), it doesn’t support carrier-based operations such as sending/receiving messages or calls, and it relies on the network connection of the machine it’s running on.

You can download the latest build of the desktop client from this location, but make sure you download the appropriate file for your operating system. Unfortunately, due to bug 832469 the nightly desktop builds do not currently work on Windows, so you will need either Mac or Linux (a virtual machine is fine) to continue:

  • Mac: b2g-[VERSION].multi.mac64.dmg
  • Linux (32bit): b2g-[VERSION].multi.linux-i686.tar.bz2
  • Linux (64bit): b2g-[VERSION].multi.linux-x86_64.tar.bz2

Once downloaded, you will need to extract the contents to a local folder. For the purposes of the rest of this guide, I’ll refer to this location as $B2G_HOME.

Step 2: Enable Marionette

Marionette is a test framework built into Gecko that allows remote control of the application. The Gaia UI tests use Marionette to launch applications and simulate a user interacting with them. Marionette is enabled in the desktop client by default, but we still need to set a preference in the default profile before we can run the tests.

Add the following line to your gaia/profile/user.js file, which on Mac is located in $B2G_HOME/ and on Linux in $B2G_HOME/b2g.
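To my recollection, the preference the gaia-ui-tests documentation asked for at the time was the one below; double-check against the current docs before relying on it:

```js
// Allow Marionette connections from the local machine
user_pref('marionette.force-local', true);
```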

Step 3: Start Firefox OS

You can start Firefox OS by double clicking $B2G_HOME/ (Mac) or running $B2G_HOME/b2g/b2g (Linux). If everything went well, you should see the ‘powered by’ screen shortly followed by the first launch app. Complete the configuration steps and optionally follow the tour, and you will be presented with the lock screen. Unlock by dragging the bar up and clicking the padlock. You should then be presented with the home screen (shown here).

Take a moment to familiarise yourself with Firefox OS. Launch a couple of applications, change some settings. You’ll soon discover the limitations of the simulator. Probably the most noticeable difference is that there are no home/power/volume buttons as there would be on a device. The most useful of these is the home button, which allows you to return to the home screen or to switch between open apps. You should be able to use the home key on your keyboard as a substitute. Here are some more usage tips.

Step 4: Run the tests!

Now you’ve got the simulator running, you can clone and run the automated UI tests against it. You will need git and Python installed (I recommend version 2.7), and I highly recommend using virtual environments.

First, clone the gaia-ui-tests repository using the following command line, where $WORKSPACE is your local workspace folder:
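The clone command would look something like this (repository URL as the project is hosted today):

```shell
cd $WORKSPACE
git clone https://github.com/mozilla/gaia-ui-tests.git
```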

If you’re using virtual environments, create a new environment and activate it. You will only need to create it once, but will need to activate it whenever you wish to run the tests:
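A typical sequence, assuming the classic virtualenv tool of the time:

```shell
cd $WORKSPACE/gaia-ui-tests
virtualenv env            # create the environment (only needed once)
source env/bin/activate   # activate it (needed each time you run the tests)
```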

Now you need to install the test harness (gaiatest) and all of its dependencies:
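With the virtual environment active and from the gaia-ui-tests directory, something along these lines installs gaiatest in development mode (`pip install -e .` is the modern equivalent):

```shell
python setup.py develop
```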

Once this is done, you will have everything you need to run the tests. Because we’re running against the desktop client we must filter out all tests that are not appropriate. This list may grow, but it currently includes tests that use: antenna, bluetooth, carrier, camera, sdcard, and wifi. You will probably also want to exclude any tests that are expected to fail (xfail). To run the tests, use the following command:
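From memory, the invocation looked roughly like the following, where the --type value keeps b2g tests and excludes the listed capabilities plus expected failures; verify the exact syntax against gaiatest --help:

```shell
gaiatest --address=localhost:2828 \
  --type=b2g-antenna-bluetooth-carrier-camera-sdcard-wifi-xfail \
  gaiatest/tests/manifest.ini
```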

You should then start to see the tests running, with output similar to the following:

The first tests that run are unit tests for the gaiatest harness, so you won’t immediately see much happening in the simulator. You may encounter test failures, and we’re currently focusing on getting these resolved. You may also encounter bug 844498, which has the nasty side-effect of causing all remaining tests to fail. If this happens just try running the suite again for now.

The video shows a full suite run against the simulator. Note that where tests time out I have either cropped the video or increased the speed. This is just to keep the video shorter.

Step 5: Contribute?

Now that you can run the tests, you’re in a great position to help us out! Our first focus is to get all the tests passing against the desktop build, but then we need to identify missing areas of coverage that are relevant to the simulator.

To contribute, you will need to set up a github account and then fork the main gaia-ui-tests repository. You will then need to update your local clone so it’s associated with your fork rather than the main one. You can do this with the following commands, replacing $USERNAME with your github username:
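One common way to re-point your clone (commands assumed, not taken from the original post):

```shell
cd $WORKSPACE/gaia-ui-tests
git remote rename origin upstream
git remote add origin git@github.com:$USERNAME/gaia-ui-tests.git
git fetch origin
```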

You can now create a branch, and make your changes. Once done, you should commit your changes and push them to your fork before submitting a pull request. I’m not going to cover these steps in detail here, as they’re fairly standard git practices and will be covered in far better detail elsewhere. In fact, github:help has some fantastic documentation.

If you’re looking for a task, you should first check the desktop issues list on github. If there’s nothing available there, see if you can find an area that needs more coverage. Feel free to add an issue and a comment to say you’ll work on it.

You can also ask us for tasks! There are several mailing lists that you can sign up to: Automation Development, Web QA, and B2G QA. We’re also on IRC, where you can find us in #automation, #mozwebqa, and #appsqa.

FlynnID 0.3

I’ve just released an update to FlynnID. The primary change is that I’ve reintroduced the command line arguments, meaning a single node can be registered without the need for a configuration file. I’ve also hopefully learned my lesson, and in the words of a friend ‘Deprecate, not annihilate!’

Along with this change, 0.3 also introduces a handy feature if you’re running FlynnID on a schedule: if the node you’re registering is already registered then it won’t attempt to register it again. You can override this behaviour using the --force command line option.

Lastly, the output is now much more colourful…

You can install/upgrade using pip install -U flynnid.

Mozilla drops usage of Selenium RC

I thought this was important enough to share in a short blog post… Just 10 months ago, Mozilla started to migrate their Selenium projects from the Selenium RC API to the WebDriver API. I’m thrilled to say that this is now complete, and that no Selenium RC projects are actively being run or maintained!

BIDPOM drops support for Selenium RC

As all of the active Mozilla Web QA automation projects are now using WebDriver, there is no longer a need for BIDPOM (Browser ID Page Object Model) to support Selenium RC. I considered keeping this support purely for the community; however, I would rather encourage anyone still using Selenium RC to upgrade to WebDriver.

If you require Selenium RC support then I recommend you fork the repository and continue to develop the RC page objects separately. The only difference you will notice if you’re upgrading to the latest version of BIDPOM is that you may need to import from pages now, rather than pages.webdriver.

FlynnID 0.2

In Tron, Flynn’s identity disc is the master key to getting onto the Grid. In the far less exciting real world, FlynnID is the key to registering a Selenium node to Selenium Grid. Yesterday I released FlynnID 0.2, which changes the usage from a list of optional arguments to a single expected argument: a configuration file. This means you can now register several nodes in one go. Below is an example configuration file.
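A configuration file along these lines gives the flavour; the field names here are illustrative rather than FlynnID’s actual schema:

```json
{
  "selenium": {"host": "localhost", "port": 4444},
  "nodes": [
    {"host": "192.168.1.10", "port": 5555, "browser": "firefox", "platform": "MAC"},
    {"host": "192.168.1.11", "port": 5556, "browser": "chrome", "platform": "LINUX"}
  ]
}
```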

Of course this does unfortunately mean that anyone upgrading from 0.1 may be a little surprised that the command line options have gone, but I strongly feel this is a better approach. This way, your configuration file can be backed up (or added to version control), and it’s much quicker to run. You can install/upgrade FlynnID using pip: pip install -U flynnid.

Announcing pytest-mozwebqa 1.0

Finally I can announce that I have released version 1.0 of the pytest plugin used by Mozilla’s Web QA team! It’s been in use for several months now, but I’ve paid off some long standing technical debt, and now consider it stable!

I should say that although this plugin was primarily developed for Mozilla’s Web QA team, anyone that wants to write test automation for websites in Python can take advantage of it. There’s very little that is specific to Mozilla, and this can easily be overridden on the command line, or you could simply fork the project and create your own version of the plugin. Anyway, as I haven’t previously announced the plugin, it’s probably a good idea for me to explain what it actually does…

Selenium integration
The plugin’s primary feature is its ability to launch and interact with a web browser. It does this by integrating with either the RC or WebDriver APIs provided by the Selenium browser automation framework. The browser is launched ahead of every test, unless the test is specifically decorated to indicate that it does not require a browser:
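The decoration looks something like this; the mark name skip_selenium is how I remember pytest-mozwebqa spelling it, so check the README if in doubt:

```python
import pytest

# Opt this test out of the browser launch entirely.
@pytest.mark.skip_selenium
def test_api_returns_json():
    # No browser needed: this test would talk to the API directly.
    assert True
```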

The plugin works with a local WebDriver instance, with a remote server (RC or WebDriver), and with Selenium Grid (also RC or WebDriver).

Sauce Labs integration

You can also use this plugin to run your tests in the cloud using your Sauce Labs account. The integration allows you to specify a build identifier and tags, which help when filtering the Sauce Labs jobs. To enable Sauce Labs integration, you simply need to specify a credentials file on the command line, which is a YAML file in the following format:
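As I recall, the credentials file has a shape along these lines (the plugin’s README documents the exact keys):

```yaml
saucelabs:
    username: your-sauce-username
    api-key: your-sauce-api-key
```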

When tests are run using Sauce Labs, there will also be additional items in the HTML report, such as the video of the test and a link to the job.

If you don’t have a Sauce Labs account already, you can sign up for one here. Sauce Labs is used by Mozilla whenever we need to run on browser/platform combinations that our own Selenium Grid doesn’t support, or whenever we need to boost the number of test runs, such as before a big deployment.

The plugin allows you to store your application’s credentials in a YAML file as specified on the command line. This is an important feature for Mozilla, where the credentials files are stored in a private repository. Anyone wanting to contribute or run the tests themselves simply has to create an account and a YAML file.

Fail fast
Have you ever been frustrated when you’ve kicked off your suite of several hundred tests, only for every single one of them to launch a browser despite the application under test being unavailable? It’s happened to me enough times that I added an initial check to ensure the base URL of the application is responding with a 200 OK. This saves so much time.
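A minimal sketch of the idea in modern Python (the plugin’s own implementation differs): run one cheap check before any browsers are launched, and abort the whole run if it fails.

```python
import urllib.request
import urllib.error

def base_url_responding(base_url, timeout=10):
    """Return True only if the application under test answers with 200 OK."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False
```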

Protect sensitive sites
I’ll leave the debate on whether you should be running your tests against sensitive environments such as production for someone else, but if you do decide to do this, the plugin gives you a little bit of extra protection. For a start, all tests are considered destructive and therefore will not run by default. You can explicitly mark tests as non-destructive. Having an opt-in system is more maintenance (I know it’s a pain), but much lower risk. I’d rather accidentally not be running a non-destructive test against production than accidentally run a destructive one, and I have felt this pain before!

Of course, for some environments, you will want to run your destructive tests, and you can do so by specifying the --destructive command line option.
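Opting a test in looks like this; the mark name nondestructive is as used by pytest-mozwebqa, to my recollection:

```python
import pytest

# Tests are considered destructive (and skipped) by default;
# read-only tests must explicitly opt in.
@pytest.mark.nondestructive
def test_home_page_title():
    assert True
```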

There’s also a final safety net just in case you try running destructive tests against a sensitive environment. This skips any destructive tests that are run against a URL that matches a regular expression. For Mozilla, this defaults to mozilla\.(com|org) and any skipped tests will give a suitable reason in your reports.

HTML report

Digging through console logs or JUnit reports can be a little frustrating when investigating failures, so the plugin provides a nicely formatted HTML report. This shows the options used when running the tests, a short summary of the results, and then lists each test along with timings and additional resources where appropriate.

I call the additional resources the test’s “death rattle” as it’s only captured for failing tests (it’s not particularly useful for passing tests, and consumes unnecessary resources). For tests that use Selenium this should at least include a screenshot, the HTML, and the current URL when the failure occurred. If you’re running in Sauce Labs then you should also see the video of the test and a link to the test job.

For full details and documentation of the plugin, take a look over the project’s README file on github. If you find any bugs or have any feature requests please raise them in github issues.

Mozilla application downloader released

This week part of the Automation Tools team have gathered at the London Mozilla Space to work on migrating our Firefox automated UI tests to Mozmill 2.0. A considerable part of this work is converting our automation scripts repository, which contains a number of packages that should really be dependencies. Our intention is to take these packages and either merge them to the appropriate mozbase packages, or configure them as suitable packages in their own right.

The first of these packages to be released independently is our impressive download script, which can be used to download a variety of Firefox or Thunderbird builds. We’ve appropriately named it mozdownload, and have released it on PyPI; the repository can be found on github.

You can install it using pip install mozdownload or easy_install mozdownload and use mozdownload -h for a full list of command line options. A couple of simple examples are provided below:

To download the latest official Firefox release for your platform:

To download the latest official Thunderbird release for your platform:
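The two release examples would look something like this; flag names are taken from mozdownload’s present-day CLI, so the options at the time of writing may have differed slightly:

```shell
# Latest official Firefox release for this platform
mozdownload --type=release --version=latest

# Latest official Thunderbird release
mozdownload --application=thunderbird --type=release --version=latest
```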

Of course we don’t just use this to download the official releases. You can also download latest (or specific) builds from any of the channels with daily builds. Here are a few more examples for daily builds:

To download the Firefox Nightly build from 23rd May 2012:

To download the latest Thunderbird Daily build:
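The daily build examples would look roughly like the following (flag names assumed from mozdownload’s current CLI):

```shell
# Firefox Nightly build from 23rd May 2012
mozdownload --type=daily --date=2012-05-23

# Latest Thunderbird Daily build
mozdownload --application=thunderbird --type=daily
```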

For Firefox there are daily builds for the mozilla-central and mozilla-aurora branches. For Thunderbird these are comm-central and comm-aurora.

Candidate builds can also be downloaded, so for example if you wanted to test a candidate build for the fourth beta of Firefox 13 you could use the following:

Finally, you can also download Tinderbox builds. For example, to download the latest tinderbox build use:
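The candidate and tinderbox examples, with the same caveat that these flags reflect mozdownload’s current CLI rather than the original post:

```shell
# Candidate build for the fourth beta of Firefox 13
mozdownload --type=candidate --version=13.0b4

# Latest tinderbox build
mozdownload --type=tinderbox
```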

If you have any feature requests or find any issues please use github’s issue tracker.

Happy downloading!

Test day + Meetup = Success!

Yesterday WebQA held a combined test day and meetup event in San Francisco. The idea was to bring everyone together to hack on our QA projects alongside us, and in return we would be able to share our own knowledge and experiences, and demonstrate how we test.

The team had managed to pull together over 50 tasks for the evening, ranging from simple beginner level to a few advanced ones, and using a variety of technologies and projects. As people arrived we encouraged them to slap on a name tag and grab a task.

Once we had everyone in the room (I think there were 18 of us in total) we gave a brief introduction from the team over some beer and pizza, and then the fun really started! Of course it didn’t all go smoothly, and we quickly realised that the entire PyPI domain was unavailable, which meant that nobody could download the required Python packages, such as Selenium and pytest! Fortunately we found after a little scrambling that we could use the --use-mirrors command line argument to get us unstuck. In fact, we’ve now even added this to our test jobs running on Jenkins to hopefully make them a little more resilient to PyPI downtime, so in a way I’m glad we had the issues.

Not everyone was working on new tests; in fact, Zac set up a mini grid of Android devices (tablets and mobile), and was giving demonstrations on running web tests on mobile. If you’ve not tried this yet then you should as it’s great fun – and the future is mobile, right?

We also had some great feedback on our documentation, which will soon lead to some further improvements and simplifications. I was really impressed when one of the attendees told me there were some issues with the documentation on our wiki, and then told me they just logged in and fixed it themselves. That’s totally the kind of involvement we’re encouraging and it’s so great to see it happening.

By the end of the evening we had three pull requests submitted (all for different projects), and I’m almost certain that number will increase as some of the other tasks were near completion. We’ll now work on reviewing those pulls and merging them in, and look forward to any more that come our way!

Finally, I want to say a huge thanks to everyone who was able to attend for making it such a great event. We all hope you enjoyed it as much as we did, and look forward to seeing you at future events (and in #mozwebqa on IRC). Oh, and thanks to Kedo for the Ctrl+W keyboard shortcut for Terminal!

Automating BrowserID with Selenium

BrowserID is an awesome new approach to handling online identity. If you haven’t heard of it then I highly recommend reading this article, which explains what it is and how it works. Several Mozilla projects have already integrated with BrowserID, including Mozillians, Affiliates, and the Mozilla Developer Network.

With all of these sites now integrating with BrowserID (and more on their way) we needed to add support to our test automation to handle the new sign in process. Initially we started to do this independently in our projects, but the thought of updating all of our projects whenever a tweak was made to BrowserID was daunting to say the least! For this reason I have created a project that contains a page object model for BrowserID. This can be included in other projects as a submodule and then updated and maintained centrally.

The new project is called ‘BIDPOM’ (BrowserID Page Object Model) and can be found here. It currently only contains a page object for the Sign In page, however this currently meets the needs of the automation for projects that have integrated with BrowserID. As we have a mix of projects using Selenium’s two APIs (RC and WebDriver), it was necessary for BIDPOM to support both.

By adding BIDPOM as a submodule, we can easily pull the BrowserID page objects into our automation projects and reference them in a very similar way to the main project’s page objects. We can also update the version of BIDPOM simply by updating the git link and updating the submodule. What’s even better is that our continuous test builds running in Jenkins automatically initialise and update the submodule for us!

I hope that in addition to being a dependency for our own automation projects, this page object model can be utilised by others wanting to create or maintain automated tests using Selenium against sites that adopt BrowserID. If you would like to start using BIDPOM then I have provided below a guide to adding the project as a submodule to an existing git repository.

From within your project, add the BIDPOM project as a git submodule:
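The command would look something like this; the repository now lives under mozilla/bidpom, though the original post may have used a git:// URL:

```shell
git submodule add https://github.com/mozilla/bidpom.git browserid
```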

This will add an entry to .gitmodules and clone the BIDPOM project to the browserid subdirectory. It will also stage the new gitlink and .gitmodules items for commit.

You can now commit these changes to your project’s repository:
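For example:

```shell
git commit -m "Added BIDPOM as a submodule"
```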

Before you can test the new submodule you will need to run the following command to copy the contents of .gitmodules into your .git/config file.
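That command is:

```shell
git submodule init
```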

Now you can test the submodule by deleting the browserid directory and allowing it to be recreated:
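A sequence along these lines does it:

```shell
rm -rf browserid
git submodule update
```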

The BIDPOM project should be cloned to the browserid directory.

You will now be able to integrate your project with BrowserID! Here are a few examples of how to do so.

Example: Short sign-in using Selenium’s RC API

Example: Long sign-in using Selenium’s RC API

Example: Short sign-in using Selenium’s WebDriver API

Example: Long sign-in using Selenium’s WebDriver API
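As a sketch of what the WebDriver sign-in looks like in practice; class and method names follow my memory of BIDPOM’s API, selenium stands for your live WebDriver instance, and the project wiki has the authoritative usage:

```python
from browserid import BrowserID

# Drive the BrowserID sign-in page object from an existing WebDriver session
browser_id = BrowserID(selenium, timeout=60)
browser_id.sign_in('user@example.com', 'password')
```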

For the latest documentation on the BIDPOM project refer to the github wiki.

Case Conductor pytest plugin proposal

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and want to share my thoughts on how a plugin could help our WebQA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with. It’s important to reference the test case, but I see this as the bare minimum.

Current method

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed/failed depending on the result. In addition to the existing reports, we can then also offer a link to the Case Conductor report for the relevant test run.

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by WebQA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.

Command options

I would propose several command line options in order to cover the above-mentioned functionality. In the form of output from --help, here are my suggestions:
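As a sketch, the options might look something like this; the names are illustrative rather than a firm proposal:

```text
caseconductor:
  --cc-url=URL        base URL of the Case Conductor instance
  --cc-username=STR   username for the Case Conductor API
  --cc-password=STR   password for the Case Conductor API
  --cc-product=STR    product to report against
  --cc-cycle=STR      test cycle to report against
  --cc-run=STR        test run to create or update
  --cc-suites=STR     comma-separated list of test suites
  --cc-coverage       report automated coverage and exit
```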

Some of these would be mandatory but could fail with useful messages if omitted. For example, if the product was not provided then a list of available products could be returned. The same could be done for test cycles and test runs.