Populating Firefox OS with test content

Working on Firefox OS automation, it’s often been necessary to populate a device with some sample content. For example, when measuring the launch time of the contacts app, it’s more realistic if we already have a bunch of contacts on our phone. To solve this, I created a small Python package called b2gpopulate, which uses Web APIs and mozdevice to push various types of content to a device with Marionette enabled.

To install b2gpopulate you will need Python and can simply run pip install b2gpopulate from the command line. If you don’t have pip installed then you can also use easy_install b2gpopulate. Running b2gpopulate is pretty straightforward; however, you will need to have a Firefox OS device connected that’s running Marionette, and you will need to forward port 2828 by running adb forward tcp:2828 tcp:2828. The following example will populate the connected device with 200 of each content type:
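
(Sketch only: the option names below are an assumption based on b2gpopulate’s command line help rather than a copy of the original example, so confirm them with b2gpopulate --help for the version you have installed.)

b2gpopulate --contacts 200 --messages 200 --music 200 --pictures 200 --videos 200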

Note that before pushing a database the b2g process is stopped, so don’t panic if you see your device restarting. Run b2gpopulate --help for full usage instructions.

Contacts

Initially I used just the Contacts API to add/remove contacts from the device, but this is a pretty slow process, especially for a large number of contacts. After finding out about the reference workload that Gaia uses in its build I modified this to push a prebuilt database of contacts. This is then topped up using the Contacts API as needed. There are prebuilt databases for 200, 500, 1000, and 2000 contacts.

Messages

The most recent addition to b2gpopulate is messages. Like contacts, this pushes a prebuilt database of 200, 500, 1000, or 2000 messages. Unlike contacts, there is currently no option to top this up.

Pictures & Videos

This uses mozdevice to push a reference picture or video to the device and then performs a remote copy. In a future version I would like to alternate through a number of reference files so there’s some variance.

Music

This has changed in the version of b2gpopulate I released today. Previously it worked in exactly the same way as the pictures and videos, but because the metadata doesn’t vary between the files, the music app doesn’t distinguish between them. Now, the metadata is modified for each file using mutagen, and the album/artist is changed every ten tracks.
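
As an illustration of the approach (this is not the actual b2gpopulate code, just a minimal Python sketch using mutagen’s EasyID3 interface, with an invented file name and naming scheme):

from mutagen.easyid3 import EasyID3

def retag(filename, index):
    # Give each block of ten tracks its own artist and album so the
    # music app groups them separately.
    audio = EasyID3(filename)
    audio['title'] = 'Track %d' % index
    audio['artist'] = 'Artist %d' % (index // 10)
    audio['album'] = 'Album %d' % (index // 10)
    audio.save()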

I suspect there will be a need for more content types in the future. For example, we could potentially add events, alarms, history, favourites, bookmarks, emails, etc. If you’re interested in contributing, you can find the repository on GitHub.

More realistic endurance test results

If you’re not already familiar with the Firefox endurance tests, these are Mozmill tests that repeat a small snippet of user interaction over and over again while gathering metrics. This allows us to detect if there’s a memory leak in a very localised area, or if there’s a memory regression within the areas tested. I’ve blogged about them a few times.

We’ve known for a while that the results we’ve been getting aren’t entirely realistic, and this is due to the fact that we only wait for 0.1 seconds between each iteration. This doesn’t give Firefox any time to perform tasks such as garbage collection. Unfortunately we couldn’t just increase this delay as that would cause other Mozmill tests to be queued behind the much longer running endurance tests.

So now that we have our new VMware ESX cluster in place (which has given us an awesome three VMs per platform), we’ve configured Jenkins to run endurance tests on just one node per platform. This allows other Mozmill tests to continue on the remaining available nodes. We were then finally able to increase the delay to 5 seconds.

The results are as we had hoped: memory usage has dropped, and the duration has increased. The individual testrun results have also become a lot less erratic. This can be seen in the following charts:

It should now be much easier for us to spot regressions, and hopefully we’ll have fewer false positives! If you’re interested in the latest endurance results, you can find them in our Mozmill Dashboard, along with the endurance charts.

Related bugs/issues:

  1. Bug 788531 – Revise default delay for endurance test to make scenarios more realistic
  2. Issue 173 – Have dedicated nodes for endurance tests
  3. Issue 201 – Revise default delay for all endurance jobs
  4. Issue 203 – Increase build timeout for endurance tests

Running Firefox OS UI Tests Without a Device

Note: This post has been revised.

It’s a little difficult to get your hands on a device that can run Firefox OS right now, but if you’re interested in running the UI tests a device is not essential. This guide will show you how to run the tests on the nightly desktop client builds we provide.

Step 1: Download the latest desktop client

The Firefox OS desktop client lets you run Gaia (the UI for Firefox OS) and web apps in a Gecko-based environment somewhat similar to an actual device. The desktop client has certain limitations: it doesn’t emulate device hardware (camera, battery, etc.), it doesn’t support carrier-based operations such as sending/receiving messages or calls, and it relies on the network connection of the machine it’s running on.

You can download the latest build of the desktop client from this location, but make sure you download the appropriate file for your operating system. Unfortunately, due to bug 832469 the nightly desktop builds do not currently work on Windows, so you will need either Mac or Linux (a virtual machine is fine) to continue:

  • Mac: b2g-[VERSION].multi.mac64.dmg
  • Linux (32bit): b2g-[VERSION].multi.linux-i686.tar.bz2
  • Linux (64bit): b2g-[VERSION].multi.linux-x86_64.tar.bz2

Once downloaded, you will need to extract the contents to a local folder. For the purposes of the rest of this guide, I’ll refer to this location as $B2G_HOME.

Step 2: Enable Marionette

Marionette is a test framework built into Gecko that allows remote control of the application. The Gaia UI tests use Marionette to launch applications and simulate a user interacting with them. By default, this is enabled in the desktop client but it is necessary for us to set a preference in the default profile before we can run the tests.

Add the following line to your gaia/profile/user.js file, which on Mac is located in $B2G_HOME/B2G.app/Contents/MacOS and on Linux in $B2G_HOME/b2g.
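
(Assuming the pref below is still the one needed; if it doesn’t work for your build, check the gaia-ui-tests README for the current line.)

user_pref('marionette.force-local', true);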

Step 3: Start Firefox OS

You can start Firefox OS by double-clicking $B2G_HOME/B2G.app (Mac) or running $B2G_HOME/b2g/b2g (Linux). If everything went well, you should see the ‘powered by’ screen shortly followed by the first launch app. Complete the configuration steps and optionally follow the tour, and you will be presented with the lock screen. Unlock by dragging the bar up and clicking the padlock. You should then be presented with the home screen (shown here).

Take a moment to familiarise yourself with Firefox OS. Launch a couple of applications, change some settings. You’ll soon discover the limitations of the simulator. Probably the most noticeable difference is that there are no home/power/volume buttons as there would be on a device. The most useful of these is the home button, which allows you to return to the home screen or to switch between open apps. You should be able to use the home key on your keyboard as a substitute. Here are some more usage tips.

Step 4: Run the tests!

Now you’ve got the simulator running, you can clone and run the automated UI tests against it. You will need to have git and Python installed (I recommend using version 2.7), and I highly recommend using virtual environments.

First, clone the gaia-ui-tests repository using the following command line, where $WORKSPACE is your local workspace folder:
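
(The URL below assumes the canonical mozilla/gaia-ui-tests repository on GitHub; adjust it if you are cloning from elsewhere.)

git clone https://github.com/mozilla/gaia-ui-tests.git $WORKSPACE/gaia-ui-tests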

If you’re using virtual environments, create a new environment and activate it. You will only need to create it once, but will need to activate it whenever you wish to run the tests:
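
(For example, using virtualenv; the environment name and location are arbitrary.)

virtualenv $WORKSPACE/.venv
source $WORKSPACE/.venv/bin/activate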

Now you need to install the test harness (gaiatest) and all of its dependencies:
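
(A typical way to do this from the cloned repository, assuming it ships a standard setup.py:)

cd $WORKSPACE/gaia-ui-tests
python setup.py develop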

Once this is done, you will have everything you need to run the tests. Because we’re running against the desktop client we must filter out all tests that are not appropriate. This list may grow, but it currently includes tests that use: antenna, bluetooth, carrier, camera, sdcard, and wifi. You will probably also want to exclude any tests that are expected to fail (xfail). To run the tests, use the following command:
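
(The invocation below is a sketch; the --address and --type values reflect how the gaiatest runner filtered manifests at the time and are an assumption on my part, so check gaiatest --help if it doesn’t match your version.)

gaiatest --address=localhost:2828 --type=b2g-antenna-bluetooth-carrier-camera-sdcard-wifi-xfail gaiatest/tests/manifest.ini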

You should then start to see the tests running, with output similar to the following:

The first tests that run are unit tests for the gaiatest harness, so you won’t immediately see much happening in the simulator. You may encounter test failures, and we’re currently focusing on getting these resolved. You may also encounter bug 844498, which has the nasty side-effect of causing all remaining tests to fail. If this happens just try running the suite again for now.

The video shows a full suite run against the simulator. Note that where tests time out I have either cropped the video or increased the speed. This is just to keep the video shorter.

Step 5: Contribute?

Now that you can run the tests, you’re in a great position to help us out! Our first focus is to get all the tests passing against the desktop build, but then we need to identify missing areas of coverage that are relevant to the simulator.

To contribute, you will need to set up a github account and then fork the main gaia-ui-tests repository. You will then need to update your local clone so it’s associated with your fork rather than the main one. You can do this with the following commands, replacing $USERNAME with your github username:
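
(A sketch using standard git commands; the SSH URL assumes you have keys set up with GitHub, and the upstream remote is optional but handy for pulling in later changes.)

cd $WORKSPACE/gaia-ui-tests
git remote set-url origin git@github.com:$USERNAME/gaia-ui-tests.git
git remote add upstream https://github.com/mozilla/gaia-ui-tests.git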

You can now create a branch, and make your changes. Once done, you should commit your changes and push them to your fork before submitting a pull request. I’m not going to cover these steps in detail here, as they’re fairly standard git practices and will be covered in far better detail elsewhere. In fact, github:help has some fantastic documentation.

If you’re looking for a task, you should first check the desktop issues list on github. If there’s nothing available there, see if you can find an area that needs more coverage. Feel free to add an issue and a comment to say you’ll work on it.

You can also ask us for tasks! There are several mailing lists that you can sign up to: Automation Development, Web QA, and B2G QA. We’re also on IRC, and you can find us in #automation, #mozwebqa, and #appsqa all on irc.mozilla.org.

Mozilla drops usage of Selenium RC

I thought this was important enough to share in a short blog post… Just 10 months ago, Mozilla started to migrate their Selenium projects from the Selenium RC API to the WebDriver API. I’m thrilled to say that this is now complete, and that no Selenium RC projects are actively being run or maintained!

Mozilla application downloader released

This week part of the Automation Tools team has gathered at the London Mozilla Space to work on migrating our Firefox automated UI tests to Mozmill 2.0. A considerable part of this work is converting our automation scripts repository, which contains a number of packages that should really be dependencies. Our intention is to take these packages and either merge them into the appropriate mozbase packages, or configure them as suitable packages in their own right.

The first of these packages to be released independently is our impressive download script, which can be used to download a variety of Firefox or Thunderbird builds. We’ve appropriately named it mozdownload and released it on PyPI, and the repository can be found on GitHub.

You can install it using pip install mozdownload or easy_install mozdownload and use mozdownload -h for a full list of command line options. A couple of simple examples are provided below:
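
(The invocations that follow are sketches rather than copies of the originals; the --application, --type, --version, --date, and --branch flag names are assumptions based on mozdownload’s command line help, so double-check them with mozdownload -h.)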

To download the latest official Firefox release for your platform:
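
mozdownload --type=release --version=latest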

To download the latest official Thunderbird release for your platform:
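
mozdownload --application=thunderbird --type=release --version=latest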

Of course we don’t just use this to download the official releases. You can also download the latest (or specific) builds from any of the channels with daily builds. Here are a few more examples for daily builds:

To download the Firefox Nightly build from 23rd May 2012:
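
mozdownload --type=daily --branch=mozilla-central --date=2012-05-23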

To download the latest Thunderbird Daily build:
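
mozdownload --application=thunderbird --type=daily --branch=comm-central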

For Firefox there are daily builds for the mozilla-central and mozilla-aurora branches. For Thunderbird these are comm-central and comm-aurora.

Candidate builds can also be downloaded, so for example if you wanted to test a candidate build for the fourth beta of Firefox 13 you could use the following:
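
mozdownload --type=candidate --version=13.0b4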

Finally, you can also download Tinderbox builds. For example, to download the latest tinderbox build use:
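
mozdownload --type=tinderbox --branch=mozilla-central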

If you have any feature requests or find any issues please use github’s issue tracker.

Happy downloading!

Automating BrowserID with Selenium

BrowserID is an awesome new approach to handling online identity. If you haven’t heard of it then I highly recommend reading this article, which explains what it is and how it works. Several Mozilla projects have already integrated with BrowserID, including Mozillians, Affiliates, and the Mozilla Developer Network.

With all of these sites now integrating with BrowserID (and more on their way) we needed to add support to our test automation to handle the new sign in process. Initially we started to do this independently in our projects, but the thought of updating all of our projects whenever a tweak was made to BrowserID was daunting to say the least! For this reason I have created a project that contains a page object model for BrowserID. This can be included in other projects as a submodule and then updated and maintained centrally.

The new project is called ‘BIDPOM’ (BrowserID Page Object Model) and can be found here. It currently contains only a page object for the Sign In page; however, this meets the current needs of the automation for projects that have integrated with BrowserID. As we have a mix of projects using Selenium’s two APIs (RC and WebDriver), it was necessary for BIDPOM to support both.

By adding BIDPOM as a submodule, we can easily pull the BrowserID page objects into our automation projects and reference them in a very similar way to the main project’s page objects. We can also update the version of BIDPOM simply by updating the git link and updating the submodule. What’s even better is that our continuous test builds running in Jenkins automatically initialise and update the submodule for us!

I hope that in addition to being a dependency for our own automation projects, this page object model can be utilised by others wanting to create or maintain automated tests using Selenium against sites that adopt BrowserID. If you would like to start using BIDPOM then I have provided below a guide to adding the project as a submodule to an existing git repository.

From within your project, add the BIDPOM project as a git submodule:
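
(This assumes the project lives in the mozilla/bidpom repository on GitHub and that you want the submodule in a directory named browserid.)

git submodule add https://github.com/mozilla/bidpom.git browserid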

This will add an entry to .gitmodules and clone the BIDPOM project to the browserid subdirectory. It will also stage the new gitlink and .gitmodules items for commit.

You can now commit these changes to your project’s repository:
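
(For example; the commit message is up to you.)

git commit -m "Add BIDPOM as a submodule"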

Before you can test the new submodule you will need to run the following command to copy the contents of .gitmodules into your .git/config file.
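
(This is git’s standard submodule initialisation step.)

git submodule init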

Now you can test the submodule by deleting the browserid directory and allowing it to be recreated:
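
(For example:)

rm -rf browserid
git submodule update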

The BIDPOM project should be cloned to the browserid directory.

You will now be able to integrate your project with BrowserID! Here are a few examples of how to do so.

Example: Short sign-in using Selenium’s RC API

Example: Long sign-in using Selenium’s RC API

Example: Short sign-in using Selenium’s WebDriver API

Example: Long sign-in using Selenium’s WebDriver API
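
Here is a minimal sketch of the WebDriver short sign-in, assuming the BrowserID class and its sign_in method as the entry point; treat the exact names and arguments as indicative and see the project wiki for the definitive usage:

from browserid import BrowserID

# 'driver' is an existing WebDriver session on a page that has already
# triggered the BrowserID sign-in pop-up or frame.
browser_id = BrowserID(driver, timeout=60)
browser_id.sign_in('user@example.com', 'password')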

For the latest documentation on the BIDPOM project refer to the github wiki.

Case Conductor pytest plugin proposal

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and want to share my thoughts on how a plugin could help our WebQA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with. It’s important to reference the test case, but I see this as the bare minimum.

Current method
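
For example (the test name and ID here are invented for illustration):

def test_sign_in(self):
    """Litmus 12345 - Sign in with valid credentials."""
    ...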

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method
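
A sketch of what this could look like, with a placeholder mark name and IDs:

import pytest

@pytest.mark.caseconductor(12345)
def test_sign_in(self):
    ...

@pytest.mark.caseconductor([12345, 67890])
def test_sign_out(self):
    ...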

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed/failed depending on the result. In addition to the existing reports, we can then also offer a link to the Case Conductor report for the relevant test run.

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by WebQA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.

Command options

I would propose several command line options in order to cover the above-mentioned functionality. In the form of output from --help, here are my suggestions:
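
(Purely to illustrate the shape this could take; treat the option names below as placeholders.)

  --caseconductor-url=URL        base URL of the Case Conductor instance
  --caseconductor-product=NAME   product to associate results with
  --caseconductor-cycle=NAME     test cycle to associate results with
  --caseconductor-run=NAME       test run to create or update
  --caseconductor-suites=NAMES   comma separated list of test suites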

Some of these would be mandatory but could fail with useful messages if omitted. For example, if the product was not provided then a list of available products could be returned. The same could be done for test cycles and test runs.

Q3/2011 in review

In the hope that I might inspire others to do the same, I’ve created a few screencasts showing some of the cool things I worked on in the last quarter. I’ve tried to keep them all short, and they’re all available in HD so no need to squint to see details.

pytest plugin for WebQA

Endurance tests daily results

System graphics details in endurance reports

Running the Mozmill tests in Jenkins

Running the Selenium IDE Mozmill tests in Bamboo

Adding Mozmill tests to the Selenium IDE build system

Back in April I blogged about the Mozmill tests I’d written to test Selenium IDE. I followed up in June with a blog post covering how to run these tests. The natural progression is to add these tests into the existing Selenium IDE build environment, which is run using Atlassian’s continuous integration server, Bamboo.

At Mozilla, we want to run the Mozmill tests for the released versions of add-ons against the latest builds of Firefox. This is to determine any regressions in Firefox that will potentially cause issues for the add-on authors. It also gives the add-on authors an early warning if there is a potential compatibility issue with an upcoming release of Firefox. We currently run these tests on a schedule, but will soon be looking to move to a continuous integration solution ourselves. You can see the results of these daily tests on our dashboard.

The add-on authors (Selenium in this case) are typically more interested in knowing whether the latest build of their add-on functions in the current version of Firefox, which gives them confidence to release bug fixes and new features without regressions. As the Selenium project already uses continuous integration, adding the Mozmill tests to it moves us towards achieving the same within Mozilla, and immediately benefits the add-on author.

The Selenium IDE project plan in Bamboo has three stages: Build, Test, and Package. I’m going to focus on the Test stage in this blog post, as the other stages are very specific to Selenium. If you’re reading this and you develop a Firefox add-on then you should be able to apply the following to your project without too much tweaking.

Prerequisites

There are just two prerequisites for running these tests:

  • Python: The automation scripts are written in Python, so this is required.
  • Mercurial: You must have the Python modules for Mercurial installed, as this is the source code management tool that the automation scripts and tests are using.

Tasks

There are several tasks to complete for the Test stage. Below I list these tasks, with an explanation and configuration steps for each.

Clone mozmill-tests

Because we want to run the tests against a specific build of the add-on rather than the latest release, we need to clone the mozmill-tests repository and override the addon.ini file with the location of our target add-on. This is simply a Mercurial task, as follows:

hg clone http://hg.mozilla.org/qa/mozmill-tests

Note that if you have Mercurial set up as an executable in your continuous integration server then you may not need the ‘hg’ part of this command, as it will be substituted based on the agent running the command.

Switch branch

By default the mozmill-tests repository will be set to the default branch. This is paired with the mozilla-central branch of Firefox, and therefore is relevant only when running tests against the very latest nightly builds of Firefox. As we’re interested in testing against the latest release, we switch to the mozilla-release branch.

hg checkout mozilla-release

You will need to make sure this command is run within the directory of the cloned mozmill-tests repository, which by default will be simply mozmill-tests.

Create addon.ini

The addon.ini file tells the script where to download/install the add-on from. If you didn’t create your own then it would likely be downloaded from addons.mozilla.org or the add-on author’s preferred download location for the latest release. The file is essentially an ini file, with locations for linux, mac, and windows. If your continuous integration server provides a link to the latest artifacts then you can add this, or you can use some sort of artifact sharing such as Bamboo provides. The following commands create a suitable addon.ini for Selenium IDE.

echo "[download]" > tests/addons/ide@seleniumhq.org/addon.ini
echo "linux=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini
echo "mac=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini
echo "win=file://${bamboo.build.working.directory}/selenium-ide.xpi" >> tests/addons/ide@seleniumhq.org/addon.ini

If you’re using Jenkins, then the Copy Artifact Plugin could be useful for sharing artifacts between builds.

Commit addon.ini

It’s necessary to commit the replacement addon.ini file so that it is included when the local repository is cloned when running the tests.

hg commit -m 'Specify latest build of addon.'

Note: It really doesn’t matter what you put for the commit message here as this commit is not preserved between builds.

Download latest Firefox release

Rather than having to make sure your build agent always has the latest version of Firefox installed, there’s a handy script that can download this for you. This is a Python script, and therefore needs to be set up to use the Python executable.

./download.py --directory=latest-release --platform=mac --type=release --version=latest

Substitute the value of the platform for whatever platform your build agent is running. The latest release of Firefox will be downloaded to the latest-release directory. Note that the version value of ‘latest’ is relying on a symlink on Mozilla’s FTP server, that points to the directory of the latest released version number.

Run Mozmill tests

You can now run the script that executes the Mozmill tests.

./testrun_addons.py --junit=results.xml --logfile=results.log --repository=mozmill-tests --target-addons=ide@seleniumhq.org --with-untrusted latest-release

Here’s an explanation of the command line options:

  • The junit command line option determines where the results will be stored. The JUnit report format is one supported by many continuous integration servers, and often provides some nice reporting and visualizations of the results. A counter is appended to the destination filename for each file created; for example, results.xml becomes results_0.xml.
  • Adding the logfile is optional, however this can be a useful build artifact if you have failures.
  • As we’re using a locally modified repository, we need to specify the location of this using the repository command line option. The default location will be mozmill-tests.
  • We need to set the target add-on using the target-addons option. This must match the directory beneath tests/addons, which in the case of Selenium IDE is ide@seleniumhq.org.
  • The with-untrusted flag is necessary if the add-on is not hosted at addons.mozilla.org, as any add-on hosted by Mozilla is implicitly trusted. Selenium IDE is not currently hosted by Mozilla, so we need this flag.
  • Finally, the path to the Firefox binary is needed. It’s possible to simply point to a directory that contains a downloaded copy of Firefox, so we just use latest-release as that’s where our download task was told to put Firefox.

Parse test results

As mentioned above, a lot of continuous integration servers support results in JUnit report format, so your final task may be to specify the location of these files. If you used the example given above, then you will specify these using results*.xml.

The Bamboo instance for Selenium is publicly viewable, so you can see the results of recent builds for Selenium IDE, and the report for the latest build. You can also see the results on the Mozmill archive dashboard. The Selenium IDE project is built whenever a change is committed to the core or dependent code sections of the repository. It can also be triggered manually.

Unfortunately the Selenium build hardware is experiencing stability issues at the time of writing this, meaning that there is not always a suitable build agent for the Mozmill tests.

Known issues

Currently there is an issue with the Mozmill automation script, in that it will exit without an error code even when tests have failed. Fortunately, the continuous integration servers that I’ve been working with update the build status based on the JUnit reports. If this wasn’t the case then builds with failing tests would be incorrectly reported as successful. We have bug 626712 on file for this issue, and it will hopefully be resolved soon.

QA Automation Services Work Week 2011 – Day 1

QAASWW11 kicked off yesterday with a day of planning at IdeaSpace in Cambridge, UK. We had a meeting room for the day – kindly offered up by our new friends at Springboard – and plenty of instant whiteboard! As with the last work week I attended, it was organised UnCon style, which worked really well before. I will say that the first day is usually the most painful, as the entire team filled out their thoughts/needs for the week onto post-it notes, which were gradually organised into groups and ultimately sessions with agendas. Once this was finally done, the schedule for the week was set out. Although the schedule is incredibly flexible, it really helps to have this set of intentions outlined on the first day.

In the evening, the Springboard teams were kind enough to practice their investor pitches on us, and we saw 10 very promising ideas presented by an incredibly smart and enthusiastic bunch of people. I noticed a very strong lean towards mobile devices in the pitches, which really reflects the current state of the market and the direction things are heading.

After the pitches, everybody relaxed with well-deserved beer and pizza, and we had an opportunity to talk with the Springboard guys, who are in their last week.

So ends the first day. Tomorrow we will be working from a cottage, which unfortunately we have already discovered has a slow connection to the Internet. This won’t affect our work week sessions, but it will be an obstacle during the time we have scheduled to get on with our day-to-day work activities, as well as communicating with our colleagues and community around the world! If you want to follow our activities for the week, we have created the Twitter hashtag #mozautoqa.

Good luck to everyone at Springboard for the investor pitches on Friday, and thank you so much for inviting us to spend the day with you at IdeaSpace!