London Tester Gathering with Mozilla

Last week several members of the Mozilla QA team were in London, and we took advantage of this by sponsoring one of the regular London Tester Gatherings. We were really keen on keeping it a social/networking event, but also really wanted to show off some of the awesome stuff Mozilla are doing and the challenges that we’re facing in testing.

London Tester Gathering Meetup

Matt Evans kicked things off with a short presentation about Mozilla’s mission, our focus on community and how to get involved, and also introduced the team at the event. Here’s who we had there:

  • Matt Evans – QA Director
  • Juan Becerra – Desktop QA
  • Tony Chung – Mobile QA
  • Henrik Skupin – Desktop Automation
  • David Burns – WebQA Automation
  • Dave Hunt – WebQA/Desktop Automation
  • Desigan Chinniah (Dees) – Mozilla Labs

My plan was to do ad-hoc demos of automated Firefox endurance tests and add-ons tests, but I ended up spending most of the evening in various engaging conversations about tools, community, web apps, and mobile!

London Tester Gathering Meetup

Of course it wouldn’t be a Mozilla event without some cool swag to give away. We had various shirts, bags, and posters, which we raffled off. A cool side-effect of giving out raffle tickets is that you get a good idea of the number of attendees: we handed out tickets to 55 people, and certainly filled the room well!

Congratulations to all of our winners, and thank you everyone for coming along and helping to make it such an awesome event. Thanks also to Tony Bruce for hosting the event and helping me to organise it. I’ll be trying to make it to these gatherings regularly, and hopefully Mozilla will be able to sponsor another one in the future!

Check out all of Henrik’s photos of the event.

Testing Selenium IDE with Mozmill

Mozmill tests can be written for any Gecko-based application, and can therefore be used to test Firefox extensions (add-ons). Since October I have been working on a new suite of tests for the Selenium IDE extension, in the hope that we will be able to discover any regressions in new versions of either the add-on or Firefox itself. Another reason for creating the suite is to demonstrate the ease with which such tests can be written, and to encourage add-on developers to create test suites themselves.

Once tests have been created they can be checked into the Mozmill Tests repository. We will soon be running these on a daily basis against nightly Firefox builds and making reports available on our public dashboard.

The Selenium IDE tests currently comprise three major parts:

  1. The shared module (selenium.js) abstracts the tests from the location of elements, provides centralised methods for common tasks, and exposes properties based on the UI.
  2. The checks helper module (checks.js) provides methods for common assertions to avoid duplication across tests.
  3. The tests themselves.

There are currently just 20 tests, which basically execute Selenium commands and check that they pass or fail as expected. Below is a guide to constructing one of these tests, written as if you weren’t using the shared module or checks helper module; the snippets are sketches rather than the exact suite code. The test verifies that the assertText command executes and passes as expected.

First, we open Selenium IDE using the Tools menu item:
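In Mozmill this might look something like the following – the menu item ID and the window type string here are assumptions, not taken from the real suite:

// Open Selenium IDE from the Tools menu (element ID is illustrative)
controller.click(new elementslib.ID(controller.window.document, "menu_seleniumIDE"));

// Wait for the Selenium IDE window to appear, then wrap it in its own
// controller (the window-mediator lookup and type string are assumptions)
controller.waitFor(function () {
  return mozmill.wm.getMostRecentWindow("global:selenium-ide") !== null;
}, "Selenium IDE window has been opened");
var seleniumController = new mozmill.controller.MozMillController(
    mozmill.wm.getMostRecentWindow("global:selenium-ide"));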

Then we clear the current contents of the Base URL field by selecting all of the text in it and hitting the delete key, before typing in our test data. You will notice here a reference to the getElement method, which allows us to gather all locators in a single location for less duplication, and much simpler test maintenance:
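A sketch of this step, assuming a small getElement helper whose locator map grows as the test needs more elements (the XUL element ID is an assumption):

function getElement(name) {
  // All locators live in this one map, so a UI change means a one-line fix
  var locators = {
    baseURL: new elementslib.ID(seleniumController.window.document, "baseURL")
  };
  return locators[name];
}

var baseURL = getElement("baseURL");
seleniumController.keypress(baseURL, "a", {accelKey: true});  // select all
seleniumController.keypress(baseURL, "VK_DELETE", {});        // clear the field
seleniumController.type(baseURL, "http://example.com/");      // our test data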

Now we add three new commands to our Selenium test case by selecting the next available row and typing into the various fields:
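In sketch form, with illustrative command/target/value triples and locator names assumed to be in the getElement map:

var commands = [
  {command: "open", target: "/test.html"},
  {command: "assertText", target: "css=h1", value: "Hello"},
  {command: "echo", target: "finished"}
];

commands.forEach(function (cmd) {
  // Select the next blank row in the test case grid, then fill in the fields
  seleniumController.click(getElement("nextCommandRow"));
  seleniumController.type(getElement("commandField"), cmd.command);
  seleniumController.type(getElement("targetField"), cmd.target || "");
  seleniumController.type(getElement("valueField"), cmd.value || "");
});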

With our commands in place, we click the toolbar button to execute the test and wait for the test to complete:
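Something like the following, where the toolbar button locator and the completion check are both assumptions:

seleniumController.click(getElement("playTestButton"));

// Wait until the suite progress indicator reports a result either way
seleniumController.waitFor(function () {
  var indicator = getElement("suiteProgressIndicator").getNode();
  return /success|failure/.test(indicator.className);
}, "Test has finished running");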

Now that the test has run, we check that the suite progress indicator has the ‘success’ class:
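A sketch of that check:

seleniumController.assert(function () {
  return getElement("suiteProgressIndicator").getNode().className === "success";
}, "Progress indicator shows success");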

We also check that the run counts are correct. The total number of tests run should be 1, and the number of failed tests should be 0:
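Assuming the counts are exposed as XUL labels with a value property:

var runCount = getElement("suiteRunCount").getNode();
var failureCount = getElement("suiteFailureCount").getNode();
seleniumController.assert(function () {
  return runCount.value === "1";
}, "One test has been run");
seleniumController.assert(function () {
  return failureCount.value === "0";
}, "No tests have failed");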

Now we check that there are no errors in the log console:
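A sketch, assuming error entries in the log carry an ‘error’ class:

var logConsole = getElement("logConsole").getNode();
seleniumController.assert(function () {
  return logConsole.querySelectorAll(".error").length === 0;
}, "No errors reported in the log console");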

Because the command we are testing should pass, we also check that the final command was executed:
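A sketch, assuming executed commands are styled with a ‘commandPassed’ class:

var finalCommand = getElement("finalCommandRow").getNode();
seleniumController.assert(function () {
  return /commandPassed/.test(finalCommand.className);
}, "Final command was executed and passed");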

Finally, we close Selenium IDE:
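Which is as simple as closing its window:

seleniumController.window.close();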

As there are a lot of things here that will be shared across several tests, you can see that there would be a lot of duplicated code. This is the reason we abstract the useful user interface interactions into the shared module and the useful checks into a helper module. A nice side-effect of this is the test becoming much more readable. Below is the same test as above but calling out to the additional modules:
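As a sketch (the module paths, the SeleniumManager name, and the checks method are assumptions based on the descriptions above, not the actual suite):

// How the shared modules are loaded here is illustrative
var selenium = require("../shared-modules/selenium");
var checks = require("../shared-modules/checks");

var setupModule = function (module) {
  module.controller = mozmill.getBrowserController();
  module.sm = new selenium.SeleniumManager();
  sm.open(controller);
};

var testAssertTextPasses = function () {
  sm.baseURL = "http://example.com/";
  sm.addCommand({command: "open", target: "/test.html"});
  sm.addCommand({command: "assertText", target: "css=h1", value: "Hello"});
  sm.playTest();
  checks.commandPassed(sm);  // success indicator, run counts, log, final command
};

var teardownModule = function () {
  sm.close();
};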

As mentioned, there are currently only 20 automated tests for Selenium IDE, and we need more! If you’re interested in helping out and you have any questions, then you can either get in touch with me directly, ask in the #selenium IRC channel on Freenode, or post a message to the selenium-developers Google group.

Running endurance tests with Mozmill Crowd

Ahead of our Mozmill Crowd testday last Friday we made some changes to the endurance tests, including enabling endurance test runs within Mozmill Crowd! Running the endurance tests is now even easier – simply install the Mozmill Crowd extension, and in just a few clicks the tests will be running. We also updated the endurance dashboard reports and pushed them to our Mozmill Crowd report server.

I’ve created a short screencast that demonstrates installing Mozmill Crowd, running the endurance tests, and reviewing the results:

If you want to run with add-ons installed then you’ll still need to use the command line for now (support in Mozmill Crowd is planned).

It’s also important to note that the delay is now specified in seconds, not milliseconds.
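For example, to run the tests from the command line with five seconds between iterations:

./testrun_endurance.py --delay=5 --iterations=10 /Applications/Firefox.app/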

Endurance Results from Test Day

Today I finally finished reviewing the hundreds (yes, hundreds!) of endurance reports that were submitted on our Firefox 4 add-ons test day last Friday and in the days following. It was amazing to see so many reports coming in, and I would like to thank everyone who completed an endurance test run. By far the most active contributor was pxbuz, to whom I’m extra grateful!

Across all of the test runs, three major issues were discovered:

The first is that we really need to improve the reporting system. Going through the results was a long and tedious job, so I will be thinking about how I can improve that experience.

Secondly, we need to come up with a way to dismiss any modal dialogs that add-ons might show on first run. There were a couple of these that resulted in what looked like memory leaks, but it turns out these would be impossible to replicate manually.

I saved the best for last – we found a memory leak when the Greasemonkey add-on is installed! It seems that when entering/leaving private browsing mode there is memory allocated but not released. A bug has been raised and hopefully it’ll soon be resolved. Greasemonkey is one of our most popular add-ons with a current average active daily usage of over 2.5 million users!

Below you can see how the memory leak was spotted. On the left is an example of an endurance test run without any add-ons installed, and on the right is a test run with Greasemonkey installed. The five spikes that start around the 500-checkpoint mark occur during the private browsing test.

You can see the actual reports here and here.

Automated Firefox tests with add-ons installed

Mozmill has a feature that allows the tester to install an add-on during the test run. Until recently this was only used by one of our automated testruns, which was specifically for testing the installed add-on rather than Firefox itself.

With the recent development on the endurance tests project, it has been necessary to take more advantage of this feature. A bug was reported where, with Adblock Plus – our most popular add-on – installed, memory usage increased rapidly when navigating a web page, and none of the memory was being released. To start investigating this I created a very basic (and specific) test for the site mentioned in the bug report, and simply hacked something together based on the existing add-ons testrun. A short time later the need to run the endurance tests with multiple add-ons installed came up, so I hacked some more to get that in place too. Rather than keep these hacks around, it made sense to allow testers to specify add-ons to be installed during any of our testruns, so I started work on the necessary patches.

As a result, testers can now run any of our automation scripts with one or more add-ons installed, simply by specifying the --addons command line parameter. To install multiple add-ons, repeat the parameter. The argument can either be a path on your machine or a URL on a web/FTP server; in the latter case the add-on will be downloaded to a temporary location before the testrun and removed at the end. The latest version of Mozmill (1.5.2) now also disables the compatibility check, meaning that we can run tests with add-ons that are not marked as compatible with the version of Firefox in use.

An example of running the endurance tests with two add-ons installed:

./testrun_endurance.py --addons=https://addons.mozilla.org/firefox/downloads/latest/748/addon-748-latest.xpi --addons=/Users/dave/Downloads/noscript.xpi --delay=1000 --iterations=10 /Applications/Firefox.app/

Introducing Firefox Endurance Testing

Since late last year I have been working on a prototype of an Endurance Testing project for Firefox. The idea is to use our existing Mozmill framework for automating UI testing of Firefox to write tests that stress and strain the browser over time. I’ve heard many times from people that Firefox needs to be restarted once in a while because it’s become sluggish, and indeed I’ve experienced this myself. The problem is that there are rarely clear steps to reproduce these issues, as they are normally an accumulation of many actions over an extended period of time. What actions cause a degradation in performance, and why? This is what we hope to discover with the endurance tests project.

The initial implementation of endurance tests is rather simple: create a test snippet that exercises a function of Firefox, and execute it repeatedly whilst gathering details of the resources in use. Ultimately we may come up with more elaborate tests, but for now it’s important to get a proof of concept in place.
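As a rough sketch, a test snippet might look something like the following – the endurance shared module, its EnduranceManager, and its method names are all assumptions for illustration, not the actual implementation:

// How the endurance module is loaded here is illustrative
var endurance = require("../shared-modules/endurance");

var setupModule = function (module) {
  module.controller = mozmill.getBrowserController();
  module.enduranceManager = new endurance.EnduranceManager(controller);
};

var testOpenAndCloseTab = function () {
  // The manager applies the iteration count and delay from the command line,
  // recording a resource checkpoint at each marked point
  enduranceManager.run(function () {
    enduranceManager.addCheckpoint("opening tab");
    controller.keypress(null, "t", {accelKey: true});
    enduranceManager.addCheckpoint("closing tab");
    controller.keypress(null, "w", {accelKey: true});
  });
};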

There are several components to the endurance tests:

  1. Command line automation script
  2. Test snippets
  3. Resource gathering
  4. Reporting

The number of iterations each snippet repeats can be set on the command line, as well as an optional delay between each iteration. The normal command line options allow for logging and reporting to a Mozmill Dashboard instance.

Triggering the endurance tests currently looks something like this:

./testrun_endurance.py --delay=1000 --iterations=50 --report=http://davehunt.couchone.com/mozmill /Applications/Minefield.app

This will then launch Firefox, run through all of the endurance tests (each one iterating over its test snippet 50 times), and then close Firefox. Because I’ve included a report parameter, the report will also be sent to the Mozmill Dashboard instance. These reports are currently available here.

Here’s a short screencast that demonstrates running the endurance tests:

If you’re interested in following the progress of the endurance tests project, check out the project page or the tracking bug for phase one.

I’m (sort of) going to Selenium Conference 2011!

For a long time now I’ve been chasing the idea of a conference for Selenium. It probably started when I heard about the San Francisco Selenium users meetup group, and was certainly on my mind when I started the London version of the group.

I think it was back in 2009 when Adam Goucher started some initial discussions on when/where such a conference would be held, and although it’s taken a while, now here we are just 34 (there’s an inside joke there) days from Selenium Conference 2011! Okay, so it’s not the first Selenium conference[1], but it’s still pretty awesome!

As if it wasn’t exciting enough that I’ll be attending, and finally meeting some of the people I’ve had the pleasure of working with over the last few years, I will also be giving a presentation on automating canvas using Selenium! For me this is equal parts exciting and terrifying, as I’m not a confident speaker, but it does help that I’ll be co-presenting with my good friend (and old colleague) Andy Smith. It also helps that together we have created a presentation and demo that I’m really pleased with and excited to show off!

Unfortunately, it turns out that the Mozilla all-hands will clash with the conference. I have to say I’m really frustrated about this, although I do understand organising these events is not easy, and that it would have been a struggle for me to fit in a longer trip. So the plan now is for me to hire a car and drive between San Francisco and Mountain View for the various planned activities.

Today the speaker schedule was announced, and it turns out that we will be the first talk after Jason Huggins gives the opening keynote. Part of this fills me with fear, as Jason will be a tough act to follow and any technical issues might not have been ironed out. On the other hand, at least we will get the nervous job of presenting out of the way early and can then relax into the event.

If you’re planning on going to Selenium Conference you can get your tickets here, or if you’re interested in our presentation then we’ll be giving a preview of it at the London Selenium users meetup group on March 23rd, which you can sign up for here.

[1] The first Selenium conference was Selenium Camp, which took place last weekend. Here’s a nice writeup from David Burns who was invited along to give the opening talk.

Mozilla in London for Selenium Meetup #3

On Wednesday the third London Selenium meetup went ahead. Attendance was good considering there was a London Underground strike in action, although the strike had clearly made an impact. I arrived at midday and enjoyed a chilled-out afternoon at Google’s offices – much more relaxed than my previous visit, as I wasn’t actually presenting this time!

The event started with a presentation on how Mozilla uses Selenium to test their sites. Stephen Donner described the infrastructure and technologies in place, and explained the benefits of a page object approach to testing.

This was followed by a much-anticipated update from Simon Stewart on the progress of Selenium 2 (beta soon!). We were also fortunate enough to have Jason (Mr Selenium) Huggins at the event, and he even stepped up to answer some questions on the advantages of moving to Selenium 2.

David Burns then demonstrated a much-updated version of his GTAC 2009 presentation on automating the gathering of client-side performance metrics – now using Selenium 2! There was also a peek into using the bleeding-edge WebTimings API.

In my personal favourite presentation of the evening (probably because it’s my new area of work), Henrik Skupin gave a demonstration of how Mozilla are approaching crowdsourcing their test automation with the upcoming Mozmill Crowd testing add-on for Firefox. There were definitely a few ‘wow’s from the audience when Henrik ran the tests.

Something I found very encouraging was the quality of questions coming from the audience. I’d like to thank those who came; it’s great to get everyone together, and really great to recognise people from previous events!

This was also the first event where the #ldnse hashtag was actively used on Twitter, which is also encouraging. After LDNSE2 I was considering trying to find someone else to continue the events, but I’m glad I didn’t as I’m now really looking forward to organising LDNSE4! Our frequency at the moment is about one every six months, so I’ll be looking to at least keep this regular.

Thanks again to everyone for coming, to all of our presenters and contributors, and to Google for hosting again!

Update: slides/videos/blogs available below.

YouTube channel (all the videos):
http://www.youtube.com/user/londonselenium

How Mozilla Uses Selenium – Stephen Donner
Slides: http://www.slideshare.net/stephendonner/selenium-londonmeetup-5671730
Video (Part 1): http://www.youtube.com/watch?v=Kvd_TIxLziI
Video (Part 2): http://www.youtube.com/watch?v=ATtXDuUlt9Q

Update on Selenium 2 – Simon Stewart & Jason Huggins
Video (Part 1): http://www.youtube.com/watch?v=AYJMct82YXg
Video (Part 2): http://www.youtube.com/watch?v=HYSJUSI3_VU

Client-side profiling with Selenium 2 – David Burns
Slides: http://prezi.com/dgqpq7bywuin/client-side-profiling-with-selenium-2/
Video (Part 1): http://www.youtube.com/watch?v=2TSJHJfbOHE
Video (Part 2): http://www.youtube.com/watch?v=NrvN8HwmpQ4

Crowd-sourcing Automated Firefox UI Testing – Henrik Skupin
Slides: http://www.slideshare.net/hskupin/crowdsourced-automated-firefox-ui-testing
Video (Part 1): http://www.youtube.com/watch?v=O8NaG07NoLc
Video (Part 2): http://www.youtube.com/watch?v=TIfH5Bku20U
Blog: http://www.hskupin.info/2010/11/19/mozmill-crowd-talk-at-selenium-meetup-3-in-london/