Running the Selenium IDE Mozmill tests

A short while ago I posted about the Mozmill tests I’ve created for Selenium IDE; however, I didn’t cover how you can run these tests yourself. Currently I run them manually as needed, to ensure that the nightly Firefox builds have not regressed or introduced changes in any areas that the add-on depends on. We ultimately intend this to be a scheduled job.

I have also added a job on the Selenium continuous integration server that runs the tests against a released version of Firefox. In the future this will test the latest build of Selenium IDE, and will run every time the add-on is built.

In order to run the Selenium IDE tests you will need to have Mercurial and Mozmill installed, which you can do simply by using pip install mercurial mozmill. Once you have these you can clone the mozmill-automation repository, using the following command:

hg clone

Then from the repository directory run the following:

./ --report= --with-untrusted /Applications/

Reports will be sent to our dashboard, as specified by the --report parameter, and are available to view here.

The --target-addons parameter specifies that we only want to run the Selenium IDE tests rather than all of our add-on tests, and the --with-untrusted parameter is required because Selenium IDE is not listed on the official add-ons site and is therefore ‘untrusted’.

The final parameter is the version of Firefox you want to run the tests against. These tests can currently be run against Nightly (7.0), Aurora (6.0), Beta (5.0), as well as the current releases (4.0, 3.6, and 3.5).

Below is a short screencast demonstrating how to run the tests:

With the recent release of Selenium IDE 1.0.11, I was able to push some new tests. These check a few more commands, bringing the total number of tests up to 40. If you’re interested in helping out and you have any questions, then you can either get in touch with me directly, ask in the #selenium IRC channel on Freenode, or post a message to the selenium-developers Google group.

Testing Selenium IDE with Mozmill

Mozmill tests can be written for any Gecko based application, and can therefore be used to test Firefox extensions (add-ons). Since October I have been working on a new suite of tests for the Selenium IDE extension in the hope that we will be able to discover any regressions in new versions of either the add-on or in Firefox itself. Another reason for creating the suite is to demonstrate the ease with which such tests can be written and to encourage add-on developers to create test suites themselves.

Once tests have been created they can be checked into the Mozmill Tests repository. We will soon be running these on a daily basis against nightly Firefox builds and making reports available on our public dashboard.

The Selenium IDE tests currently comprise three major parts:

  1. The shared module (selenium.js) abstracts the tests from the location of elements, provides centralised methods for common tasks, and exposes properties based on the UI.
  2. The checks helper module (checks.js) provides methods for common assertions to avoid duplication across tests.
  3. The tests themselves.

There are currently just 20 tests, which essentially execute Selenium commands and check that they pass or fail as expected. Below is a guide to constructing one of these tests as if you weren’t using the shared module or checks helper module. The test verifies that the assertText command executes and passes as expected.

First, we open Selenium IDE using the Tools menu item:

Then we clear the current contents of the Base URL field by selecting all of the text in it and hitting the delete key, then type in our test data. You will notice a reference to the getElement method here, which allows us to gather all locators in a single location, giving less duplication and much simpler test maintenance:
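As a rough illustration of that idea (this is a hypothetical sketch, not the actual selenium.js, and the element names are invented), the locator abstraction might look something like this:

```javascript
// Hypothetical locator map; the real selenium.js differs, and these
// element names are made up for illustration.
const LOCATORS = {
  baseURL: "baseURL",        // the Base URL text field
  playButton: "play-button"  // the toolbar button that runs the test
};

// Look up an element by logical name, so individual tests never
// hard-code IDs and a UI change only needs fixing in one place.
function getElement(name) {
  if (!(name in LOCATORS)) {
    throw new Error("Unknown element: " + name);
  }
  return { type: "id", value: LOCATORS[name] };
}
```

A test then asks for getElement("baseURL") rather than embedding the ID itself, so when the add-on’s UI changes, only the map needs updating.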

Now we add three new commands to our Selenium test case by selecting the next available row and typing into the various fields:

With our commands in place, we click the toolbar button to execute the test and wait for the test to complete:

Now that the test has run, we check that the suite progress indicator has the ‘success’ class:
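In the real test this is done with Mozmill’s waitFor against the live DOM; as a rough, self-contained model of the polling (everything below is stubbed purely for illustration), it works like this:

```javascript
// Minimal synchronous model of waitFor-style polling for the 'success'
// class on the suite progress indicator.
function waitFor(condition, attempts = 50) {
  for (let i = 0; i < attempts; i++) {
    if (condition()) return;
  }
  throw new Error("Timed out waiting for condition");
}

// Stub indicator that only reports success once the 'test run' finishes.
let ticksUntilDone = 3;
const suiteProgressIndicator = {
  get className() { return --ticksUntilDone <= 0 ? "success" : "running"; }
};

// Poll until the indicator carries the 'success' class.
waitFor(() => suiteProgressIndicator.className === "success");
```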

We also check that the run counts are correct. The total number of tests run should be 1, and the number of failed tests should be 0:
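A checks-helper sketch for this step (the names are hypothetical; the real checks.js isn’t reproduced here) might be:

```javascript
// Shared assertion for the run/failure counters shown in the IDE,
// so every test doesn't repeat the same two comparisons.
function checkRunCounts(counters, expectedRuns, expectedFailures) {
  if (counters.runs !== expectedRuns) {
    throw new Error("Expected " + expectedRuns + " runs, got " + counters.runs);
  }
  if (counters.failures !== expectedFailures) {
    throw new Error("Expected " + expectedFailures + " failures, got " + counters.failures);
  }
}

// For a single passing test: one run, zero failures.
checkRunCounts({ runs: 1, failures: 0 }, 1, 0);
```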

Now we check that there are no errors in the log console:

Because the command we are testing should pass, we also check that the final command was executed:

Finally, we close Selenium IDE:

As there are a lot of things here that will be shared across several tests, you can see that there would be a lot of duplicated code. This is why we abstract the useful user interface interactions into the shared module and the useful checks into a helper module. A nice side effect of this is that the test becomes much more readable. Below is the same test as above, but calling out to the additional modules:
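Since the original listing isn’t reproduced here, the sketch below shows how such a test might read. Every class and method name is a hypothetical stand-in for selenium.js and checks.js, and the Mozmill controller is replaced by a trivial in-memory model:

```javascript
// Every name below is a made-up stand-in; the Mozmill controller and the
// real Selenium IDE window are replaced by a trivial in-memory model.
class SeleniumIDEModel {
  constructor() {
    this.commands = [];
    this.errorLog = [];
    this.runs = 0;
    this.failures = 0;
  }
  setBaseURL(url) { this.baseURL = url; }
  addCommand(command, target, value) {
    this.commands.push({ command, target, value });
  }
  runTestCase() {
    // Pretend every command passes; a failing command would bump
    // `failures` and write to `errorLog`.
    this.runs = 1;
    this.lastExecutedIndex = this.commands.length - 1;
  }
}

// Shared assertions, in the spirit of the checks helper module.
const checks = {
  runCounts(ide, runs, failures) {
    if (ide.runs !== runs || ide.failures !== failures) {
      throw new Error("Run counts are wrong");
    }
  },
  logIsClean(ide) {
    if (ide.errorLog.length !== 0) throw new Error("Errors in the log console");
  },
  finalCommandExecuted(ide) {
    if (ide.lastExecutedIndex !== ide.commands.length - 1) {
      throw new Error("Test case did not run to completion");
    }
  }
};

// The readable test: set up, run, then hand all checking to the helpers.
function testAssertTextPasses() {
  const ide = new SeleniumIDEModel();
  ide.setBaseURL("http://example.com/");
  ide.addCommand("open", "/", "");
  ide.addCommand("assertText", "css=h1", "Example Domain");
  ide.addCommand("echo", "done", "");
  ide.runTestCase();
  checks.runCounts(ide, 1, 0);
  checks.logIsClean(ide);
  checks.finalCommandExecuted(ide);
}

testAssertTextPasses();
```

The setup and checking details live in one place each, which is what makes the real tests short enough to read at a glance.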

As mentioned, there are currently only 20 automated tests for Selenium IDE, and we need more! If you’re interested in helping out and you have any questions, then you can either get in touch with me directly, ask in the #selenium IRC channel on Freenode, or post a message to the selenium-developers Google group.

I’m (sort of) going to Selenium Conference 2011!

For a long time now I’ve been chasing the idea of a conference for Selenium. It probably started when I heard about the San Francisco Selenium users meetup group, and was certainly on my mind when I started the London version of the group.

I think it was back in 2009 when Adam Goucher started some initial discussions on when/where such a conference would be held, and although it’s taken a while, here we are, just 34 (there’s an inside joke there) days from Selenium Conference 2011! Okay, so it’s not the first Selenium conference[1], but it’s still pretty awesome!

As if it wasn’t exciting enough that I’ll be attending, and finally meeting some of the people I’ve had the pleasure of working with over the last few years, I will also be giving a presentation on automating canvas using Selenium! For me this is equal parts exciting and terrifying as I’m not a confident speaker, but it does help that I’ll be co-presenting with my good friend (and old colleague), Andy Smith. It also helps that together we have created a presentation and demo that I’m really pleased with and excited to show off!

Unfortunately, it turns out that the Mozilla all-hands will clash with the conference. I have to say I’m really frustrated about this, although I do understand organising these events is not easy, and that it would have been a struggle for me to fit in a longer trip. So the plan now is for me to hire a car and drive between San Francisco and Mountain View for the various planned activities.

Today the speaker schedule was announced, and it turns out that we will be the first talk after Jason Huggins gives the opening keynote. Part of this fills me with fear as Jason will be a tough act to follow, and any technical issues might not have been ironed out. On the other hand at least we will get the nervous job of presenting out of the way early and then relax into the event.

If you’re planning on going to Selenium Conference you can get your tickets here, or if you’re interested in our presentation then we’ll be giving a preview of it at the London Selenium users meetup group on March 23rd, which you can sign up for here.

[1] The first Selenium conference was Selenium Camp, which took place last weekend. Here’s a nice writeup from David Burns who was invited along to give the opening talk.

Mozilla in London for Selenium Meetup #3

On Wednesday, the third London Selenium meetup went ahead. Attendance was good considering there was a London Underground strike in action, though it had clearly made an impact. I arrived at midday and enjoyed a chilled out afternoon at Google’s offices – much more relaxed than my previous visit as I wasn’t actually presenting this time!

The event started with a presentation on how Mozilla uses Selenium to test their sites. Stephen Donner described the infrastructure and technologies in place, and explained the benefits of a page object approach to testing.

This was followed by a much-anticipated update from Simon Stewart on the progress of Selenium 2 (beta soon!). We were also fortunate enough to have Jason (Mr Selenium) Huggins at the event, and he even stepped up to answer some questions on the advantages of moving to Selenium 2.

David Burns then demonstrated a much-updated version of his GTAC 2009 presentation on automating the gathering of client-side performance metrics – now using Selenium 2! There was also a peek into using the bleeding-edge WebTimings API.

In my personal favourite presentation of the evening (probably because it’s my new area of work), Henrik Skupin gave a demonstration of how Mozilla are approaching crowdsourcing their test automation with the upcoming MozMill Crowd Testing addon for Firefox. There were definitely a few ‘wow’s from the audience when Henrik ran the tests.

Something I found very encouraging was the quality of questions coming from the audience. I’d like to thank those that came, it’s great to get everyone together, and really great to recognise people from previous events!

This was also the first event where the #ldnse hashtag was actively used on Twitter, which is also encouraging. After LDNSE2 I was considering trying to find someone else to continue the events, but I’m glad I didn’t as I’m now really looking forward to organising LDNSE4! Our frequency at the moment is about one every six months, so I’ll be looking to at least keep this regular.

Thanks again to everyone for coming, to all of our presenters and contributors, and to Google for hosting again!

Update: slides/videos/blogs available below.

YouTube channel (all the videos):

How Mozilla Uses Selenium – Stephen Donner
Video (Part 1):
Video (Part 2):

Update on Selenium 2 – Simon Stewart & Jason Huggins
Video (Part 1):
Video (Part 2):

Client-side profiling with Selenium 2 – David Burns
Video (Part 1):
Video (Part 2):

Crowd-sourcing Automated Firefox UI Testing – Henrik Skupin
Video (Part 1):
Video (Part 2):