Endurance tests demonstrate Firefox’s memory usage improvements

Thanks to the amazing efforts of the MemShrink project, Firefox’s memory usage is seeing some great improvements. In particular, Firefox 7 will be much more efficient with memory than the current version. As endurance tests monitor resources such as memory, it makes sense for us to work together to ensure that we’re moving in the right direction, and that we don’t regress in any of these areas.

At this point there are only five endurance tests, and although these can be run with many hundreds of iterations in order to seek out memory leaks in the tested areas, they do nothing to simulate a user. It was suggested that we have a special endurance test similar to Stuart Parmenter’s Mem Buster test.

Creating an initial version of this new test did not take long. Instead of opening sites in new windows I open them in tabs, and the number of sites opened is controlled by iterations and micro-iterations. I also increased the number of sites so we’d be hitting the same ones less often, and based this new list on Alexa’s top sites. Once I added handling for the modal dialogs that some sites caused to be displayed, I was able to get consistent results.

This test would appear to be similar to Talos tp5 in that it loads sites from Alexa’s index; however, we’re not measuring how long each site takes to load. Instead, we move on to the next site after a delay specified on the command line. I have kept the same delay as the original Mem Buster test, which is 3 seconds.
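For illustration, here is a minimal Python sketch of the structure of that loop. The real test is a Mozmill (JavaScript) snippet, so the site list, helper names, and checkpoint function below are assumptions for the example rather than the actual implementation.

import time

# Hypothetical sketch only: the real Mem Buster Endurance Test is a Mozmill
# (JavaScript) test snippet; names and details here are illustrative.
SITES = ["http://www.google.com/", "http://www.wikipedia.org/"]  # stands in for the Alexa-based list

def mem_buster(iterations, micro_iterations, delay_seconds, load_in_tab, checkpoint):
    """Cycle through the site list, pausing for a fixed delay rather than timing page loads."""
    index = 0
    for _ in range(iterations):
        for _ in range(micro_iterations):
            load_in_tab(SITES[index % len(SITES)])  # open the next site in a tab
            time.sleep(delay_seconds)               # wait (3 seconds by default) instead of measuring load time
            checkpoint()                            # record memory metrics for the dashboard
            index += 1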

After running the Mem Buster Endurance Test five times across five versions of Firefox, I found the results to clearly reflect the MemShrink efforts. Although the memory consumption varies somewhat for each run, the general downward trend is unmistakable.

In the following charts you can see the improvement in memory usage between Firefox 4 & 5. These can be directly compared as the endurance tests were measuring the same metrics (allocated memory & mapped memory).

Charts showing allocated and mapped memory usage in Firefox 4 & 5

In Firefox 6 there were several improvements to memory reporting, and the endurance tests were updated to record new metrics (explicit memory & resident memory). You can see in the following charts that explicit memory usage in Firefox 7 is roughly half that of Firefox 6! It appears that this has increased in Firefox 8, which will require some further investigation. The resident memory has continued to decrease in each version.

Charts showing explicit and resident memory usage in Firefox 6, 7, & 8

You can follow the progress of the Mem Buster Endurance Test in Bugzilla. Full reports from the test runs used in this blog post can be found here.

Update: It appears that the explicit memory calculated for Firefox 7 on Mac was artificially low. This explains the slight increase in Firefox 8. If you’re interested you can read further details on Bugzilla.

Micro-iterations in Endurance Tests

Last week micro-iterations landed in Mozmill Endurance Tests. These allow tests to accumulate resources during an iteration. Previously this was achieved by leaving the test snippet in a different state from how it started, allowing the iterations themselves to accumulate. The problem with this is that such accumulating tests have a very different pattern from other tests, which clean up before ending the iteration.

To solve this we decided to add a micro-iteration parameter and to use it to loop within an iteration. An example use for this is the new tab tests. Now, if you specify 5 iterations and 10 micro-iterations then these tests will open 10 new tabs, close them, and repeat that 5 times.
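As a rough sketch of how the two levels nest (Python for illustration only; the open_tab, close_all_tabs, and checkpoint helpers are hypothetical stand-ins for the real Mozmill shared modules):

def tab_endurance(iterations, micro_iterations, open_tab, close_all_tabs, checkpoint):
    # Micro-iterations accumulate resources within an iteration; each
    # iteration then cleans up so the next one starts from the same state.
    for _ in range(iterations):
        for _ in range(micro_iterations):
            open_tab()       # accumulate: one more tab per micro-iteration
            checkpoint()     # record resource usage at this point
        close_all_tabs()     # clean up before the next iteration

With 5 iterations and 10 micro-iterations this opens 10 tabs, closes them all, and repeats 5 times, matching the example above.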

The endurance tests documentation has been updated with details on writing and running tests with micro-iterations.

Endurance Tests in Firefox 6

One of the features of the upcoming Firefox 6 is an improvement to the handling and reporting of memory resources. As you can probably imagine, this is very applicable to the endurance tests project. As a result of the changes, running the endurance tests against preview builds of Firefox 6 failed to gather any metrics at all.

I’m pleased to announce that as of yesterday, the endurance tests now support Firefox 6! One of the main differences you will see is that we’re no longer gathering mapped/allocated memory, and are instead gathering explicit/resident memory, which we expect to provide much more useful results. You don’t need to do anything to get the latest changes; just run the tests as described here (using the command line) or here (using Mozmill Crowd).

If you’re interested, here are the relevant bugs:

  • Bug 633653 – Revamp about:memory
  • Bug 657327 – Merge the “mapped” and “heap used” trees, and make the tree flatter
  • Bug 656869 – No memory results on endurance testrun with Nightly 6.0a1
  • Bug 657508 – Update dashboard to display endurance tests results from Firefox 6.0

Running endurance tests with Mozmill Crowd

Ahead of our Mozmill Crowd testday last Friday we made some changes to the endurance tests, including enabling endurance test runs within Mozmill Crowd! Running the endurance tests is now even easier – simply install the Mozmill Crowd extension, and in just a few clicks the tests will be running. We also updated the endurance dashboard reports and pushed them to our Mozmill Crowd report server.

I’ve created a short screencast that demonstrates installing Mozmill Crowd, running the endurance tests, and reviewing the results:

If you want to run with add-ons installed then you’ll still need to use the command line for now (support in Mozmill Crowd is planned).

It’s also important to note that delay is now specified in seconds, and not in milliseconds.

Automated Firefox tests with add-ons installed

Mozmill has a feature that allows the tester to install an add-on during the test run. Until recently this was only used by one of our automated testruns, which specifically tests the installed add-on rather than Firefox itself.

With the recent development on the endurance tests project, it has been necessary to take more advantage of this feature. A bug was reported where, with Adblock Plus (our most popular add-on) installed, memory usage increased rapidly when navigating a web page and none of the memory was released. To start investigating this I created a very basic (and specific) test for the site mentioned in the bug report, and simply hacked something together based on the existing add-ons testrun. A short time later the need to run the endurance tests with multiple add-ons installed came up, so I hacked some more to get that in place too. Rather than keep these hacks around, it made sense to allow testers to specify add-ons to be installed during any of our testruns, so I started work on the necessary patches.

As a result, testers can now run any of our automation scripts with one or more add-ons installed simply by specifying the --addons command line parameter. To install multiple add-ons, repeat the parameter. The argument can be either a path on your machine or a URL on a web/FTP server; in the latter case the add-on will be downloaded to a temporary location before the testrun and removed at the end. The latest version of Mozmill (1.5.2) also disables the compatibility check, meaning that we can run tests with add-ons that are not marked as compatible with the version of Firefox in use.

An example of running the endurance tests with two add-ons installed:

./testrun_endurance.py --addons=https://addons.mozilla.org/firefox/downloads/latest/748/addon-748-latest.xpi --addons=/Users/dave/Downloads/noscript.xpi --delay=1000 --iterations=10 /Applications/Firefox.app/
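For the curious, here is a rough Python sketch of how a testrun script could handle the repeated --addons parameter, including fetching remote add-ons to a temporary location. It is illustrative only, not the actual Mozmill automation code, and the function names are assumptions.

import argparse
import os
import tempfile
import urllib.request

def parse_addons(argv=None):
    parser = argparse.ArgumentParser()
    # Repeating --addons appends each value, so several add-ons can be installed in one run.
    parser.add_argument("--addons", action="append", default=[],
                        help="path or URL of an add-on to install (repeatable)")
    return parser.parse_args(argv).addons

def stage_addons(addons):
    """Return local paths for each add-on, downloading remote ones to a temporary folder."""
    tmpdir = tempfile.mkdtemp()
    local_paths = []
    for addon in addons:
        if addon.startswith(("http://", "https://", "ftp://")):
            target = os.path.join(tmpdir, os.path.basename(addon))
            urllib.request.urlretrieve(addon, target)  # fetch before the testrun starts
            local_paths.append(target)
        else:
            local_paths.append(addon)
    return local_paths, tmpdir  # the caller removes tmpdir once the testrun finishes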

I’m (sort of) going to Selenium Conference 2011!

For a long time now I’ve been chasing the idea of a conference for Selenium. It probably started when I heard about the San Francisco Selenium users meetup group, and was certainly on my mind when I started the London version of the group.

I think it was back in 2009 when Adam Goucher started some initial discussions on when/where such a conference would be held, and although it’s taken a while, now here we are just 34 (there’s an inside joke there) days from Selenium Conference 2011! Okay, so it’s not the first Selenium conference[1], but it’s still pretty awesome!

As if it weren’t exciting enough that I’ll be attending, and finally meeting some of the people I’ve had the pleasure of working with over the last few years, I will also be giving a presentation on automating canvas using Selenium! For me this is equal parts exciting and terrifying as I’m not a confident speaker, but it does help that I’ll be co-presenting with my good friend (and old colleague) Andy Smith. It also helps that together we have created a presentation and demo that I’m really pleased with and excited to show off!

Unfortunately, it turns out that the Mozilla all-hands will clash with the conference. I have to say I’m really frustrated about this, although I do understand that organising these events is not easy, and that it would have been a struggle for me to fit in a longer trip. So the plan now is for me to hire a car and drive between San Francisco and Mountain View for the various planned activities.

Today the speaker schedule was announced, and it turns out that we will be the first talk after Jason Huggins gives the opening keynote. Part of this fills me with fear, as Jason will be a tough act to follow and any technical issues might not have been ironed out. On the other hand, at least we will get the nerve-racking job of presenting out of the way early and can then relax into the event.

If you’re planning on going to Selenium Conference you can get your tickets here, or if you’re interested in our presentation then we’ll be giving a preview of it at the London Selenium users meetup group on March 23rd, which you can sign up for here.

[1] The first Selenium conference was Selenium Camp, which took place last weekend. Here’s a nice writeup from David Burns who was invited along to give the opening talk.

Mozilla in London for Selenium Meetup #3

On Wednesday, the third London Selenium meetup went ahead. Attendance was good considering there was a London Underground strike in effect; however, it had clearly made an impact. I arrived at midday and enjoyed a chilled out afternoon at Google’s offices – much more relaxed than my previous visit, as I wasn’t actually presenting this time!

The event started with a presentation on how Mozilla uses Selenium to test their sites. Stephen Donner described the infrastructure and technologies in place, and explained the benefits of a page object approach to testing.

This was followed by a much-anticipated update from Simon Stewart on the progress of Selenium 2 (beta soon!). We were also fortunate enough to have Jason (Mr Selenium) Huggins at the event, and he even stepped up to answer some questions on the advantages of moving to Selenium 2.

David Burns then demonstrated a much-updated version of his GTAC 2009 presentation on automating the gathering of client-side performance metrics – now using Selenium 2! There was also a peek at using the bleeding-edge WebTimings API.

In my personal favourite presentation of the evening (probably because it’s my new area of work), Henrik Skupin gave a demonstration of how Mozilla are approaching crowdsourcing their test automation with the upcoming MozMill Crowd Testing addon for Firefox. There were definitely a few ‘wow’s from the audience when Henrik ran the tests.

Something I found very encouraging was the quality of the questions coming from the audience. I’d like to thank those who came; it’s great to get everyone together, and really great to recognise people from previous events!

This was also the first event where the #ldnse hashtag was actively used on Twitter, which is also encouraging. After LDNSE2 I was considering trying to find someone else to continue the events, but I’m glad I didn’t as I’m now really looking forward to organising LDNSE4! Our frequency at the moment is about one every six months, so I’ll be looking to at least keep this regular.

Thanks again to everyone for coming, to all of our presenters and contributors, and to Google for hosting again!

Update: slides/videos/blogs available below.

YouTube channel (all the videos):
http://www.youtube.com/user/londonselenium

How Mozilla Uses Selenium – Stephen Donner
Slides: http://www.slideshare.net/stephendonner/selenium-londonmeetup-5671730
Video (Part 1): http://www.youtube.com/watch?v=Kvd_TIxLziI
Video (Part 2): http://www.youtube.com/watch?v=ATtXDuUlt9Q

Update on Selenium 2 – Simon Stewart & Jason Huggins
Video (Part 1): http://www.youtube.com/watch?v=AYJMct82YXg
Video (Part 2): http://www.youtube.com/watch?v=HYSJUSI3_VU

Client-side profiling with Selenium 2 – David Burns
Slides: http://prezi.com/dgqpq7bywuin/client-side-profiling-with-selenium-2/
Video (Part 1): http://www.youtube.com/watch?v=2TSJHJfbOHE
Video (Part 2): http://www.youtube.com/watch?v=NrvN8HwmpQ4

Crowd-sourcing Automated Firefox UI Testing – Henrik Skupin
Slides: http://www.slideshare.net/hskupin/crowdsourced-automated-firefox-ui-testing
Video (Part 1): http://www.youtube.com/watch?v=O8NaG07NoLc
Video (Part 2): http://www.youtube.com/watch?v=TIfH5Bku20U
Blog: http://www.hskupin.info/2010/11/19/mozmill-crowd-talk-at-selenium-meetup-3-in-london/