Joining Web QA

I’m excited to announce that as of last week I am officially on Mozilla’s Web QA team! Despite working closely with the team since I started at Mozilla over four years ago, I’ve always reported to another team. Originally I had a hybrid role, where I reported to the Director of QA and assisted with automation of both Web and Desktop products. Later, we formed the QA Automation Services team, which existed to provide automation services to any QA team. This was mostly absorbed into the A-Team, which shares a lot of the same goals but is not just focused on QA. During my time with the A-Team a lot of my work started to shift towards Firefox OS, so it made sense during an organisational shift towards vertical stacks for me to officially join the Firefox OS automation team.

Many times since I started at Mozilla I’ve felt that I had more to offer the Web QA team, and I’ve always been a keen contributor to the team. I can’t say it was an easy decision to move away from Firefox OS, as it’s a terrifically exciting project, but the thought of joining Web QA just had me bursting with enthusiasm! At a time when there have been a number of departures, I’m pleased to say that I feel like I’m coming home. Look out web – you’re about to get even more automated!

Last week Stephen Donner interviewed me on being a Mozilla Web QA contributor. You can read the blog post over on the Web QA blog.

I’d like to take this opportunity to thank Stephen Donner, Matt Evans, Clint Talbert, Jonathan Griffin, and James Lal for their support and leadership. I’ve made so many great friendships at Mozilla, and with our Whistler work week just around the corner I’m so looking forward to catching up with many of them in person!

Case Conductor pytest plugin proposal

Case Conductor is the new test case management tool being developed by Mozilla to replace Litmus. I’ve recently been thinking about how we can improve the relationship between our automated tests and our test case management, and want to share my thoughts on how a plugin could help our Web QA team do just that.

Annotating tests

Currently our automated tests include a docstring referencing the Litmus ID. This is inconsistent (some even include a full URL to the test case) and hard to do anything with programmatically. It’s important to reference the test case, but I see this as the bare minimum.

Current method
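To illustrate, here’s roughly what that looks like today. The test names and IDs below are invented for the example, and the URL in the second docstring is truncated rather than a real Litmus link:

    def test_that_search_returns_results():
        """Litmus 13463"""
        pass  # test body omitted

    def test_that_advanced_search_returns_results():
        """https://litmus.mozilla.org/..."""  # some tests embed the full URL
        pass  # test body omitted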

I would prefer to use a custom pytest mark, which would accept a single ID or a list. By doing this we can cleanly use the IDs without having to write a regex or conform to a strict docstring format.

Proposed method
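Here’s a sketch of how that mark might look. The mark name (caseconductor) and the IDs are placeholders; either a single ID or a list of IDs could be passed:

    import pytest

    @pytest.mark.caseconductor(13463)
    def test_that_search_returns_results():
        pass  # test body omitted

    @pytest.mark.caseconductor([13464, 13465])
    def test_that_advanced_search_returns_results():
        pass  # test body omitted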

Submitting results

There’s already an API in development for Case Conductor, so it would be great to interface directly with it during automated test runs. We could, for example, prompt the user for the product, test cycle, and either a test run or a collection of test suites. With these details it should be possible for every automated run to create a new test run in Case Conductor and mark the linked test cases as passed/failed depending on the result. In addition to the existing reports, we can then also offer a link to the Case Conductor report for the relevant test run.
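As a very rough sketch, and assuming the caseconductor mark above, the result-collection side of such a plugin might look like the following conftest.py. The hooks are standard pytest hooks; the actual call to the Case Conductor API is left as a comment because the API is still in development:

    import pytest

    # Maps a Case Conductor test case ID to True (passed) or False (failed).
    _results = {}


    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        outcome = yield
        report = outcome.get_result()
        if report.when != 'call':
            return
        marker = item.get_closest_marker('caseconductor')
        if marker is None:
            return
        ids = marker.args[0]
        if not isinstance(ids, (list, tuple)):
            ids = [ids]
        for case_id in ids:
            _results[case_id] = report.passed


    def pytest_sessionfinish(session, exitstatus):
        # This is where the plugin would talk to the Case Conductor API: create
        # a new test run for the selected product and cycle, then mark each
        # linked test case as passed or failed. For now, just show what would
        # be submitted.
        for case_id, passed in sorted(_results.items()):
            print('case %s: %s' % (case_id, 'passed' if passed else 'failed'))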

Result reports

We could also use the Case Conductor plugin to enhance the existing HTML report generated by the plugin already in use by Web QA. For example, we could link to the Case Conductor report for the test run, and provide a link for each test case. In the following mockup the new details are highlighted.
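As a rough idea, the per-test links could be derived from the same mark. The URL pattern here is a guess and would need to match however Case Conductor ends up exposing test cases:

    def case_conductor_links(item, base_url):
        """Return Case Conductor case URLs for a collected pytest item."""
        marker = item.get_closest_marker('caseconductor')
        if marker is None:
            return []
        ids = marker.args[0]
        if not isinstance(ids, (list, tuple)):
            ids = [ids]
        # Hypothetical URL pattern; adjust once the Case Conductor UI settles.
        return ['%s/cases/%s/' % (base_url.rstrip('/'), case_id) for case_id in ids]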

Coverage reports

By knowing all test cases in the specified product/cycle/run/suites we can report on the automated coverage. This could be used to set goals such as ‘automate 75% of product A’s tests’, which suddenly become a lot easier to measure. Here’s another mockup of how this command line report may look.

We could also use tags to indicate test cases that aren’t worth automating so the coverage is more realistic.
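Here’s a sketch of how that coverage number might be calculated, assuming we can fetch every case (with its tags) for the selected product/cycle/run/suites. The tag name used to exclude cases is just a placeholder:

    def coverage_summary(items, all_cases, exclude_tag='do-not-automate'):
        """items: collected pytest items; all_cases: iterable of (case_id, tags)."""
        automated = set()
        for item in items:
            marker = item.get_closest_marker('caseconductor')
            if marker is None:
                continue
            ids = marker.args[0]
            automated.update(ids if isinstance(ids, (list, tuple)) else [ids])
        # Ignore cases explicitly tagged as not worth automating.
        candidates = set(case_id for case_id, tags in all_cases
                         if exclude_tag not in tags)
        covered = len(automated & candidates)
        total = len(candidates)
        percent = 100.0 * covered / total if total else 0.0
        return 'automated coverage: %d/%d (%.0f%%)' % (covered, total, percent)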

Command options

I would propose several command line options in order to cover the above-mentioned functionality. In the form of output from --help, here are my suggestions:
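Since the plugin doesn’t exist yet, everything below is a placeholder rather than a final proposal; a pytest_addoption sketch gives an idea of the options that --help would then list under a case conductor group:

    def pytest_addoption(parser):
        group = parser.getgroup('caseconductor', 'case conductor')
        group.addoption('--cc-url', metavar='URL',
                        help='base URL of the Case Conductor instance')
        group.addoption('--cc-product', metavar='NAME',
                        help='product to report results against')
        group.addoption('--cc-cycle', metavar='NAME',
                        help='test cycle to report results against')
        group.addoption('--cc-run', metavar='NAME',
                        help='existing test run to use (omit to create a new one)')
        group.addoption('--cc-suites', metavar='NAMES',
                        help='comma separated list of test suites')
        group.addoption('--cc-coverage', action='store_true',
                        help='report automated coverage of the selected test cases')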

Some of these options would be mandatory, but if omitted the plugin could fail with a useful message. For example, if the product was not provided then a list of available products could be returned. The same could be done for test cycles and test runs.
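Here’s a sketch of that behaviour for the product option, assuming the placeholder options registered above; available_products is a stand-in for an API call that isn’t designed yet:

    import pytest


    def available_products(base_url):
        """Hypothetical stand-in for querying the Case Conductor API."""
        return []


    def pytest_configure(config):
        url = config.getoption('--cc-url')
        if url and not config.getoption('--cc-product'):
            raise pytest.UsageError(
                'No product specified (--cc-product). Available products: %s'
                % ', '.join(available_products(url)))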