Selenium/Ruby/Browser testing user satisfaction survey

The goal of the survey was to get a baseline of user satisfaction with our browser-testing infrastructure and tooling. The target audience was the members of a few mailing lists:


 * wikitech-l
 * engineering
 * qa

The survey was created as a Google Docs form. It had five sections, each with a few questions. The last question in each section was a free-form text field, so participants could leave any feedback.

Most of the questions used a simple five-level linear scale, from 1 to 5, or from :'( to :D.


 * :'( = 1
 * :( = 2
 * :| = 3
 * :) = 4
 * :D = 5

18 people participated in the survey. Individual questions received between 8 and 16 answers.

All details (including all questions) can be found in the Phabricator task T131123.

Jenkins
Participants of the survey are pretty happy with the stability of the mwext-mw-selenium job. Most of them also think it is fast enough. That was not a surprise to me, since I share that view.

What surprised me is that they are also mostly happy with the stability and speed of the selenium* jobs, since some of those jobs fail with hard-to-reproduce errors, and some take hours to run.

Most of the people also know how to use the continuous integration entry points for Ruby (Rake) and JavaScript (Grunt).
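For readers who have not used them: the Rake entry point is typically driven from a Rakefile. Here is a minimal sketch, assuming RSpec and RuboCop are the tools CI runs; the task names are illustrative, not necessarily the exact ones our repositories define:

  # Rakefile: a minimal, illustrative CI entry point.
  require 'rspec/core/rake_task'
  require 'rubocop/rake_task'

  # `bundle exec rake spec` runs the RSpec suite.
  RSpec::Core::RakeTask.new(:spec)

  # `bundle exec rake rubocop` runs the Ruby linter.
  RuboCop::RakeTask.new(:rubocop)

  # Plain `bundle exec rake` runs everything CI runs.
  task default: [:rubocop, :spec]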

We have received two comments.
 * One was thanking us for our work.
 * The other one said that the majority of the failures were related to CI, not to actual failures of the tested code, and that they have trouble debugging and do not understand how CI works.

We are working on making CI more reliable. We will also improve the documentation, with explanations of how CI works and tips for debugging.

Gerrit-triggered Selenium+Jenkins jobs (mwext-mw-selenium, run after patch set submission):

 * Jobs are stable enough (n=16)
 * Jobs are fast enough (n=16)

Time-triggered Selenium+Jenkins jobs (selenium*, daily):

 * Jobs are stable enough (n=16)
 * Jobs are fast enough (n=16)

I know how to use continuous integration entry points:

 * Rake (Ruby) (n=16)
 * Grunt (JavaScript) (n=16)

Ruby
I was surprised to see that slightly more people are happy than unhappy with the Ruby testing tools (mediawiki_selenium, Cucumber, RSpec); I was expecting more people to be unhappy with them. The majority of participants did not have strong feelings either way.
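For readers unfamiliar with these tools, here is what a minimal RSpec example looks like (Greeter is a hypothetical class, used only for illustration):

  # greeter_spec.rb: run with `bundle exec rspec greeter_spec.rb`.
  class Greeter
    def greet(name)
      "Hello, #{name}!"
    end
  end

  RSpec.describe Greeter do
    it 'greets by name' do
      expect(Greeter.new.greet('world')).to eq('Hello, world!')
    end
  end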

We have received three comments.
 * The first one says that Ruby is not the optimal choice for testing tools, and that another language more popular with MediaWiki developers would be a better choice. Some libraries we use are poorly documented.
 * The second one agrees that Ruby is not the best language in our context and suggests that Node.js would be a better choice.
 * The third one says that having fewer languages in the stack would simplify things, but agrees that Ruby has good testing tools, so the decision is not easy.

How do you feel about...

 * Selenium+Ruby framework mediawiki_selenium (n=15)
 * Cucumber (n=14)
 * RSpec (n=15)

JavaScript
The JavaScript testing framework Malu is in its early development, and this survey was probably the first time some of the participants had heard about it. That might explain why :| (or 3) was the most common response about Malu: 11 participants (69%) gave that response, more votes than any single answer received for any other question. Participants were in general positive towards Mocha and QUnit.

We have received two comments.
 * One has not heard about Malu and did not find much documentation when they went looking. No opinion on Mocha. QUnit does the job well, but has its problems.
 * The other is not familiar with Malu, but likes the idea. They prefer Mocha to QUnit, but are not sure if adding another testing framework would cause problems.

How do you feel about...

 * Selenium+JavaScript framework (Malu) (n=16)
 * Mocha (n=15)
 * QUnit (n=15)

Selenium
It surprised me that participants were pretty happy about writing tests, but it was no surprise that the feelings were not the same when it came to fixing failed tests.

We have received one comment.
 * A lot of time is spent fixing tests because the testing environment is not stable and tests are not written in a good way, the latter caused by a lack of understanding of the framework.
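One pattern that helps with badly written tests is an explicit wait, instead of assuming an element is already on the page. A minimal sketch using the selenium-webdriver Ruby gem (the URL and element id below are placeholders):

  require 'selenium-webdriver'

  driver = Selenium::WebDriver.for :firefox
  driver.get 'https://example.org' # placeholder URL

  # Poll for up to 10 seconds for the element to appear instead of
  # failing immediately; this avoids a whole class of hard-to-reproduce
  # timing failures in an unstable testing environment.
  wait = Selenium::WebDriver::Wait.new(timeout: 10)
  element = wait.until { driver.find_element(id: 'content') } # placeholder id
  puts element.text
  driver.quit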

How do you feel about...

 * Writing tests (n=16)
 * Fixing failed tests (n=15)

Getting help
Everybody gets stuck sometimes, so knowing how to get help is important. I do not think our documentation (at mediawiki.org) is good enough or up to date, but participants were happier with it than I expected. It looks like the most helpful channel was IRC, and to some degree Phabricator and the mailing lists.

We have received four comments.
 * The documentation is not great, and the getting started documentation targets Mac. The getting started tutorial should be as simple as possible, with the rest of the documentation moved to other pages.
 * They would like to know more about the testing infrastructure, preferably as a session.
 * It is hard to get help and hard to schedule pair programming; a better process for getting help is needed.
 * Thanking us and saying we are awesome. (Thank you. You are awesome too.)

Rate your experience getting help via the below methods.

 * mediawiki.org (n=16)
 * IRC (n=16)
 * Phabricator (n=16)
 * Mailing list (n=16)

Documentation
Documentation is important. There is never enough of it, and what is out there gets outdated quickly. Still, both our getting started and general documentation got more positive than negative answers.

How do you feel about...

 * Getting started documentation (n=16)
 * Documentation on how to do testing (n=16)

I need help
We have offered training in the past, but we were not sure what type of training people prefer. Based on the answers, participants like pairing, online workshops, and in-person workshops about equally. That is true both for getting started sessions and for advanced sessions (like fixing failed tests).



 * I need help with getting started with testing (n=9)
 * I need help with fixing failed tests (n=8)