Wikimedia mobile engineering/Mobile QA/Spec

Mobile QA can be broken down into three categories: automation, crowd sourcing, and device testing. No single solution can meet the needs of the WMF; a robust QA plan blends all three in order to support the work the Mobile team is doing.

= Automation =

Automation includes anything that can be done without human intervention, covering both unit tests and browser-based tests. Automation works best when you already know what might break; it works poorly at finding issues you can't anticipate.

== Unit Tests ==

 * Jenkins - Already in use, together with Ant, to build the Wikipedia Android app
 * TestSwarm - Used by the WMF, but not for mobile-specific projects
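
To make the unit-test layer concrete, here is a minimal sketch of the kind of test Jenkins could run on every build. The `normalize_title` helper and its behavior are purely illustrative assumptions, not code from the app; real tests would live in the Android (Java) or TestSwarm (JavaScript) codebases.

```python
import unittest


def normalize_title(title):
    """Hypothetical helper for illustration only: normalize a page title
    by stripping whitespace, replacing spaces with underscores, and
    capitalizing the first letter. Not taken from the app codebase."""
    title = title.strip().replace(" ", "_")
    return title[:1].upper() + title[1:] if title else title


class NormalizeTitleTest(unittest.TestCase):
    """Small, fast checks like these are what an automation layer
    can run on every commit without human intervention."""

    def test_spaces_become_underscores(self):
        self.assertEqual(normalize_title("san francisco"), "San_francisco")

    def test_surrounding_whitespace_stripped(self):
        self.assertEqual(normalize_title("  main page  "), "Main_page")

    def test_empty_title_passes_through(self):
        self.assertEqual(normalize_title(""), "")
```

A runner such as `python -m unittest` (or JUnit/QUnit in the project's own languages) would execute tests like these automatically on each Jenkins build.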

== Browser Tests ==
We're currently not doing anything in this category. Some potential options include:


 * Selenium - Now has automated testing support for Android
 * Perfecto/DeviceAnywhere - Supports scripted tests
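
As a sketch of what a Selenium-driven browser test might look like, the snippet below loads a page on the mobile site and checks its heading. Everything here is an assumption for illustration: the Python bindings, the Firefox driver, and the `h1` check; a real suite would target Android browsers or a remote device service.

```python
def mobile_url(title):
    """Build a URL for a page on the English mobile site, using the
    standard /wiki/Title path scheme."""
    return "https://en.m.wikipedia.org/wiki/" + title.replace(" ", "_")


def run_smoke_test():
    """Sketch of a Selenium browser test. Requires the selenium package
    and a local browser driver; the locator and driver choice here are
    illustrative, not a prescribed setup."""
    # Imported inside the function so mobile_url() stays usable
    # without Selenium installed.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get(mobile_url("San Francisco"))
        heading = driver.find_element("tag name", "h1")
        assert "San Francisco" in heading.text
    finally:
        driver.quit()
```

Tests in this style automate the browser itself, so they catch rendering and navigation breakage that unit tests cannot.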

= Crowd Sourcing =

Crowd sourcing includes any outreach to real users who help debug our software. It is exceptionally good at testing the new-user experience and at finding problems we can't anticipate, and it doubles as a great outreach tool. We're currently making use of some crowd-sourced options but could be doing even more.


 * uTest - Not currently used, but would expand our base of testing users
 * Mailing lists - Actively using mobile-l@
 * Twitter - Actively using @WikimediaMobile
 * Signpost - Not currently used but could be effective
 * m. app (advertising app testing) - Not used
 * m. beta (advertising beta mobile web testing) - Beta present but poorly advertised

= Device Testing =

On-device testing is best for reproducing issues, but it's slow and cumbersome: you either need the physical device in hand or have to work through a remote service.


 * Perfecto/DeviceAnywhere - We have an active account with Perfecto
 * Real devices - We have a collection of real devices in the office