Discussion:
Proposed QA rules for apps.meego.com
Andrew Flegg
2011-05-29 17:23:16 UTC
Permalink
FollowUp-To: meego-community (please)

On Wednesday lunchtime at the Conference, there was an impromptu BoF
about how QA should work in the Community OBS. Henri will, no
doubt, be following up with more detailed minutes; but one of the
actions I took away was to draw up an initial set of criteria for
discussion.

Present: Henri Bergius (bergie)
Niels Breet (X-Fade)
Ferenc Szekely (feri)
Andrew Flegg (Jaffa)
Ed Page (epage)
Randall Arnold (Texrat)
Dawn Foster (geekygirldawn)
+ others?

Suggestion:
http://wiki.meego.com/MeeGo_Apps/QA

The aims are:

* To simplify the criteria down as far as possible, compared with
maemo.org's Extras-testing QA
* To make sure the system automates as much as possible
* To provide a checklist approach, without mandating that everyone
investigate every aspect (not everyone needs to, or can, check for
power-management issues, for example)

Separately, Ed Page raised sensible points[1] about being able to
promote an earlier version alongside a current testing release. This
could be done by having multiple versions of the application in the
repository; or by pausing testing of the current one and resuming it
after the urgent fix has gone through.

Comments & changes welcome, as ever.

Cheers,

Andrew


[1] http://eopage.blogspot.com/2011/05/meego-conference-first-impressions.html
--
Andrew Flegg -- mailto:andrew-po+***@public.gmane.org  |  http://www.bleb.org/
Attila Csipa
2011-05-29 19:17:08 UTC
Permalink
Post by Andrew Flegg
actions I got was to draw up an initial set of criteria for
discussion.
A remark: the questions should generally not place the burden of
interpretation on the tester. For instance, "Should this be in apps?"-style
questions can be very opinionated, because different people can (and will)
use different criteria, as we've seen in Extras. This also applies to terms
like 'obvious', 'appropriate', etc. I understand that you want to avoid long,
boring paragraphs about what is and isn't OK and just rely on common sense,
but when devices start bringing in people without a developer/engineer
mindset, you will see all sorts of interpretations of those generic rules.

Best regards,
Attila Csipa
Andrew Flegg
2011-05-29 19:40:28 UTC
Permalink
Post by Attila Csipa
Post by Andrew Flegg
actions I got was to draw up an initial set of criteria for
discussion.
A remark, the questions should generally not place the burden of
interpretation on the tester. I.e. "Should this be in apps" style questions
can be very opinionated because different people can/will use different
criteria for it, as we've seen in Extras.
Indeed. And, in that instance, it's purposeful. We've seen with
maemo.org Extras a set of rules that grew organically - without
consultation or consensus - and were applied inconsistently. The
intent of that question is to make the opinionated judgement the most
important part, with as few qualitative measures as possible
(automating as many as we can), and without having everyone rate the
same things.

The problem with the current prescriptive set of rules is that most
testers either ignore them or - unlikely, IMHO - repeat the work of
others in checking optification, bug tracking, feature completeness,
uninstallability, etc.

Too many prescriptive rules lead to overstretched testers and an
ever-increasing backlog of apps. I might not be able to test (for
example) an app's interaction with a particular piece of hardware,
but if I see a report from someone I trust saying it works, I might
still vote "it should be in Extras".
Post by Attila Csipa
This also applies to the terms 'obvious', 'appropriate', etc. I
understand that you want to avoid long boring paragraphs of what is and
isn't OK and just use common sense, but when devices start bringing in
people without a developer/engineer mindset you will see all sorts
of interpretations of those generic rules.
Agreed, there still needs to be a wider description of the criteria
besides this checklist - although without seeing the client, it's
difficult to know. What's "user-facing" for example? In Maemo it's
"Section: user/*"; but this'll be different for MeeGo's RPM (and
possibly Harmattan). Similarly, "icon"; where does the
pre-installation icon come from?
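
To make the ambiguity concrete (my illustration, not something from the
BoF): on Maemo, "user-facing" is mechanically detectable from the Debian
packaging metadata, whereas a stock RPM spec has no direct equivalent, so
apps.meego.com would need to define its own convention:

```
# Maemo (debian/control): "user-facing" means a Section under user/
Section: user/utilities

# RPM spec files have no Section: field. The closest legacy tag is
# Group:, but its values aren't standardised across distributions, so
# any apps.meego.com rule based on it would be a new, MeeGo-specific
# convention (this value is hypothetical):
Group: Applications/Utilities
```

That, or a dedicated metadata marker, is the kind of thing the client
would need to settle before "user-facing" can be checked automatically.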

However, I would say that any such descriptions should be shorter than:

http://wiki.maemo.org/Extras-testing/QA_checklist
http://wiki.maemo.org/Extras-testing#Quality_Assurance_criteria

Cheers,

Andrew
--
Andrew Flegg -- mailto:andrew-po+***@public.gmane.org  |  http://www.bleb.org/
Attila Csipa
2011-05-29 22:31:55 UTC
Permalink
Post by Andrew Flegg
Too many prescriptive rules leads to an over selection of testers and
a backlog of ever increasing apps. I might not be able to test (for
example) an app for participating with a particular piece of hardware,
but if I see a report from someone I trust saying it works, I might
vote "it should be in Extras".
Some potential problems with this. One is the issue all too present on
major app stores: most of the subjective feedback is simply rubbish.

The other problem is that we (still) retain the same single-scale
approach, which mixes popularity with quality - say, comparing an app
that has 10 OKs with another that has (for the same testing period)
100 OKs and 100 not-OKs (remember, once we're in subjective land, the
not-OKs might be due to anything).
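
One known way around exactly this single-scale problem is to rank by a
confidence-adjusted approval rate instead of raw vote counts. As a sketch
(my illustration, not anything proposed in this thread), the lower bound
of the Wilson score interval handles the two hypothetical apps above:

```python
import math

def wilson_lower_bound(ok, not_ok, z=1.96):
    """Lower bound of the ~95% Wilson score interval for the OK proportion.

    Penalises small samples: a handful of unanimous OKs scores well, but a
    large, evenly split vote scores poorly, regardless of total volume.
    """
    n = ok + not_ok
    if n == 0:
        return 0.0
    p = ok / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# Attila's example: 10 unanimous OKs vs. 100 OKs + 100 not-OKs
small = wilson_lower_bound(10, 0)    # few votes, all positive
big = wilson_lower_bound(100, 100)   # many votes, evenly split
print(small, big)  # roughly 0.72 vs 0.43 - the small unanimous app ranks higher
```

Under this score, ten unanimous OKs outrank a 50/50 split of 200 votes,
so sheer vote volume can't mask a genuinely contested app; whether that
matches what the community wants from QA is, of course, the open question.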

Three is trust used in the way you mention. If you vote on something
simply because someone else voted on it, your vote is redundant and,
worse, puts popularity on the same scale as technical prowess. Say you
vote OK, and then 100 folks, thinking "hey, if Jaffa says it's OK, it
must be OK", go the same way. But then some random dude discovers an
actual showstopper bug, and suddenly the question becomes whether that
issue is more widely recognised than you are popular (wtf?).

Finally, device-specific behaviour can ruin your subjective score. I
had that issue with Mirror on the N900: I had to add notes, comments
and alerts everywhere, because I systematically got thumbs-downs and
one-star ratings due to image quality (which was the result of the
physical hardware and firmware on a particular device).
Post by Andrew Flegg
http://wiki.maemo.org/Extras-testing/QA_checklist
http://wiki.maemo.org/Extras-testing#Quality_Assurance_criteria
To underline: I do want apps.meego.com to have somewhat more liberal
criteria than maemo.org had, and with the extra manpower and tools it
already has a good head start. But let's have a clear perspective
(learning not just from Extras, but also from other stores and
efforts) on what particular changes can bring, as tweaking is always
more difficult than starting with a good set of rules. Those rules
will ultimately stem from the target goal of the QA process: just
filter out glaring problems and hope community feedback takes care of
the rest, or protect users who will have the repository enabled by
default and who prefer tried-and-tested status to extra features.

Best regards,
Attila Csipa
