12:00:03 #startmeeting Mer QA meeting 26/07/2012
12:00:03 Meeting started Thu Jul 26 12:00:03 2012 UTC. The chair is E-P. Information about MeetBot at http://wiki.merproject.org/wiki/Meetings.
12:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic.
12:00:12 #topic Current status
12:00:28 #info QA ToDo list updated and tasks added
12:01:08 #info Test coverage draft done, https://wiki.merproject.org/wiki/Quality/Test_coverage
12:01:32 #info Test execution howto https://wiki.merproject.org/wiki/Quality/ExecuteTests
12:01:44 (this is missing some stuff, but it is a start)
12:01:59 hello
12:02:00 hmm.. that's all from me
12:02:46 for the testrunner-ui refactor.. didn't have as much time for it as I thought, so only refactored the settings
12:02:58 good work on test coverage
12:03:21 anyone willing to write Qt C++, feel free to contribute :)
12:03:38 i've been struggling with the mer release not getting out because of us not doing enough automatic qa :P
12:03:59 issues with systemd that could have been solved way earlier (and part of the blame rests with the releasing method)
12:04:12 Stskeeps: hopefully I will have something for you soon, I am creating a smoke test plan
12:04:27 yep
12:05:14 we have to discuss what would be necessary for smoke testing
12:05:15 and as a general note, I'm trying to come up with something else to do around Mer than QA
12:05:26 not much from me this time, busy with day job
12:05:39 ots replacement
12:06:14 phone call ... here now
12:06:38 features that make it different from ots:
12:07:24 - transition cost based test distribution (e.g. don't reflash the device when not needed, send a test to a device that already has the necessary image flashed)
12:07:55 - support for running non-packaged tests
12:08:26 (with ots you can also run non-packaged tests)
12:08:32 - web ui for queue observation/control, access to logs/results, and worker maintenance info
12:09:03 - ability to restart workers or the server (e.g. for upgrade) without stopping tests
12:09:42 - usage of multiple devices in a single testcase
12:10:49 yunta_: am I right that it's using BOSS workflows for logic?
12:11:34 - less interesting things (planned): pre/post device analysis, gradual results submission (live progress monitoring), testcase modification on-the-fly (for long-period testing)
12:12:03 + all the things we had & liked in ots, I hope
12:12:09 sounds good
12:12:12 we can discuss details in the next topic
12:12:16 yes, sounds good
12:12:33 lbt: no, not exactly. not even amqp yet, but I'm getting there.
12:13:41 hm
12:13:47 OK - may be worth running some of this by phaeron1 and how IMG works - I'd personally like to see some reuse of tech between the two
12:14:05 they work together
12:14:35 the test distributor / dister / whatever we call it - uses img for some operations
12:14:43 as far as I know there is no code duplication
12:14:54 nice
12:14:56 good to hear
12:15:09 does anyone have something to report for current status?
12:15:23 yunta_: lbt means the interprocess comms. but you said you can easily replace that later when we work out the amqp queuing methods
12:15:53 (minor aside - some changes pending upstream)
12:16:23 phaeron1: well, amqp can be used all right, but I'm not sure about boss workflows for the central "decision making" unit yet
12:16:52 let's move to the next topic
12:16:54 yunta_: no decision making needed :)
12:17:00 agreed
12:17:08 #topic OTS replacement
12:17:13 now you can continue :)
12:17:20 hehe
12:17:28 #info features that make it different from ots
12:17:36 #info transition cost based test distribution (e.g. don't reflash the device when not needed, send a test to a device that already has the necessary image flashed)
12:17:56 #info support for running non-packaged tests
12:17:59 #info web ui for queue observation/control, access to logs/results, and worker maintenance info
12:18:03 #info ability to restart workers or the server (e.g. for upgrade) without stopping tests
12:18:06 #info usage of multiple devices in a single testcase
12:18:11 #info less interesting things (planned): pre/post device analysis, gradual results submission (live progress monitoring), testcase modification on-the-fly (for long-period testing)
12:18:18 * yunta_ should have done it (info) himself. well.... next time
12:18:26 so yunta_ I don't think you should use BOSS workflows for any serious logic - it's not designed for that
12:18:45 yes, that's what I meant
12:19:18 it may make sense to have the decision making be a BOSS participant and do task distribution via workflows
12:19:53 the problem is, I'm not yet sure if I need any serious logic :)
12:19:55 yunta_: do you have a name for this ots replacement yet?
12:20:13 I call it test distributor, or dister (for short)
12:20:22 stupid name though
12:21:09 is it written in python?
12:21:14 ruby
12:21:36 ok
12:21:58 does it use the test results or test definition xml formats in any way?
12:21:59 I'm a ruby/emacs/gnome3/(8char)tabs&spaces person
12:22:16 ouch... 8 char
12:22:22 i like 4
12:22:27 :)
12:22:31 * Stskeeps blinks
12:22:32 there is no results processing yet
12:22:44 and that part is in img anyway :)
12:22:54 * timoph feels a war of the worlds closing in
12:22:55 E-P: it uses testrunner-lite --> eventually qa-reports
12:23:16 phaeron1: ok
12:23:49 feel free to change the xml formats if needed
12:23:57 they are not so flexible at the moment
12:24:31 yunta_: does it work standalone without boss?
12:24:42 yes
12:24:49 boss is currently one of the triggers
12:24:55 and result collectors
12:25:03 through a custom participant
12:25:05 and what is needed for creating a test run, similar stuff as in ots? (eg. image url and some settings?)
12:25:13 hm
12:25:46 if you're asking about how it works in a boss-like workflow then yes, I think we use the same input data (phaeron1?)
12:25:46 or are you going to use your own test plan format?
12:26:20 dister has 2 kinds of modules:
12:26:22 eg. how it makes decisions about what to execute and where
12:27:04 1. test case providers - know how to split your request into Tasks (minimal units of execution)
12:27:30 e.g. we have a provider that takes package names and creates one Task per package
12:28:00 we can have another one that takes names of test cases from a robotic testing unit and splits them in whatever way
12:28:22 ok
12:28:39 2. execution drivers - know how to execute Tasks on devices (vm, hardware, robots, etc).
12:29:00 e.g. we have a driver that uses testrunner-lite to execute testrunner Tasks
12:29:06 TMI I guess
12:29:16 nope, good info
12:29:34 task distribution is done regardless of provider & driver
12:29:47 good to know that it supports the old tools and formats
12:29:58 tasks know their requirements, inputs, and the name of their driver
12:30:53 btw. requirements are collected from the "Provides" field, as discussed last week (?)
12:31:15 at least for the standard package Provider, other Providers may use different sources
12:31:40 * lbt has serious reservations about use of Provides: like this
12:31:58 I'm not sure that spec is published anywhere?
12:32:41 you mean the definition of our Provides field content?
12:32:49 (blah, lost in english)
12:33:05 can somebody refer to the earlier meeting?
12:33:09 yeah, and the use of Provides -> repo -> api as a mechanism to essentially transfer data
12:33:10 just so we're on the same page
12:33:25 is that 'provides' defined in the .spec?
12:33:30 did it get discussed? I thought it was only private so far?
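The provider/driver split described above can be sketched in Ruby (the language dister is written in). This is a hypothetical illustration of the concept, not dister's actual code: the class names `PackageProvider` and `TestrunnerLiteDriver`, the `Task` struct, and the method names are all invented for the example.

```ruby
# Hypothetical sketch of dister's two module kinds, as described above.
# All names here are illustrative; they are not dister's real API.

# A Task is the minimal unit of execution: it knows its requirements,
# its inputs, and the name of the driver that should run it.
Task = Struct.new(:requirements, :inputs, :driver)

# 1. Test case providers know how to split a request into Tasks.
#    This one takes package names and creates one Task per package.
class PackageProvider
  def split(request)
    request[:packages].map do |pkg|
      Task.new([], { package: pkg }, "testrunner-lite")
    end
  end
end

# 2. Execution drivers know how to execute Tasks on devices
#    (vm, hardware, robots, etc.). This one pretends to use testrunner-lite.
class TestrunnerLiteDriver
  def execute(task, device)
    "running #{task.inputs[:package]} on #{device} via testrunner-lite"
  end
end

tasks = PackageProvider.new.split(packages: ["bluetooth-tests", "wifi-tests"])
puts tasks.length
puts TestrunnerLiteDriver.new.execute(tasks.first, "vm-01")
```

Task distribution itself sits between the two: it only sees Tasks and their requirements, so it works regardless of which provider produced a Task and which driver will run it.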
12:33:40 phaeron1: help me here :)
12:33:44 wasn't discussed last time
12:33:44 *g*
12:33:50 oh, my mistake, sorry
12:34:01 I think phaeron1 described it in one of the bugs then
12:34:09 only mentioned too
12:34:11 anyway
12:34:15 the point is
12:34:28 we mark various things using Provides: in the spec
12:34:37 they get exposed in the repo xml
12:34:47 we can query that xml and select test packages
12:35:02 that's the "method" we came up with
12:35:08 lbt has reservations
12:35:36 *nod* the goal is clear
12:35:46 I would like to see the test-definition xml extended and exposed as Provides, similar to pkgconfig() provides
12:35:47 I feel it's a bit "clever" :)
12:36:06 let the debate begin :)
12:36:12 clever can be good if it gets the job done.. :P
12:36:19 mmm
12:36:42 either way, i'd really suggest that we wiki-fy this so it's easier to discuss
12:36:49 AIUI there are test specs and they are stated in the .spec and transferred via repo .xml encoded inside the Provides: field
12:36:54 it's on the internal wiki
12:37:06 phaeron1: so tests.xml should have some tag/parameter that defines requirements?
12:37:07 mer doesn't have an internal wiki, except for IT, let's just get it out :)
12:37:10 the encoding is ... limited (as is the current need)
12:37:15 my doubt is that requirements may be "live" on the device and hence not represented correctly in the RPM database
12:37:25 like "has more than 8 gigs of ram"
12:37:41 we didn't want to publish it before there's consensus, so it is not seen as imposing a standard
12:37:48 I feel we should just include a .json file in/near the test package, import it to a DB and query the DB
12:37:49 mark it as draft and it's fine
12:38:00 i put drafty stuff up all the time :)
12:38:30 * lbt notes once again that using trac and mediawiki makes for painful sharing of docs
12:38:48 Stskeeps: and people find it and start using it, and before you know it, it is a de facto guide
12:39:01 E-P: it already does
12:39:10 http://wiki.meego.com/Quality/QA-tools/Test_packaging
12:39:12 phaeron1: yep ... but better that it's openly developed
12:39:17 phaeron1: that's fine :)
12:39:42 that's what we did with the platform sdk and it worked just fine
12:39:46 either way - i have doubts on the runtime dependencies for tests
12:39:54 http://wiki.meego.com/Quality/QA-tools/Test_package
12:39:56 just need to remember to change the wiki as the thing changes
12:39:58 as you can't easily inject into rpm dependencies
12:40:00 it builds on top of this
12:40:11 Stskeeps: Provides: qa-tests-requirement-memory-minimum-8192 <------ dister will make sure this gets routed only to the right devices
12:40:43 yunta_: hmm, so the Provides: is in the test package?
12:40:50 yes
12:40:57 backwards? :D
12:41:01 yes, a bit
12:41:06 but it should work
12:41:15 it will work
12:41:19 it should work, but you have to wonder if there are better ways to do custom attributes
12:41:25 sure
12:41:26 but so does TCP over carrier pigeon :D
12:41:34 I'd go for a custom one if I had a choice
12:42:00 i'm not against the method of having test packages indicate them, but it'd be nice to look into whether we can instead perhaps do it with associated xml files in a repo
12:42:06 as you can add custom .xml files to a repo
12:42:13 just like we add patterns
12:42:30 so my concern is that this is a mechanism to get attributes from the test packaging activity into a DB for the decision tool
12:43:27 well, or into memory, in general
12:43:54 agreed, may not be mysql
12:43:58 and as Stskeeps says, I'd favour an associated file
12:44:12 I hate XML so I said json
12:44:14 yunta_: can you file a bug for me to identify a sane way to do this on the RPM side?
12:44:23 as it's not going to be the first time we'll have this issue
12:44:25 test requirements arise in several places I guess. some of them are test-design related, some are packaging-level decisions.
12:44:27 have I understood this correctly: a test plan has a list of test cases, a test case has a requirement on a test package, and the test package then has the provides which affect the decision of where the test is executed?
12:44:58 yunta_: agreed - but if you're placing them into Provides: then you've selected packaging time as the encoding time
12:45:13 yes
12:45:19 I mean: lbt: yes
12:45:41 which is the only reason I picked up on that
12:46:06 lbt: I don't find it sexy, but it's convenient.
12:46:31 *nod* - it'll never die once you've started it :)
12:47:01 yunta_: i think he means that we decide at packaging time what requirements it has.. and isn't there a chance we'd want at times to do that later?
12:47:14 not so much the method
12:47:19 ah, later than packaging
12:47:25 so like, overriding requirements
12:47:29 yes, for example
12:47:39 mmm... I am only really objecting to the use of Provides:
12:47:44 all the rest is fine
12:47:59 ok, then i'm wondering whether packaging time is the right time ;)
12:48:23 I think (guess) overriding would be easy with obs and dister. Can't judge for other combinations.
12:48:25 lbt: we can all agree it's not sexy, though i'm struggling to find a better way, if the time is indeed packaging time
12:48:42 we might have similar problems in future with app store like things
12:48:49 "must have cd drive"
12:48:55 "cd drive", ish
12:48:56 I'd gladly get rid of the Provides convention AND the *-tests convention - and just create new fields in the spec ...
12:49:12 I feel we should just include a .json file in/near the test package
12:49:19 why should they go in the spec?
12:49:39 E-P: i'd like to take an action to see if we can come up with a better way on this
12:49:49 Stskeeps: feel free
12:49:55 but as a principle, the idea is sound, i think
12:50:04 implementation/specifics may need a bit of polishing
12:50:30 #action stskeeps to investigate if we can do rpm custom fields or otherwise relay information out of band when building a test package
12:50:33 I do like the ability to identify test requirements like this
12:51:17 requirements should be at the test case level
12:51:40 yunta_: no objections from me besides that, do you feel you can move on with implementation?
12:51:42 lbt: in the test selection process you may want to query by requirements. that may not be so easy if you have to index package content (json). I don't know how repos/obs work though.....
12:51:45 example: a bluetooth test package has 100 test cases, and 2 require a headset
12:51:59 E-P makes a good point..
12:52:14 then it should be in a json file or similar
12:52:41 Stskeeps: I can move on. The only thing we'll have to change is a single small participant and maybe the Provider.
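The routing idea debated above (encode requirements as `Provides:` tokens like `qa-tests-requirement-memory-minimum-8192`, and have dister send the test only to devices that satisfy them) could look roughly like the Ruby sketch below. Only the one token from the meeting is given in the log, so the token grammar (`<attribute>-<bound>-<value>`) and the matching rules here are assumptions, not dister's actual implementation.

```ruby
# Illustrative sketch of routing on Provides: requirement tokens, based on
# the single example from the meeting (qa-tests-requirement-memory-minimum-8192).
# The token grammar and matching logic are assumptions, not dister code.

REQ_PREFIX = "qa-tests-requirement-"

# Parse e.g. "qa-tests-requirement-memory-minimum-8192" into
# { attribute: "memory", bound: "minimum", value: 8192 }.
# Non-requirement provides (e.g. "libc.so.6") yield nil and are ignored.
def parse_requirement(provide)
  return nil unless provide.start_with?(REQ_PREFIX)
  attribute, bound, value = provide.delete_prefix(REQ_PREFIX).split("-")
  { attribute: attribute, bound: bound, value: Integer(value) }
end

# A device satisfies a requirement when its capability respects the bound.
def satisfies?(device_caps, req)
  cap = device_caps[req[:attribute]] or return false
  req[:bound] == "minimum" ? cap >= req[:value] : cap <= req[:value]
end

req = parse_requirement("qa-tests-requirement-memory-minimum-8192")
puts satisfies?({ "memory" => 16384 }, req)  # a 16 GB device qualifies
puts satisfies?({ "memory" => 4096 }, req)   # a 4 GB device does not
```

The .json-file alternative raised in the discussion would carry the same `{attribute, bound, value}` data in a sidecar file next to the test package instead of overloading the RPM `Provides:` field; the matching step would be identical.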
12:52:45 yunta_: :nod:
12:53:01 yunta_: yep - I'm just saying that repo.xml isn't the place to store requirements
12:53:11 lbt: libc.so.6 ;)
12:53:24 qa-tests-requirement-memory-maximum-1024
12:53:33 libc.so.qa-tests-requirement-memory-maximum-1024.6
12:54:16 either way, let's see where this leads us
12:54:17 dister sounds promising and it fixes many known issues that we had in ots, good work!
12:54:23 yup, good work
12:54:26 yep ... very happy
12:54:45 yeah
12:55:10 anything else to ask/discuss about dister?
12:55:23 #info currently it is called dister
12:55:27 #info dister has 2 kinds of modules
12:55:30 #info 1. test case providers - know how to split your request into Tasks (minimal units of execution)
12:55:34 #info 2. execution drivers - know how to execute Tasks on devices (vm, hardware, robots, etc).
12:55:37 #info task distribution is done regardless of provider & driver
12:55:42 #info tasks know their requirements, inputs, and the name of their driver
12:56:04 #info defining test and environment requirements is under planning
12:56:47 if not, then we are done for today
12:57:06 yunta_: thanks for your time and info
12:57:19 lol, np
12:57:27 I am looking forward to seeing dister in use
12:57:52 have a nice day everyone
12:57:56 #endmeeting