11:00:36 #startmeeting Mer QA meeting 3/4/2012
11:00:36 Meeting started Tue Apr 3 11:00:36 2012 UTC. The chair is E-P. Information about MeetBot at http://wiki.merproject.org/wiki/Meetings.
11:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic.
11:01:17 Good afternoon all o/
11:01:18 o/
11:01:36 Welcome to Mer's initial QA meeting. I propose the following agenda: current status, QA goals, tools & tests, and the next meeting
11:01:59 * Stskeeps is here, on laggy GPRS, and mostly listening
11:02:08 great :)
11:02:24 I'm here too
11:03:16 We can start with the current status, and say if you want to discuss some specific topic
11:03:27 I'll add in 'scope'
11:03:47 by which I mean I thought of 4 areas: SDK & Tools; Core; Hardware adaptations and other vendor issues; Systems
11:04:50 ok
11:05:09 #topic Current QA status
11:05:53 I have been following Mer for a couple of weeks, so I don't have much knowledge yet
11:06:23 well, Mer Core has a CI system in place; each change to a core package is submitted for review before acceptance
11:06:54 as part of that review, we use BOSS to trigger some builds in the OBS
11:07:29 that then uses the OBS build dependency capability to trigger rebuilds of other packages that depend on the package in question
11:07:40 this happens across all architectures
11:07:59 good, that is already a lot
11:08:06 if there are any failures then BOSS rejects the submission automatically
11:08:44 BOSS is not a QA system - it 'just' provides process automation. Typically it says what test suites to run and what to do based on the results.
11:09:36 I am familiar with BOSS, but I haven't used it before
11:09:37 for example, we'd like to extend the process so that when a build finishes successfully, it (BOSS) builds an image
11:10:45 and if I have understood BOSS correctly, it is easy to extend?
11:10:52 very much so
11:11:08 great
11:11:15 think of it as robust shell-scripting for systems
11:11:43 so it can do things like update Bugzilla, make commits to git, trigger flashing of images
11:12:07 anything you can do using an API, really :)
11:12:48 what is the status of signaling the changes to the vendors?
11:13:18 pipedream :)
11:13:51 we need to figure out what to say, to whom, when, and how to handle responses and non-responses
11:14:21 but this is all process definition - when we know what we want to do, BOSS will be able to do it
11:14:31 nod
11:14:47 e.g. in Apps for MeeGo we implement a voting system
11:15:01 in Mer Core we may grant some vendors veto rights
11:15:12 depending on their contributions to QA
11:15:38 that's all human-level design of how to interact as a group
11:16:05 My gut reaction is that we provide a "please test" signal
11:16:33 wait for a few hours and then prompt a human reviewer to make a decision based on collated results
11:17:17 i.e. we automate rejections for some criteria (e.g. a core build failure) and ask for intervention on others
11:17:36 as we learn we should be able to define and apply heuristics
11:18:09 that was one of the issues I posted to the mailing list: how can we be sure that the results are valid and that the tests are valid?
11:18:31 *nod* ... we have to make a call
11:19:11 collaborative testing like this seems to be unusual
11:19:20 yes, I think there will be many issues and we will solve them as we go
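The acceptance gate described above (every per-architecture build must succeed before a change is accepted, and a successful build should eventually trigger an image build) boils down to a small piece of decision logic. The sketch below only illustrates that logic in Python and is not actual BOSS participant code; the function name, the result dictionary and the architecture names are invented for the example.

    # Illustrative sketch of the gate discussed above; not real BOSS code.
    # `build_results` is assumed to map architecture names to "succeeded"/"failed".
    def review_submission(build_results):
        """Reject a submission if any per-architecture build failed,
        otherwise signal that an image build should follow."""
        failed = [arch for arch, state in build_results.items() if state != "succeeded"]
        if failed:
            return {"action": "reject",
                    "reason": "build failed on: " + ", ".join(sorted(failed))}
        return {"action": "accept", "next_step": "build_image"}

    # Example: a change that fails on one architecture is rejected automatically.
    print(review_submission({"i586": "succeeded", "armv7hl": "failed"}))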
11:20:43 is there an image-building "process" added to BOSS?
11:20:55 (or whatever BOSS's plugins are called)
11:21:25 BOSS has process definitions
11:22:17 http://autodoc.meego.com/boss/processes/CE/MW/MTF/
11:22:25 let's not get too deep into them :)
11:23:00 we have a plugin in OBS which triggers a named 'obs' process on certain events
11:23:25 the process calls a plugin (participant) to look at the event and decide how to handle it
11:23:43 typically it uses a lookup based on the project name
11:23:58 and then runs a project-specific process
11:24:12 each participant is a bit of Python
11:24:19 ok
11:24:29 and it can do anything Python can do
11:24:33 we can discuss the details later in the #mer channel
11:25:11 So... proper testing is done with something like testrunner-lite
11:25:31 it's just an executor
11:25:48 to help get a unified result format, etc.
11:26:05 yes, the testrunner doesn't specify any test framework
11:26:26 let me wrap up the current status, and then we can move on
11:26:43 #info Mer Core has a CI system in place; each change to a core package is submitted for review before acceptance
11:27:31 #info A changed package is compiled for all architectures in OBS, and rejected if any of the builds fail
11:28:02 #info BOSS is used to handle this process
11:28:24 #info The signaling system is not implemented and requires planning
11:28:33 anything else to add to the current status?
11:29:38 #info Automated image generation after each build and before acceptance is almost ready
11:30:59 thanks
11:31:22 let's move on
11:31:48 #topic QA Scope
11:32:25 lbt: you mentioned the 4 areas
11:32:52 yes, I thought of: SDK & Tools; Core; Hardware adaptations and other vendor issues; Systems
11:33:28 the HA/vendor area covers the bit we mentioned about triggering some kind of distributed QA
11:33:56 SDK & Tools is probably the easiest since they can run in a VM :)
11:34:11 yep
11:34:15 Systems is very hard since there are complex interactions and very large datasets
11:34:43 can you expand on Systems a bit, what do you mean by that?
11:34:43 e.g. testing a change to a BOSS process needs a test using OBS, IMG, Bugzilla and testrunner
11:36:01 usually we end up testing in production but using test projects - not always ideal
11:36:50 Right now I'm working on OBS deployment testing
11:37:08 so creating VMs, running a deployment, building a package
11:37:30 but this needs to work with the latest scratchbox2
11:37:43 ok
11:37:46 and vice versa - we need to test sb2 before deploying to production
11:37:48 all nasty
11:38:22 so ... Systems are in scope mainly to say they're nasty and we should special-case them :)
11:39:15 * lbt wonders again why he actually *chose* to do systems...
11:39:42 I wouldn't separate system testing out on its own, it could be part of the tools
11:40:20 there are definitely parts that can be tested in isolation
11:40:29 sure
11:40:34 OBS has some kind of test suite that should be run
11:41:15 we could define dependencies between the tools, and when we change a tool, we would test the tool + all its dependencies
11:41:31 meaning, in the system-wide testing
11:41:39 just an idea
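The "test the tool plus everything that depends on it" idea just above can be expressed as a tiny dependency walk. The dependency map below is made up purely for illustration; it is not an agreed Mer tool graph.

    # Hypothetical map: tool -> tools it directly depends on (examples only).
    DEPENDS_ON = {
        "boss": ["obs", "img", "bugzilla", "testrunner-lite"],
        "img": ["obs"],
        "obs": ["scratchbox2"],
    }

    def tools_to_retest(changed_tool):
        """Return the changed tool plus every tool that transitively depends on it."""
        affected = {changed_tool}
        grew = True
        while grew:
            grew = False
            for tool, deps in DEPENDS_ON.items():
                if tool not in affected and affected.intersection(deps):
                    affected.add(tool)
                    grew = True
        return sorted(affected)

    # Example: a scratchbox2 change would pull in OBS, IMG and BOSS testing too.
    print(tools_to_retest("scratchbox2"))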
11:42:42 how do you see the core testing? as Stskeeps posted to the mailing list, there is no reference hardware or adaptation to run tests for Mer Core on
11:42:44 *nod* ... I feel it's a different area to 'device testing', which is what the other 3 relate to
11:43:13 that is true if you look at it from that point of view :)
11:43:37 I feel we should set up a VM-based reference HA which is not part of Core and also acts as a template for vendors
11:44:01 Nemo and Vivaldi could support this too
11:44:19 and that could be on the early notification list
11:45:05 possibly
11:45:24 since it's a VM it may give false failures
11:45:48 I'd make it a peer at 'level 1' notification
11:46:19 any vendors who want to hold back until some sanity checks are done can wait for level 2 notices
11:46:41 but I'd hope to see some participate at level 1 as well
11:47:08 would be good
11:47:24 eventually, if the VM turns out to actually catch a lot of problems, we can optimise
11:47:36 but we may find it's no substitute for hardware
11:47:43 that kind of VM HA could be the way we start to build the QA system
11:47:51 100%
11:48:27 I'd like us to have a step called "flash the VM" :)
11:48:44 to reinforce that it's not special - just a virtual device
11:49:01 I have built something like that :)
11:49:11 * lbt notices time ...
11:49:12 using OTS and KVM
11:49:19 uh, we are running out of time soon
11:49:22 yep - perfect
11:49:37 #info 4 areas: SDK & Tools; Core; Hardware adaptations and other vendor issues; Systems
11:49:55 are there any other areas BTW?
11:50:16 we can start with those and define more later if needed
11:50:28 or change them
11:50:35 OK
11:50:57 let's skip the tools topic for today
11:51:08 we can use the tools from MeeGo pretty well
11:51:22 and define what we have to do next
11:51:49 #topic QA tools
11:52:34 #info Using QA tools from MeeGo, like testrunner-lite, OTS, test-definition, Testplanner, QA-Reports
11:52:55 timoph: something to add briefly?
11:54:58 for the last topic, let's discuss what to do next
11:55:29 Deciding on a per-package basis what 'test plan' to run.
11:56:09 we can choose a couple of packages first and start the automation with them
11:56:17 yep - I'd like to pick a package, set up some tests for it and run them
11:56:26 #topic Next in QA
11:56:49 so what do we need to achieve that?
11:57:02 A Mer VM HA?
11:57:07 wouldn't it be better to start with some of the mcts/blts/mwts?
11:57:40 mwts requires Qt, and the tests haven't been tested with Qt5
11:58:08 not an issue atm
11:58:09 (please expand acronyms in the writeup)
11:58:45 mcts (MeeGo core test suite)
11:58:54 blts (basic layer test suite)
11:59:01 mwts (middleware test suite)
11:59:17 or something like that
11:59:21 do these need a 'device' to run on?
11:59:30 * Stskeeps has to go, bbl
11:59:36 i.e. the Mer VM HA?
11:59:46 not all of them, you can run them in the chroot
11:59:54 but the results might be weird :)
12:00:47 it feels like there are some parallel activities we can have
12:01:31 I have tried the mwts and blts tests, and they are working on Mer
12:01:48 in the SDK?
12:01:51 yes
12:02:02 could you document how to do that?
12:02:09 action?
12:02:14 I have done it
12:02:29 http://wiki.merproject.org/wiki/Quality/ExecuteTests
12:02:53 :D
12:02:54 info then :D
12:02:59 #info http://wiki.merproject.org/wiki/Quality/ExecuteTests
12:03:00 :)
12:03:35 one task could be to investigate what is needed for a Mer VM HA
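The mwts/blts runs mentioned above go through testrunner-lite, which only executes a test definition and writes results in a unified format. A wrapper around that could look roughly like the sketch below; the command-line flags, the result XML layout and the test definition path are all assumptions here (the ExecuteTests wiki page above is the authoritative description), so treat this as a sketch rather than a recipe.

    # Sketch of driving testrunner-lite from Python. The flags and the result
    # schema are assumptions -- check the ExecuteTests wiki page for real usage.
    import subprocess
    import xml.etree.ElementTree as ET

    def run_suite(test_definition, results_file="results.xml"):
        """Execute one test definition and return (passed, failed) case counts."""
        # Assumed invocation: -f <test definition XML>, -o <result XML>.
        subprocess.run(["testrunner-lite", "-f", test_definition, "-o", results_file],
                       check=False)

        # Assumed result layout: <case ... result="PASS|FAIL"> elements in the output.
        tree = ET.parse(results_file)
        results = [case.get("result", "") for case in tree.iter("case")]
        passed = sum(1 for r in results if r.upper() == "PASS")
        failed = sum(1 for r in results if r.upper() == "FAIL")
        return passed, failed

    if __name__ == "__main__":
        # Hypothetical path; the real location depends on the test package.
        print(run_suite("/usr/share/blts-bluetooth-tests/tests.xml"))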
12:04:09 so, that comment on the test plan...
12:04:25 "sudo zypper in blts-bluetooth-tests"
12:04:48 how do I know that the blts package exists?
12:04:58 and is that the only bluetooth test package?
12:05:30 that is one problem: how we add the tests to the image
12:05:50 so I would like to see some kind of outline of the test:package mappings
12:06:18 so that's a design/document task
12:06:31 yes
12:06:51 lbt: test:package usually doesn't work, one test usually goes through multiple layers
12:07:07 test coverage etc.
12:07:22 and one change can affect many packages
12:07:25 *nod* ... comes back to the same point... how BOSS makes a test-plan
12:09:28 we need to define somewhere what needs to be executed when a package changes
12:10:04 for example in the spec file, or in a separate XML file or similar
12:10:54 *nod*
12:11:07 but we're already over time
12:11:21 I think this is an action/info area for discussion in #mer/ml
12:11:39 let's have it as an action
12:12:07 #action We need to define how BOSS makes a test-plan
12:12:24 so many issues to discuss and so little time
12:12:33 yep - it's a crucial area
12:12:56 I will propose the next meeting time on the mailing list, ok?
12:13:07 yep - this has been useful
12:13:17 thanks for a good meeting
12:13:23 #action Propose next meeting time
12:13:36 thanks everyone
12:13:48 ty
12:13:56 let's continue the chat in #mer
12:14:06 #endmeeting
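The #action on how BOSS makes a test-plan could start from nothing more elaborate than a declarative package-to-test mapping that BOSS consults when a package changes. Everything in the sketch below is hypothetical: the mapping format, the fallback suite and most of the test package names are invented (only blts-bluetooth-tests was mentioned in the meeting).

    # Hypothetical package -> test-package mapping for a test-plan step.
    # None of these entries are an agreed Mer convention.
    TEST_PLAN = {
        "bluez": ["blts-bluetooth-tests"],
        "connman": ["mwts-network-tests"],
    }
    DEFAULT_TESTS = ["mcts-core-tests"]  # invented fallback for unmapped packages

    def test_packages_for(changed_packages):
        """Collect the test packages to add to the image for a set of changed packages."""
        tests = set()
        for package in changed_packages:
            tests.update(TEST_PLAN.get(package, DEFAULT_TESTS))
        return sorted(tests)

    # Example: a submission touching bluez and connman selects two test suites,
    # which would then be installed into the image (e.g. with zypper) and executed.
    print(test_packages_for(["bluez", "connman"]))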